| Column | Type | Values |
| --- | --- | --- |
| problem_id | string | lengths 18 to 22 |
| source | string | 1 distinct value |
| task_type | string | 1 distinct value |
| in_source_id | string | lengths 13 to 58 |
| prompt | string | lengths 1.1k to 25.4k |
| golden_diff | string | lengths 145 to 5.13k |
| verification_info | string | lengths 582 to 39.1k |
| num_tokens | int64 | 271 to 4.1k |
| num_tokens_diff | int64 | 47 to 1.02k |
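Each record pairs a repository-repair prompt with the reference patch and the material needed to check it; the sample rows below list their fields in the column order above. If the collection is published on the Hugging Face Hub, a record can be pulled and inspected with the `datasets` library. This is a minimal sketch only: the repository ID is a placeholder assumption, since the rows expose `rasdani/github-patches` only as their upstream `source`, not as the dataset's own Hub ID.

```python
# Sketch: load the dataset and inspect one record.
# ASSUMPTION: "your-namespace/gh-patches-debug" is a placeholder repo ID;
# substitute the actual Hub ID under which this dataset is hosted.
from datasets import load_dataset

ds = load_dataset("your-namespace/gh-patches-debug", split="train")

row = ds[0]
print(row["problem_id"])          # e.g. "gh_patches_debug_6236"
print(row["in_source_id"])        # upstream repo + issue/PR id, e.g. "engnadeau__pybotics-18"
print(row["num_tokens"], row["num_tokens_diff"])
print(row["prompt"][:300])        # issue text, relevant file contents, patch instructions
print(row["golden_diff"][:300])   # reference fix in unified git-diff format
```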
problem_id: gh_patches_debug_6236
source: rasdani/github-patches
task_type: git_diff
in_source_id: engnadeau__pybotics-18
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Update examples Examples are now out of sync with current codebase. Potential use for iPython? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `examples/example_robot.py` Content: ``` 1 import copy 2 3 import pybotics as py 4 import numpy as np 5 6 # set numpy print options 7 np.set_printoptions(precision=3) 8 np.set_printoptions(suppress=True) 9 10 # create robot 11 model = np.loadtxt('ur10-mdh.csv', delimiter=',') 12 robot = py.Robot(model) 13 14 print('Robot Model:\n{}\n'.format(robot.robot_model)) 15 16 # demonstrate forward kinematics 17 joints = [0] * robot.num_dof() 18 pose = robot.fk(joints) 19 20 print('Pose:\n{}\n'.format(pose)) 21 22 # demonstrate inverse kinematics 23 new_joints = robot.ik(pose) 24 print('Solved Joints:\n{}\n'.format(new_joints)) 25 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/examples/example_robot.py b/examples/example_robot.py deleted file mode 100644 --- a/examples/example_robot.py +++ /dev/null @@ -1,24 +0,0 @@ -import copy - -import pybotics as py -import numpy as np - -# set numpy print options -np.set_printoptions(precision=3) -np.set_printoptions(suppress=True) - -# create robot -model = np.loadtxt('ur10-mdh.csv', delimiter=',') -robot = py.Robot(model) - -print('Robot Model:\n{}\n'.format(robot.robot_model)) - -# demonstrate forward kinematics -joints = [0] * robot.num_dof() -pose = robot.fk(joints) - -print('Pose:\n{}\n'.format(pose)) - -# demonstrate inverse kinematics -new_joints = robot.ik(pose) -print('Solved Joints:\n{}\n'.format(new_joints))
{"golden_diff": "diff --git a/examples/example_robot.py b/examples/example_robot.py\ndeleted file mode 100644\n--- a/examples/example_robot.py\n+++ /dev/null\n@@ -1,24 +0,0 @@\n-import copy\n-\n-import pybotics as py\n-import numpy as np\n-\n-# set numpy print options\n-np.set_printoptions(precision=3)\n-np.set_printoptions(suppress=True)\n-\n-# create robot\n-model = np.loadtxt('ur10-mdh.csv', delimiter=',')\n-robot = py.Robot(model)\n-\n-print('Robot Model:\\n{}\\n'.format(robot.robot_model))\n-\n-# demonstrate forward kinematics\n-joints = [0] * robot.num_dof()\n-pose = robot.fk(joints)\n-\n-print('Pose:\\n{}\\n'.format(pose))\n-\n-# demonstrate inverse kinematics\n-new_joints = robot.ik(pose)\n-print('Solved Joints:\\n{}\\n'.format(new_joints))\n", "issue": "Update examples\nExamples are now out of sync with current codebase. Potential use for iPython?\n", "before_files": [{"content": "import copy\n\nimport pybotics as py\nimport numpy as np\n\n# set numpy print options\nnp.set_printoptions(precision=3)\nnp.set_printoptions(suppress=True)\n\n# create robot\nmodel = np.loadtxt('ur10-mdh.csv', delimiter=',')\nrobot = py.Robot(model)\n\nprint('Robot Model:\\n{}\\n'.format(robot.robot_model))\n\n# demonstrate forward kinematics\njoints = [0] * robot.num_dof()\npose = robot.fk(joints)\n\nprint('Pose:\\n{}\\n'.format(pose))\n\n# demonstrate inverse kinematics\nnew_joints = robot.ik(pose)\nprint('Solved Joints:\\n{}\\n'.format(new_joints))\n", "path": "examples/example_robot.py"}], "after_files": [{"content": null, "path": "examples/example_robot.py"}]}
num_tokens: 474
num_tokens_diff: 212
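The unlabeled JSON line in the row above is the `verification_info` field. As the example shows, it is itself a JSON string bundling the `golden_diff`, the `issue` text, and `before_files`/`after_files` lists of `{content, path}` snapshots, with `content` set to null when the patch removes a file. A sketch of unpacking it and materializing the pre-patch files follows; the scratch directory name is arbitrary and `row` refers to the record loaded in the sketch above.

```python
import json
import os

# `row` is a single record, as in the loading sketch above.
info = json.loads(row["verification_info"])
print(sorted(info.keys()))  # ['after_files', 'before_files', 'golden_diff', 'issue']

# Materialize the pre-patch snapshot in a scratch tree; applying the golden
# diff to this tree (e.g. with `git apply`) should yield the after_files state.
workdir = "scratch_repo"  # arbitrary directory name, not part of the dataset
for snapshot in info["before_files"]:
    target = os.path.join(workdir, snapshot["path"])
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, "w") as fh:
        fh.write(snapshot["content"])

# In after_files, "content" is null when the patch deletes the file outright,
# as in the pybotics example above.
for snapshot in info["after_files"]:
    print(snapshot["path"], "deleted" if snapshot["content"] is None else "modified")
```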
problem_id: gh_patches_debug_10739
source: rasdani/github-patches
task_type: git_diff
in_source_id: easybuilders__easybuild-easyblocks-2326
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- R package modules have priority over user-installed packages R has multiple variables that control the library path, `R_LIBS`, `R_LIBS_USER` and `R_LIBS_SITE`. The path can be inspected with `.libPaths()` and is made up of in order of the entries in `R_LIBS`, `R_LIBS_USER` and `R_LIBS_SITE` and last the R installation itself. Our R package modules currently prepend their path to `R_LIBS` similar to how `PYTHONPATH` is handled, but this means that the user's directory (typically specified in `~/.Renviron` or the default) is not at the front of the library path. This means that users cannot override system-installed packages and more importantly, `install.packages` only ever considers the first entry in the library path unless you explicitly specify a `lib=` to install to. Some installation functions like `install_github` doesn't even have that customization point. I suggest that we alter our R packages to prepend to `R_LIBS_SITE` instead, it seems to yield the preferred order for an end-user. Our R installations do not have a bundled `site-library` directory so the environment variable is currently unused. Upstream documentation on how the paths are constructed: https://stat.ethz.ch/R-manual/R-devel/library/base/html/libPaths.html --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `easybuild/easyblocks/generic/rpackage.py` Content: ``` 1 ## 2 # Copyright 2009-2021 Ghent University 3 # 4 # This file is part of EasyBuild, 5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en), 6 # with support of Ghent University (http://ugent.be/hpc), 7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be), 8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en) 9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en). 10 # 11 # https://github.com/easybuilders/easybuild 12 # 13 # EasyBuild is free software: you can redistribute it and/or modify 14 # it under the terms of the GNU General Public License as published by 15 # the Free Software Foundation v2. 16 # 17 # EasyBuild is distributed in the hope that it will be useful, 18 # but WITHOUT ANY WARRANTY; without even the implied warranty of 19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 # GNU General Public License for more details. 21 # 22 # You should have received a copy of the GNU General Public License 23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>. 
24 ## 25 """ 26 EasyBuild support for building and installing R packages, implemented as an easyblock 27 28 @author: Stijn De Weirdt (Ghent University) 29 @author: Dries Verdegem (Ghent University) 30 @author: Kenneth Hoste (Ghent University) 31 @author: Jens Timmerman (Ghent University) 32 @author: Toon Willems (Ghent University) 33 @author: Balazs Hajgato (Vrije Universiteit Brussel) 34 """ 35 import os 36 37 from easybuild.easyblocks.r import EXTS_FILTER_R_PACKAGES, EB_R 38 from easybuild.easyblocks.generic.configuremake import check_config_guess, obtain_config_guess 39 from easybuild.framework.easyconfig import CUSTOM 40 from easybuild.framework.extensioneasyblock import ExtensionEasyBlock 41 from easybuild.tools.build_log import EasyBuildError 42 from easybuild.tools.filetools import mkdir, copy_file 43 from easybuild.tools.run import run_cmd, parse_log_for_error 44 45 46 def make_R_install_option(opt, values, cmdline=False): 47 """ 48 Make option list for install.packages, to specify in R environment. 49 """ 50 txt = "" 51 if values: 52 if cmdline: 53 txt = " --%s=\"%s" % (opt, values[0]) 54 else: 55 txt = "%s=c(\"%s" % (opt, values[0]) 56 for i in values[1:]: 57 txt += " %s" % i 58 if cmdline: 59 txt += "\"" 60 else: 61 txt += "\")" 62 return txt 63 64 65 class RPackage(ExtensionEasyBlock): 66 """ 67 Install an R package as a separate module, or as an extension. 68 """ 69 70 @staticmethod 71 def extra_options(extra_vars=None): 72 """Extra easyconfig parameters specific to RPackage.""" 73 extra_vars = ExtensionEasyBlock.extra_options(extra_vars=extra_vars) 74 extra_vars.update({ 75 'exts_subdir': ['', "Subdirectory where R extensions should be installed info", CUSTOM], 76 'unpack_sources': [False, "Unpack sources before installation", CUSTOM], 77 }) 78 return extra_vars 79 80 def __init__(self, *args, **kwargs): 81 """Initliaze RPackage-specific class variables.""" 82 83 super(RPackage, self).__init__(*args, **kwargs) 84 85 self.configurevars = [] 86 self.configureargs = [] 87 self.ext_src = None 88 89 def make_r_cmd(self, prefix=None): 90 """Create a command to run in R to install an R package.""" 91 confvars = "confvars" 92 confargs = "confargs" 93 confvarslist = make_R_install_option(confvars, self.configurevars) 94 confargslist = make_R_install_option(confargs, self.configureargs) 95 confvarsstr = "" 96 if confvarslist: 97 confvarslist = confvarslist + "; names(%s)=\"%s\"" % (confvars, self.name) 98 confvarsstr = ", configure.vars=%s" % confvars 99 confargsstr = "" 100 if confargslist: 101 confargslist = confargslist + "; names(%s)=\"%s\"" % (confargs, self.name) 102 confargsstr = ", configure.args=%s" % confargs 103 104 if prefix: 105 prefix = '"%s", ' % prefix 106 else: 107 prefix = '' 108 109 r_cmd = """ 110 options(repos=c(CRAN="http://www.freestatistics.org/cran")) 111 %s 112 %s 113 install.packages("%s", %s dependencies = FALSE %s%s) 114 """ % (confvarslist, confargslist, self.name, prefix, confvarsstr, confargsstr) 115 cmd = "%s R -q --no-save %s" % (self.cfg['preinstallopts'], self.cfg['installopts']) 116 117 self.log.debug("make_r_cmd returns %s with input %s" % (cmd, r_cmd)) 118 119 return (cmd, r_cmd) 120 121 def make_cmdline_cmd(self, prefix=None): 122 """Create a command line to install an R package.""" 123 confvars = "" 124 if self.configurevars: 125 confvars = make_R_install_option("configure-vars", self.configurevars, cmdline=True) 126 confargs = "" 127 if self.configureargs: 128 confargs = make_R_install_option("configure-args", self.configureargs, cmdline=True) 
129 130 if prefix: 131 prefix = '--library=%s' % prefix 132 else: 133 prefix = '' 134 135 if self.start_dir: 136 loc = os.path.join(self.ext_dir or os.path.sep, self.start_dir) 137 else: 138 loc = self.ext_dir or self.ext_src 139 140 cmd = ' '.join([ 141 self.cfg['preinstallopts'], 142 "R CMD INSTALL", 143 loc, 144 confargs, 145 confvars, 146 prefix, 147 '--no-clean-on-error', 148 self.cfg['installopts'], 149 ]) 150 151 self.log.debug("make_cmdline_cmd returns %s" % cmd) 152 return cmd, None 153 154 def configure_step(self): 155 """No configuration for installing R packages.""" 156 pass 157 158 def build_step(self): 159 """No separate build step for R packages.""" 160 pass 161 162 def install_R_package(self, cmd, inp=None): 163 """Install R package as specified, and check for errors.""" 164 165 cmdttdouterr, _ = run_cmd(cmd, log_all=True, simple=False, inp=inp, regexp=False) 166 167 cmderrors = parse_log_for_error(cmdttdouterr, regExp="^ERROR:") 168 if cmderrors: 169 cmd = "R -q --no-save" 170 stdin = """ 171 remove.library(%s) 172 """ % self.name 173 # remove package if errors were detected 174 # it's possible that some of the dependencies failed, but the package itself was installed 175 run_cmd(cmd, log_all=False, log_ok=False, simple=False, inp=stdin, regexp=False) 176 raise EasyBuildError("Errors detected during installation of R package %s!", self.name) 177 else: 178 self.log.debug("R package %s installed succesfully" % self.name) 179 180 def update_config_guess(self, path): 181 """Update any config.guess found in specified directory""" 182 for config_guess_dir in (root for root, _, files in os.walk(path) if 'config.guess' in files): 183 config_guess = os.path.join(config_guess_dir, 'config.guess') 184 if not check_config_guess(config_guess): 185 updated_config_guess = obtain_config_guess() 186 if updated_config_guess: 187 self.log.debug("Replacing outdated %s with more recent %s", config_guess, updated_config_guess) 188 copy_file(updated_config_guess, config_guess) 189 else: 190 raise EasyBuildError("Failed to obtain updated config.guess") 191 192 def install_step(self): 193 """Install procedure for R packages.""" 194 # Update config.guess if the package was extracted 195 if self.start_dir: 196 self.update_config_guess(self.start_dir) 197 cmd, stdin = self.make_cmdline_cmd(prefix=os.path.join(self.installdir, self.cfg['exts_subdir'])) 198 self.install_R_package(cmd, inp=stdin) 199 200 def run(self): 201 """Install R package as an extension.""" 202 203 # determine location 204 if isinstance(self.master, EB_R): 205 # extension is being installed as part of an R installation/module 206 (out, _) = run_cmd("R RHOME", log_all=True, simple=False) 207 rhome = out.strip() 208 lib_install_prefix = os.path.join(rhome, 'library') 209 else: 210 # extension is being installed in a separate installation prefix 211 lib_install_prefix = os.path.join(self.installdir, self.cfg['exts_subdir']) 212 mkdir(lib_install_prefix, parents=True) 213 214 if self.src: 215 super(RPackage, self).run(unpack_src=True) 216 self.ext_src = self.src 217 self.update_config_guess(self.ext_dir) 218 self.log.debug("Installing R package %s version %s." % (self.name, self.version)) 219 cmd, stdin = self.make_cmdline_cmd(prefix=lib_install_prefix) 220 else: 221 if self.patches: 222 raise EasyBuildError("Cannot patch R package %s as no explicit source is given!", self.name) 223 self.log.debug("Installing most recent version of R package %s (source not found)." 
% self.name) 224 cmd, stdin = self.make_r_cmd(prefix=lib_install_prefix) 225 226 self.install_R_package(cmd, inp=stdin) 227 228 def sanity_check_step(self, *args, **kwargs): 229 """ 230 Custom sanity check for R packages 231 """ 232 return super(RPackage, self).sanity_check_step(EXTS_FILTER_R_PACKAGES, *args, **kwargs) 233 234 def make_module_extra(self): 235 """Add install path to R_LIBS""" 236 # prepend R_LIBS with install path 237 extra = self.module_generator.prepend_paths("R_LIBS", [self.cfg['exts_subdir']]) 238 return super(RPackage, self).make_module_extra(extra) 239 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/easybuild/easyblocks/generic/rpackage.py b/easybuild/easyblocks/generic/rpackage.py --- a/easybuild/easyblocks/generic/rpackage.py +++ b/easybuild/easyblocks/generic/rpackage.py @@ -232,7 +232,7 @@ return super(RPackage, self).sanity_check_step(EXTS_FILTER_R_PACKAGES, *args, **kwargs) def make_module_extra(self): - """Add install path to R_LIBS""" - # prepend R_LIBS with install path - extra = self.module_generator.prepend_paths("R_LIBS", [self.cfg['exts_subdir']]) + """Add install path to R_LIBS_SITE""" + # prepend R_LIBS_SITE with install path + extra = self.module_generator.prepend_paths("R_LIBS_SITE", [self.cfg['exts_subdir']]) return super(RPackage, self).make_module_extra(extra)
{"golden_diff": "diff --git a/easybuild/easyblocks/generic/rpackage.py b/easybuild/easyblocks/generic/rpackage.py\n--- a/easybuild/easyblocks/generic/rpackage.py\n+++ b/easybuild/easyblocks/generic/rpackage.py\n@@ -232,7 +232,7 @@\n return super(RPackage, self).sanity_check_step(EXTS_FILTER_R_PACKAGES, *args, **kwargs)\n \n def make_module_extra(self):\n- \"\"\"Add install path to R_LIBS\"\"\"\n- # prepend R_LIBS with install path\n- extra = self.module_generator.prepend_paths(\"R_LIBS\", [self.cfg['exts_subdir']])\n+ \"\"\"Add install path to R_LIBS_SITE\"\"\"\n+ # prepend R_LIBS_SITE with install path\n+ extra = self.module_generator.prepend_paths(\"R_LIBS_SITE\", [self.cfg['exts_subdir']])\n return super(RPackage, self).make_module_extra(extra)\n", "issue": "R package modules have priority over user-installed packages\nR has multiple variables that control the library path, `R_LIBS`, `R_LIBS_USER` and `R_LIBS_SITE`.\r\n\r\nThe path can be inspected with `.libPaths()` and is made up of in order of the entries in `R_LIBS`, `R_LIBS_USER` and `R_LIBS_SITE` and last the R installation itself.\r\n\r\nOur R package modules currently prepend their path to `R_LIBS` similar to how `PYTHONPATH` is handled, but this means that the user's directory (typically specified in `~/.Renviron` or the default) is not at the front of the library path.\r\n\r\nThis means that users cannot override system-installed packages and more importantly, `install.packages` only ever considers the first entry in the library path unless you explicitly specify a `lib=` to install to. Some installation functions like `install_github` doesn't even have that customization point.\r\n\r\nI suggest that we alter our R packages to prepend to `R_LIBS_SITE` instead, it seems to yield the preferred order for an end-user.\r\n\r\nOur R installations do not have a bundled `site-library` directory so the environment variable is currently unused.\r\n\r\nUpstream documentation on how the paths are constructed: https://stat.ethz.ch/R-manual/R-devel/library/base/html/libPaths.html\n", "before_files": [{"content": "##\n# Copyright 2009-2021 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing R packages, implemented as an easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Jens Timmerman (Ghent University)\n@author: Toon Willems (Ghent University)\n@author: Balazs Hajgato (Vrije Universiteit Brussel)\n\"\"\"\nimport os\n\nfrom easybuild.easyblocks.r import EXTS_FILTER_R_PACKAGES, EB_R\nfrom easybuild.easyblocks.generic.configuremake import check_config_guess, obtain_config_guess\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.framework.extensioneasyblock import ExtensionEasyBlock\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.filetools import mkdir, copy_file\nfrom easybuild.tools.run import run_cmd, parse_log_for_error\n\n\ndef make_R_install_option(opt, values, cmdline=False):\n \"\"\"\n Make option list for install.packages, to specify in R environment.\n \"\"\"\n txt = \"\"\n if values:\n if cmdline:\n txt = \" --%s=\\\"%s\" % (opt, values[0])\n else:\n txt = \"%s=c(\\\"%s\" % (opt, values[0])\n for i in values[1:]:\n txt += \" %s\" % i\n if cmdline:\n txt += \"\\\"\"\n else:\n txt += \"\\\")\"\n return txt\n\n\nclass RPackage(ExtensionEasyBlock):\n \"\"\"\n Install an R package as a separate module, or as an extension.\n \"\"\"\n\n @staticmethod\n def extra_options(extra_vars=None):\n \"\"\"Extra easyconfig parameters specific to RPackage.\"\"\"\n extra_vars = ExtensionEasyBlock.extra_options(extra_vars=extra_vars)\n extra_vars.update({\n 'exts_subdir': ['', \"Subdirectory where R extensions should be installed info\", CUSTOM],\n 'unpack_sources': [False, \"Unpack sources before installation\", CUSTOM],\n })\n return extra_vars\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initliaze RPackage-specific class variables.\"\"\"\n\n super(RPackage, self).__init__(*args, **kwargs)\n\n self.configurevars = []\n self.configureargs = []\n self.ext_src = None\n\n def make_r_cmd(self, prefix=None):\n \"\"\"Create a command to run in R to install an R package.\"\"\"\n confvars = \"confvars\"\n confargs = \"confargs\"\n confvarslist = make_R_install_option(confvars, self.configurevars)\n confargslist = make_R_install_option(confargs, self.configureargs)\n confvarsstr = \"\"\n if confvarslist:\n confvarslist = confvarslist + \"; names(%s)=\\\"%s\\\"\" % (confvars, self.name)\n confvarsstr = \", configure.vars=%s\" % confvars\n confargsstr = \"\"\n if confargslist:\n confargslist = confargslist + \"; names(%s)=\\\"%s\\\"\" % (confargs, self.name)\n confargsstr = \", configure.args=%s\" % confargs\n\n if prefix:\n prefix = '\"%s\", ' % prefix\n else:\n prefix = ''\n\n r_cmd = \"\"\"\n options(repos=c(CRAN=\"http://www.freestatistics.org/cran\"))\n %s\n %s\n install.packages(\"%s\", %s dependencies = FALSE %s%s)\n \"\"\" % (confvarslist, confargslist, self.name, prefix, confvarsstr, confargsstr)\n cmd = \"%s R -q --no-save %s\" % (self.cfg['preinstallopts'], self.cfg['installopts'])\n\n self.log.debug(\"make_r_cmd returns %s with input %s\" % (cmd, r_cmd))\n\n return (cmd, r_cmd)\n\n def make_cmdline_cmd(self, prefix=None):\n \"\"\"Create a command line to install an R package.\"\"\"\n confvars = \"\"\n if self.configurevars:\n confvars = make_R_install_option(\"configure-vars\", self.configurevars, cmdline=True)\n confargs = \"\"\n if self.configureargs:\n confargs = make_R_install_option(\"configure-args\", self.configureargs, 
cmdline=True)\n\n if prefix:\n prefix = '--library=%s' % prefix\n else:\n prefix = ''\n\n if self.start_dir:\n loc = os.path.join(self.ext_dir or os.path.sep, self.start_dir)\n else:\n loc = self.ext_dir or self.ext_src\n\n cmd = ' '.join([\n self.cfg['preinstallopts'],\n \"R CMD INSTALL\",\n loc,\n confargs,\n confvars,\n prefix,\n '--no-clean-on-error',\n self.cfg['installopts'],\n ])\n\n self.log.debug(\"make_cmdline_cmd returns %s\" % cmd)\n return cmd, None\n\n def configure_step(self):\n \"\"\"No configuration for installing R packages.\"\"\"\n pass\n\n def build_step(self):\n \"\"\"No separate build step for R packages.\"\"\"\n pass\n\n def install_R_package(self, cmd, inp=None):\n \"\"\"Install R package as specified, and check for errors.\"\"\"\n\n cmdttdouterr, _ = run_cmd(cmd, log_all=True, simple=False, inp=inp, regexp=False)\n\n cmderrors = parse_log_for_error(cmdttdouterr, regExp=\"^ERROR:\")\n if cmderrors:\n cmd = \"R -q --no-save\"\n stdin = \"\"\"\n remove.library(%s)\n \"\"\" % self.name\n # remove package if errors were detected\n # it's possible that some of the dependencies failed, but the package itself was installed\n run_cmd(cmd, log_all=False, log_ok=False, simple=False, inp=stdin, regexp=False)\n raise EasyBuildError(\"Errors detected during installation of R package %s!\", self.name)\n else:\n self.log.debug(\"R package %s installed succesfully\" % self.name)\n\n def update_config_guess(self, path):\n \"\"\"Update any config.guess found in specified directory\"\"\"\n for config_guess_dir in (root for root, _, files in os.walk(path) if 'config.guess' in files):\n config_guess = os.path.join(config_guess_dir, 'config.guess')\n if not check_config_guess(config_guess):\n updated_config_guess = obtain_config_guess()\n if updated_config_guess:\n self.log.debug(\"Replacing outdated %s with more recent %s\", config_guess, updated_config_guess)\n copy_file(updated_config_guess, config_guess)\n else:\n raise EasyBuildError(\"Failed to obtain updated config.guess\")\n\n def install_step(self):\n \"\"\"Install procedure for R packages.\"\"\"\n # Update config.guess if the package was extracted\n if self.start_dir:\n self.update_config_guess(self.start_dir)\n cmd, stdin = self.make_cmdline_cmd(prefix=os.path.join(self.installdir, self.cfg['exts_subdir']))\n self.install_R_package(cmd, inp=stdin)\n\n def run(self):\n \"\"\"Install R package as an extension.\"\"\"\n\n # determine location\n if isinstance(self.master, EB_R):\n # extension is being installed as part of an R installation/module\n (out, _) = run_cmd(\"R RHOME\", log_all=True, simple=False)\n rhome = out.strip()\n lib_install_prefix = os.path.join(rhome, 'library')\n else:\n # extension is being installed in a separate installation prefix\n lib_install_prefix = os.path.join(self.installdir, self.cfg['exts_subdir'])\n mkdir(lib_install_prefix, parents=True)\n\n if self.src:\n super(RPackage, self).run(unpack_src=True)\n self.ext_src = self.src\n self.update_config_guess(self.ext_dir)\n self.log.debug(\"Installing R package %s version %s.\" % (self.name, self.version))\n cmd, stdin = self.make_cmdline_cmd(prefix=lib_install_prefix)\n else:\n if self.patches:\n raise EasyBuildError(\"Cannot patch R package %s as no explicit source is given!\", self.name)\n self.log.debug(\"Installing most recent version of R package %s (source not found).\" % self.name)\n cmd, stdin = self.make_r_cmd(prefix=lib_install_prefix)\n\n self.install_R_package(cmd, inp=stdin)\n\n def sanity_check_step(self, *args, **kwargs):\n \"\"\"\n 
Custom sanity check for R packages\n \"\"\"\n return super(RPackage, self).sanity_check_step(EXTS_FILTER_R_PACKAGES, *args, **kwargs)\n\n def make_module_extra(self):\n \"\"\"Add install path to R_LIBS\"\"\"\n # prepend R_LIBS with install path\n extra = self.module_generator.prepend_paths(\"R_LIBS\", [self.cfg['exts_subdir']])\n return super(RPackage, self).make_module_extra(extra)\n", "path": "easybuild/easyblocks/generic/rpackage.py"}], "after_files": [{"content": "##\n# Copyright 2009-2021 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing R packages, implemented as an easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Jens Timmerman (Ghent University)\n@author: Toon Willems (Ghent University)\n@author: Balazs Hajgato (Vrije Universiteit Brussel)\n\"\"\"\nimport os\n\nfrom easybuild.easyblocks.r import EXTS_FILTER_R_PACKAGES, EB_R\nfrom easybuild.easyblocks.generic.configuremake import check_config_guess, obtain_config_guess\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.framework.extensioneasyblock import ExtensionEasyBlock\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.filetools import mkdir, copy_file\nfrom easybuild.tools.run import run_cmd, parse_log_for_error\n\n\ndef make_R_install_option(opt, values, cmdline=False):\n \"\"\"\n Make option list for install.packages, to specify in R environment.\n \"\"\"\n txt = \"\"\n if values:\n if cmdline:\n txt = \" --%s=\\\"%s\" % (opt, values[0])\n else:\n txt = \"%s=c(\\\"%s\" % (opt, values[0])\n for i in values[1:]:\n txt += \" %s\" % i\n if cmdline:\n txt += \"\\\"\"\n else:\n txt += \"\\\")\"\n return txt\n\n\nclass RPackage(ExtensionEasyBlock):\n \"\"\"\n Install an R package as a separate module, or as an extension.\n \"\"\"\n\n @staticmethod\n def extra_options(extra_vars=None):\n \"\"\"Extra easyconfig parameters specific to RPackage.\"\"\"\n extra_vars = ExtensionEasyBlock.extra_options(extra_vars=extra_vars)\n extra_vars.update({\n 'exts_subdir': ['', \"Subdirectory where R extensions should be installed info\", CUSTOM],\n 'unpack_sources': [False, \"Unpack sources before installation\", CUSTOM],\n })\n return extra_vars\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initliaze RPackage-specific class variables.\"\"\"\n\n super(RPackage, self).__init__(*args, **kwargs)\n\n self.configurevars = []\n self.configureargs = []\n self.ext_src = None\n\n def 
make_r_cmd(self, prefix=None):\n \"\"\"Create a command to run in R to install an R package.\"\"\"\n confvars = \"confvars\"\n confargs = \"confargs\"\n confvarslist = make_R_install_option(confvars, self.configurevars)\n confargslist = make_R_install_option(confargs, self.configureargs)\n confvarsstr = \"\"\n if confvarslist:\n confvarslist = confvarslist + \"; names(%s)=\\\"%s\\\"\" % (confvars, self.name)\n confvarsstr = \", configure.vars=%s\" % confvars\n confargsstr = \"\"\n if confargslist:\n confargslist = confargslist + \"; names(%s)=\\\"%s\\\"\" % (confargs, self.name)\n confargsstr = \", configure.args=%s\" % confargs\n\n if prefix:\n prefix = '\"%s\", ' % prefix\n else:\n prefix = ''\n\n r_cmd = \"\"\"\n options(repos=c(CRAN=\"http://www.freestatistics.org/cran\"))\n %s\n %s\n install.packages(\"%s\", %s dependencies = FALSE %s%s)\n \"\"\" % (confvarslist, confargslist, self.name, prefix, confvarsstr, confargsstr)\n cmd = \"%s R -q --no-save %s\" % (self.cfg['preinstallopts'], self.cfg['installopts'])\n\n self.log.debug(\"make_r_cmd returns %s with input %s\" % (cmd, r_cmd))\n\n return (cmd, r_cmd)\n\n def make_cmdline_cmd(self, prefix=None):\n \"\"\"Create a command line to install an R package.\"\"\"\n confvars = \"\"\n if self.configurevars:\n confvars = make_R_install_option(\"configure-vars\", self.configurevars, cmdline=True)\n confargs = \"\"\n if self.configureargs:\n confargs = make_R_install_option(\"configure-args\", self.configureargs, cmdline=True)\n\n if prefix:\n prefix = '--library=%s' % prefix\n else:\n prefix = ''\n\n if self.start_dir:\n loc = os.path.join(self.ext_dir or os.path.sep, self.start_dir)\n else:\n loc = self.ext_dir or self.ext_src\n\n cmd = ' '.join([\n self.cfg['preinstallopts'],\n \"R CMD INSTALL\",\n loc,\n confargs,\n confvars,\n prefix,\n '--no-clean-on-error',\n self.cfg['installopts'],\n ])\n\n self.log.debug(\"make_cmdline_cmd returns %s\" % cmd)\n return cmd, None\n\n def configure_step(self):\n \"\"\"No configuration for installing R packages.\"\"\"\n pass\n\n def build_step(self):\n \"\"\"No separate build step for R packages.\"\"\"\n pass\n\n def install_R_package(self, cmd, inp=None):\n \"\"\"Install R package as specified, and check for errors.\"\"\"\n\n cmdttdouterr, _ = run_cmd(cmd, log_all=True, simple=False, inp=inp, regexp=False)\n\n cmderrors = parse_log_for_error(cmdttdouterr, regExp=\"^ERROR:\")\n if cmderrors:\n cmd = \"R -q --no-save\"\n stdin = \"\"\"\n remove.library(%s)\n \"\"\" % self.name\n # remove package if errors were detected\n # it's possible that some of the dependencies failed, but the package itself was installed\n run_cmd(cmd, log_all=False, log_ok=False, simple=False, inp=stdin, regexp=False)\n raise EasyBuildError(\"Errors detected during installation of R package %s!\", self.name)\n else:\n self.log.debug(\"R package %s installed succesfully\" % self.name)\n\n def update_config_guess(self, path):\n \"\"\"Update any config.guess found in specified directory\"\"\"\n for config_guess_dir in (root for root, _, files in os.walk(path) if 'config.guess' in files):\n config_guess = os.path.join(config_guess_dir, 'config.guess')\n if not check_config_guess(config_guess):\n updated_config_guess = obtain_config_guess()\n if updated_config_guess:\n self.log.debug(\"Replacing outdated %s with more recent %s\", config_guess, updated_config_guess)\n copy_file(updated_config_guess, config_guess)\n else:\n raise EasyBuildError(\"Failed to obtain updated config.guess\")\n\n def install_step(self):\n \"\"\"Install 
procedure for R packages.\"\"\"\n # Update config.guess if the package was extracted\n if self.start_dir:\n self.update_config_guess(self.start_dir)\n cmd, stdin = self.make_cmdline_cmd(prefix=os.path.join(self.installdir, self.cfg['exts_subdir']))\n self.install_R_package(cmd, inp=stdin)\n\n def run(self):\n \"\"\"Install R package as an extension.\"\"\"\n\n # determine location\n if isinstance(self.master, EB_R):\n # extension is being installed as part of an R installation/module\n (out, _) = run_cmd(\"R RHOME\", log_all=True, simple=False)\n rhome = out.strip()\n lib_install_prefix = os.path.join(rhome, 'library')\n else:\n # extension is being installed in a separate installation prefix\n lib_install_prefix = os.path.join(self.installdir, self.cfg['exts_subdir'])\n mkdir(lib_install_prefix, parents=True)\n\n if self.src:\n super(RPackage, self).run(unpack_src=True)\n self.ext_src = self.src\n self.update_config_guess(self.ext_dir)\n self.log.debug(\"Installing R package %s version %s.\" % (self.name, self.version))\n cmd, stdin = self.make_cmdline_cmd(prefix=lib_install_prefix)\n else:\n if self.patches:\n raise EasyBuildError(\"Cannot patch R package %s as no explicit source is given!\", self.name)\n self.log.debug(\"Installing most recent version of R package %s (source not found).\" % self.name)\n cmd, stdin = self.make_r_cmd(prefix=lib_install_prefix)\n\n self.install_R_package(cmd, inp=stdin)\n\n def sanity_check_step(self, *args, **kwargs):\n \"\"\"\n Custom sanity check for R packages\n \"\"\"\n return super(RPackage, self).sanity_check_step(EXTS_FILTER_R_PACKAGES, *args, **kwargs)\n\n def make_module_extra(self):\n \"\"\"Add install path to R_LIBS_SITE\"\"\"\n # prepend R_LIBS_SITE with install path\n extra = self.module_generator.prepend_paths(\"R_LIBS_SITE\", [self.cfg['exts_subdir']])\n return super(RPackage, self).make_module_extra(extra)\n", "path": "easybuild/easyblocks/generic/rpackage.py"}]}
num_tokens: 3,343
num_tokens_diff: 210
problem_id: gh_patches_debug_31739
source: rasdani/github-patches
task_type: git_diff
in_source_id: streamlink__streamlink-1863
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Remove 9anime.to As discussed over here: https://github.com/streamlink/streamlink/issues/1110#issuecomment-400687075 9anime.to isn't worth supporting at this point and is broken so I'm proposing we remove it. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/streamlink/plugins/nineanime.py` Content: ``` 1 import re 2 from streamlink.plugin import Plugin 3 from streamlink.plugin.api import http 4 from streamlink.plugin.api import useragents 5 from streamlink.plugin.api import validate 6 from streamlink.stream import HTTPStream 7 from streamlink.compat import urlparse 8 9 10 class NineAnime(Plugin): 11 _episode_info_url = "//9anime.to/ajax/episode/info" 12 13 _info_schema = validate.Schema({ 14 "grabber": validate.url(), 15 "params": { 16 "id": validate.text, 17 "token": validate.text, 18 "options": validate.text, 19 } 20 }) 21 22 _streams_schema = validate.Schema({ 23 "token": validate.text, 24 "error": None, 25 "data": [{ 26 "label": validate.text, 27 "file": validate.url(), 28 "type": "mp4" 29 }] 30 }) 31 32 _url_re = re.compile(r"https?://9anime.to/watch/(?:[^.]+?\.)(\w+)/(\w+)") 33 34 @classmethod 35 def can_handle_url(cls, url): 36 return cls._url_re.match(url) is not None 37 38 def add_scheme(self, url): 39 # update the scheme for the grabber url if required 40 if url.startswith("//"): 41 url = "{0}:{1}".format(urlparse(self.url).scheme, url) 42 return url 43 44 @Plugin.broken(1110) 45 def _get_streams(self): 46 match = self._url_re.match(self.url) 47 film_id, episode_id = match.groups() 48 49 headers = { 50 "Referer": self.url, 51 "User-Agent": useragents.FIREFOX 52 } 53 54 # Get the info about the Episode, including the Grabber API URL 55 info_res = http.get(self.add_scheme(self._episode_info_url), 56 params=dict(update=0, film=film_id, id=episode_id), 57 headers=headers) 58 info = http.json(info_res, schema=self._info_schema) 59 60 # Get the data about the streams from the Grabber API 61 grabber_url = self.add_scheme(info["grabber"]) 62 stream_list_res = http.get(grabber_url, params=info["params"], headers=headers) 63 stream_data = http.json(stream_list_res, schema=self._streams_schema) 64 65 for stream in stream_data["data"]: 66 yield stream["label"], HTTPStream(self.session, stream["file"]) 67 68 69 __plugin__ = NineAnime 70 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/streamlink/plugins/nineanime.py b/src/streamlink/plugins/nineanime.py --- a/src/streamlink/plugins/nineanime.py +++ b/src/streamlink/plugins/nineanime.py @@ -1,69 +1 @@ -import re -from streamlink.plugin import Plugin -from streamlink.plugin.api import http -from streamlink.plugin.api import useragents -from streamlink.plugin.api import validate -from streamlink.stream import HTTPStream -from streamlink.compat import urlparse - - -class NineAnime(Plugin): - _episode_info_url = "//9anime.to/ajax/episode/info" - - _info_schema = validate.Schema({ - "grabber": validate.url(), - "params": { - "id": validate.text, - "token": validate.text, - "options": validate.text, - } - }) - - _streams_schema = validate.Schema({ - "token": validate.text, - "error": None, - "data": [{ - "label": validate.text, - "file": validate.url(), - "type": "mp4" - }] - }) - - _url_re = re.compile(r"https?://9anime.to/watch/(?:[^.]+?\.)(\w+)/(\w+)") - - @classmethod - def can_handle_url(cls, url): - return cls._url_re.match(url) is not None - - def add_scheme(self, url): - # update the scheme for the grabber url if required - if url.startswith("//"): - url = "{0}:{1}".format(urlparse(self.url).scheme, url) - return url - - @Plugin.broken(1110) - def _get_streams(self): - match = self._url_re.match(self.url) - film_id, episode_id = match.groups() - - headers = { - "Referer": self.url, - "User-Agent": useragents.FIREFOX - } - - # Get the info about the Episode, including the Grabber API URL - info_res = http.get(self.add_scheme(self._episode_info_url), - params=dict(update=0, film=film_id, id=episode_id), - headers=headers) - info = http.json(info_res, schema=self._info_schema) - - # Get the data about the streams from the Grabber API - grabber_url = self.add_scheme(info["grabber"]) - stream_list_res = http.get(grabber_url, params=info["params"], headers=headers) - stream_data = http.json(stream_list_res, schema=self._streams_schema) - - for stream in stream_data["data"]: - yield stream["label"], HTTPStream(self.session, stream["file"]) - - -__plugin__ = NineAnime +# Plugin removed - https://github.com/streamlink/streamlink/issues/1862
{"golden_diff": "diff --git a/src/streamlink/plugins/nineanime.py b/src/streamlink/plugins/nineanime.py\n--- a/src/streamlink/plugins/nineanime.py\n+++ b/src/streamlink/plugins/nineanime.py\n@@ -1,69 +1 @@\n-import re\n-from streamlink.plugin import Plugin\n-from streamlink.plugin.api import http\n-from streamlink.plugin.api import useragents\n-from streamlink.plugin.api import validate\n-from streamlink.stream import HTTPStream\n-from streamlink.compat import urlparse\n-\n-\n-class NineAnime(Plugin):\n- _episode_info_url = \"//9anime.to/ajax/episode/info\"\n-\n- _info_schema = validate.Schema({\n- \"grabber\": validate.url(),\n- \"params\": {\n- \"id\": validate.text,\n- \"token\": validate.text,\n- \"options\": validate.text,\n- }\n- })\n-\n- _streams_schema = validate.Schema({\n- \"token\": validate.text,\n- \"error\": None,\n- \"data\": [{\n- \"label\": validate.text,\n- \"file\": validate.url(),\n- \"type\": \"mp4\"\n- }]\n- })\n-\n- _url_re = re.compile(r\"https?://9anime.to/watch/(?:[^.]+?\\.)(\\w+)/(\\w+)\")\n-\n- @classmethod\n- def can_handle_url(cls, url):\n- return cls._url_re.match(url) is not None\n-\n- def add_scheme(self, url):\n- # update the scheme for the grabber url if required\n- if url.startswith(\"//\"):\n- url = \"{0}:{1}\".format(urlparse(self.url).scheme, url)\n- return url\n-\n- @Plugin.broken(1110)\n- def _get_streams(self):\n- match = self._url_re.match(self.url)\n- film_id, episode_id = match.groups()\n-\n- headers = {\n- \"Referer\": self.url,\n- \"User-Agent\": useragents.FIREFOX\n- }\n-\n- # Get the info about the Episode, including the Grabber API URL\n- info_res = http.get(self.add_scheme(self._episode_info_url),\n- params=dict(update=0, film=film_id, id=episode_id),\n- headers=headers)\n- info = http.json(info_res, schema=self._info_schema)\n-\n- # Get the data about the streams from the Grabber API\n- grabber_url = self.add_scheme(info[\"grabber\"])\n- stream_list_res = http.get(grabber_url, params=info[\"params\"], headers=headers)\n- stream_data = http.json(stream_list_res, schema=self._streams_schema)\n-\n- for stream in stream_data[\"data\"]:\n- yield stream[\"label\"], HTTPStream(self.session, stream[\"file\"])\n-\n-\n-__plugin__ = NineAnime\n+# Plugin removed - https://github.com/streamlink/streamlink/issues/1862\n", "issue": "Remove 9anime.to\nAs discussed over here: https://github.com/streamlink/streamlink/issues/1110#issuecomment-400687075 9anime.to isn't worth supporting at this point and is broken so I'm proposing we remove it.\r\n\n", "before_files": [{"content": "import re\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import useragents\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HTTPStream\nfrom streamlink.compat import urlparse\n\n\nclass NineAnime(Plugin):\n _episode_info_url = \"//9anime.to/ajax/episode/info\"\n\n _info_schema = validate.Schema({\n \"grabber\": validate.url(),\n \"params\": {\n \"id\": validate.text,\n \"token\": validate.text,\n \"options\": validate.text,\n }\n })\n\n _streams_schema = validate.Schema({\n \"token\": validate.text,\n \"error\": None,\n \"data\": [{\n \"label\": validate.text,\n \"file\": validate.url(),\n \"type\": \"mp4\"\n }]\n })\n\n _url_re = re.compile(r\"https?://9anime.to/watch/(?:[^.]+?\\.)(\\w+)/(\\w+)\")\n\n @classmethod\n def can_handle_url(cls, url):\n return cls._url_re.match(url) is not None\n\n def add_scheme(self, url):\n # update the scheme for the grabber url if required\n if 
url.startswith(\"//\"):\n url = \"{0}:{1}\".format(urlparse(self.url).scheme, url)\n return url\n\n @Plugin.broken(1110)\n def _get_streams(self):\n match = self._url_re.match(self.url)\n film_id, episode_id = match.groups()\n\n headers = {\n \"Referer\": self.url,\n \"User-Agent\": useragents.FIREFOX\n }\n\n # Get the info about the Episode, including the Grabber API URL\n info_res = http.get(self.add_scheme(self._episode_info_url),\n params=dict(update=0, film=film_id, id=episode_id),\n headers=headers)\n info = http.json(info_res, schema=self._info_schema)\n\n # Get the data about the streams from the Grabber API\n grabber_url = self.add_scheme(info[\"grabber\"])\n stream_list_res = http.get(grabber_url, params=info[\"params\"], headers=headers)\n stream_data = http.json(stream_list_res, schema=self._streams_schema)\n\n for stream in stream_data[\"data\"]:\n yield stream[\"label\"], HTTPStream(self.session, stream[\"file\"])\n\n\n__plugin__ = NineAnime\n", "path": "src/streamlink/plugins/nineanime.py"}], "after_files": [{"content": "# Plugin removed - https://github.com/streamlink/streamlink/issues/1862\n", "path": "src/streamlink/plugins/nineanime.py"}]}
num_tokens: 968
num_tokens_diff: 632
problem_id: gh_patches_debug_8550
source: rasdani/github-patches
task_type: git_diff
in_source_id: zulip__zulip-12755
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- portico/docs: Improve OpenAPI descriptions and titles. Followups to https://github.com/zulip/zulip/pull/11045. - [ ] Currently we only use the first paragraph of text for the description. It would be better to concatenate the first few paragraphs, and truncate. Basically it's just using bs.find_all('p') and joining things together in a loop until it gets too long. One catch is that is that we probably want to stop at the first non-paragraph block so we don't cross a list or something. - [ ] Currently, we've just tested our openAPI things with /api and /help. We should extend to the /integrations pages, and also figure out what we want to do for the top-10 major portico pages (especially /features, /why-zulip, and /for/open-source), which could benefit from a similar treatment. To work on this I would start by skimming the code from #11045. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `zerver/views/documentation.py` Content: ``` 1 from typing import Any, Dict, Tuple 2 from collections import OrderedDict 3 from django.views.generic import TemplateView 4 from django.conf import settings 5 from django.http import HttpRequest, HttpResponse, HttpResponseNotFound 6 from django.template import loader 7 8 import os 9 import random 10 import re 11 12 from zerver.lib.integrations import CATEGORIES, INTEGRATIONS, HubotIntegration, \ 13 WebhookIntegration, EmailIntegration 14 from zerver.lib.request import has_request_variables, REQ 15 from zerver.lib.subdomains import get_subdomain 16 from zerver.models import Realm 17 from zerver.templatetags.app_filters import render_markdown_path 18 from zerver.context_processors import zulip_default_context 19 20 def add_api_uri_context(context: Dict[str, Any], request: HttpRequest) -> None: 21 context.update(zulip_default_context(request)) 22 23 subdomain = get_subdomain(request) 24 if (subdomain != Realm.SUBDOMAIN_FOR_ROOT_DOMAIN 25 or not settings.ROOT_DOMAIN_LANDING_PAGE): 26 display_subdomain = subdomain 27 html_settings_links = True 28 else: 29 display_subdomain = 'yourZulipDomain' 30 html_settings_links = False 31 32 display_host = Realm.host_for_subdomain(display_subdomain) 33 api_url_scheme_relative = display_host + "/api" 34 api_url = settings.EXTERNAL_URI_SCHEME + api_url_scheme_relative 35 zulip_url = settings.EXTERNAL_URI_SCHEME + display_host 36 37 context['external_uri_scheme'] = settings.EXTERNAL_URI_SCHEME 38 context['api_url'] = api_url 39 context['api_url_scheme_relative'] = api_url_scheme_relative 40 context['zulip_url'] = zulip_url 41 42 context["html_settings_links"] = html_settings_links 43 if html_settings_links: 44 settings_html = '<a href="/#settings">Zulip settings page</a>' 45 subscriptions_html = '<a target="_blank" href="/#streams">streams page</a>' 46 else: 47 settings_html = 'Zulip settings page' 48 subscriptions_html = 'streams page' 49 context['settings_html'] = settings_html 50 context['subscriptions_html'] = subscriptions_html 51 52 class ApiURLView(TemplateView): 53 def get_context_data(self, **kwargs: Any) -> Dict[str, str]: 54 context = super().get_context_data(**kwargs) 55 add_api_uri_context(context, self.request) 56 return context 57 58 class APIView(ApiURLView): 59 template_name = 'zerver/api.html' 60 61 62 class MarkdownDirectoryView(ApiURLView): 63 path_template = "" 64 65 def get_path(self, article: str) -> 
Tuple[str, int]: 66 http_status = 200 67 if article == "": 68 article = "index" 69 elif article == "include/sidebar_index": 70 pass 71 elif "/" in article: 72 article = "missing" 73 http_status = 404 74 elif len(article) > 100 or not re.match('^[0-9a-zA-Z_-]+$', article): 75 article = "missing" 76 http_status = 404 77 78 path = self.path_template % (article,) 79 try: 80 loader.get_template(path) 81 return (path, http_status) 82 except loader.TemplateDoesNotExist: 83 return (self.path_template % ("missing",), 404) 84 85 def get_context_data(self, **kwargs: Any) -> Dict[str, Any]: 86 article = kwargs["article"] 87 context = super().get_context_data() # type: Dict[str, Any] 88 (context["article"], http_status_ignored) = self.get_path(article) 89 90 # For disabling the "Back to home" on the homepage 91 context["not_index_page"] = not context["article"].endswith("/index.md") 92 if self.path_template == '/zerver/help/%s.md': 93 context["page_is_help_center"] = True 94 context["doc_root"] = "/help/" 95 (sidebar_index, http_status_ignored) = self.get_path("include/sidebar_index") 96 # We want the sliding/collapsing behavior for /help pages only 97 sidebar_class = "sidebar slide" 98 title_base = "Zulip Help Center" 99 else: 100 context["page_is_api_center"] = True 101 context["doc_root"] = "/api/" 102 (sidebar_index, http_status_ignored) = self.get_path("sidebar_index") 103 sidebar_class = "sidebar" 104 title_base = "Zulip API Documentation" 105 106 # The following is a somewhat hacky approach to extract titles from articles. 107 # Hack: `context["article"] has a leading `/`, so we use + to add directories. 108 article_path = os.path.join(settings.DEPLOY_ROOT, 'templates') + context["article"] 109 if os.path.exists(article_path): 110 with open(article_path) as article_file: 111 first_line = article_file.readlines()[0] 112 # Strip the header and then use the first line to get the article title 113 article_title = first_line.strip().lstrip("# ") 114 if context["not_index_page"]: 115 context["OPEN_GRAPH_TITLE"] = "%s (%s)" % (article_title, title_base) 116 else: 117 context["OPEN_GRAPH_TITLE"] = title_base 118 self.request.placeholder_open_graph_description = ( 119 "REPLACMENT_OPEN_GRAPH_DESCRIPTION_%s" % (int(2**24 * random.random()),)) 120 context["OPEN_GRAPH_DESCRIPTION"] = self.request.placeholder_open_graph_description 121 122 context["sidebar_index"] = sidebar_index 123 context["sidebar_class"] = sidebar_class 124 # An "article" might require the api_uri_context to be rendered 125 api_uri_context = {} # type: Dict[str, Any] 126 add_api_uri_context(api_uri_context, self.request) 127 api_uri_context["run_content_validators"] = True 128 context["api_uri_context"] = api_uri_context 129 return context 130 131 def get(self, request: HttpRequest, article: str="") -> HttpResponse: 132 (path, http_status) = self.get_path(article) 133 result = super().get(self, article=article) 134 if http_status != 200: 135 result.status_code = http_status 136 return result 137 138 def add_integrations_context(context: Dict[str, Any]) -> None: 139 alphabetical_sorted_categories = OrderedDict(sorted(CATEGORIES.items())) 140 alphabetical_sorted_integration = OrderedDict(sorted(INTEGRATIONS.items())) 141 enabled_integrations_count = len(list(filter(lambda v: v.is_enabled(), INTEGRATIONS.values()))) 142 # Subtract 1 so saying "Over X integrations" is correct. Then, 143 # round down to the nearest multiple of 10. 
144 integrations_count_display = ((enabled_integrations_count - 1) // 10) * 10 145 context['categories_dict'] = alphabetical_sorted_categories 146 context['integrations_dict'] = alphabetical_sorted_integration 147 context['integrations_count_display'] = integrations_count_display 148 149 def add_integrations_open_graph_context(context: Dict[str, Any], request: HttpRequest) -> None: 150 path_name = request.path.rstrip('/').split('/')[-1] 151 description = ('Zulip comes with over a hundred native integrations out of the box, ' 152 'and integrates with Zapier, IFTTT, and Hubot to provide hundreds more. ' 153 'Connect the apps you use everyday to Zulip.') 154 155 if path_name in INTEGRATIONS: 156 integration = INTEGRATIONS[path_name] 157 context['OPEN_GRAPH_TITLE'] = 'Connect {name} to Zulip'.format(name=integration.display_name) 158 context['OPEN_GRAPH_DESCRIPTION'] = description 159 160 elif path_name in CATEGORIES: 161 category = CATEGORIES[path_name] 162 context['OPEN_GRAPH_TITLE'] = 'Connect your {category} tools to Zulip'.format(category=category) 163 context['OPEN_GRAPH_DESCRIPTION'] = description 164 165 class IntegrationView(ApiURLView): 166 template_name = 'zerver/integrations/index.html' 167 168 def get_context_data(self, **kwargs: Any) -> Dict[str, Any]: 169 context = super().get_context_data(**kwargs) # type: Dict[str, Any] 170 add_integrations_context(context) 171 add_integrations_open_graph_context(context, self.request) 172 return context 173 174 175 @has_request_variables 176 def integration_doc(request: HttpRequest, integration_name: str=REQ(default=None)) -> HttpResponse: 177 if not request.is_ajax(): 178 return HttpResponseNotFound() 179 try: 180 integration = INTEGRATIONS[integration_name] 181 except KeyError: 182 return HttpResponseNotFound() 183 184 context = {} # type: Dict[str, Any] 185 add_api_uri_context(context, request) 186 187 context['integration_name'] = integration.name 188 context['integration_display_name'] = integration.display_name 189 if hasattr(integration, 'stream_name'): 190 context['recommended_stream_name'] = integration.stream_name 191 if isinstance(integration, WebhookIntegration): 192 context['integration_url'] = integration.url[3:] 193 if isinstance(integration, HubotIntegration): 194 context['hubot_docs_url'] = integration.hubot_docs_url 195 if isinstance(integration, EmailIntegration): 196 context['email_gateway_example'] = settings.EMAIL_GATEWAY_EXAMPLE 197 198 doc_html_str = render_markdown_path(integration.doc, context) 199 200 return HttpResponse(doc_html_str) 201 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/zerver/views/documentation.py b/zerver/views/documentation.py --- a/zerver/views/documentation.py +++ b/zerver/views/documentation.py @@ -162,6 +162,10 @@ context['OPEN_GRAPH_TITLE'] = 'Connect your {category} tools to Zulip'.format(category=category) context['OPEN_GRAPH_DESCRIPTION'] = description + elif path_name == 'integrations': + context['OPEN_GRAPH_TITLE'] = 'Connect the tools you use to Zulip' + context['OPEN_GRAPH_DESCRIPTION'] = description + class IntegrationView(ApiURLView): template_name = 'zerver/integrations/index.html'
{"golden_diff": "diff --git a/zerver/views/documentation.py b/zerver/views/documentation.py\n--- a/zerver/views/documentation.py\n+++ b/zerver/views/documentation.py\n@@ -162,6 +162,10 @@\n context['OPEN_GRAPH_TITLE'] = 'Connect your {category} tools to Zulip'.format(category=category)\n context['OPEN_GRAPH_DESCRIPTION'] = description\n \n+ elif path_name == 'integrations':\n+ context['OPEN_GRAPH_TITLE'] = 'Connect the tools you use to Zulip'\n+ context['OPEN_GRAPH_DESCRIPTION'] = description\n+\n class IntegrationView(ApiURLView):\n template_name = 'zerver/integrations/index.html'\n", "issue": "portico/docs: Improve OpenAPI descriptions and titles.\nFollowups to https://github.com/zulip/zulip/pull/11045.\r\n- [ ] Currently we only use the first paragraph of text for the description. It would be better to concatenate the first few paragraphs, and truncate. Basically it's just using bs.find_all('p') and joining things together in a loop until it gets too long. \r\nOne catch is that is that we probably want to stop at the first non-paragraph block so we don't cross a list or something.\r\n- [ ] Currently, we've just tested our openAPI things with /api and /help. We should extend to the /integrations pages, and also figure out what we want to do for the top-10 major portico pages (especially /features, /why-zulip, and /for/open-source), which could benefit from a similar treatment.\r\n\r\nTo work on this I would start by skimming the code from #11045. \n", "before_files": [{"content": "from typing import Any, Dict, Tuple\nfrom collections import OrderedDict\nfrom django.views.generic import TemplateView\nfrom django.conf import settings\nfrom django.http import HttpRequest, HttpResponse, HttpResponseNotFound\nfrom django.template import loader\n\nimport os\nimport random\nimport re\n\nfrom zerver.lib.integrations import CATEGORIES, INTEGRATIONS, HubotIntegration, \\\n WebhookIntegration, EmailIntegration\nfrom zerver.lib.request import has_request_variables, REQ\nfrom zerver.lib.subdomains import get_subdomain\nfrom zerver.models import Realm\nfrom zerver.templatetags.app_filters import render_markdown_path\nfrom zerver.context_processors import zulip_default_context\n\ndef add_api_uri_context(context: Dict[str, Any], request: HttpRequest) -> None:\n context.update(zulip_default_context(request))\n\n subdomain = get_subdomain(request)\n if (subdomain != Realm.SUBDOMAIN_FOR_ROOT_DOMAIN\n or not settings.ROOT_DOMAIN_LANDING_PAGE):\n display_subdomain = subdomain\n html_settings_links = True\n else:\n display_subdomain = 'yourZulipDomain'\n html_settings_links = False\n\n display_host = Realm.host_for_subdomain(display_subdomain)\n api_url_scheme_relative = display_host + \"/api\"\n api_url = settings.EXTERNAL_URI_SCHEME + api_url_scheme_relative\n zulip_url = settings.EXTERNAL_URI_SCHEME + display_host\n\n context['external_uri_scheme'] = settings.EXTERNAL_URI_SCHEME\n context['api_url'] = api_url\n context['api_url_scheme_relative'] = api_url_scheme_relative\n context['zulip_url'] = zulip_url\n\n context[\"html_settings_links\"] = html_settings_links\n if html_settings_links:\n settings_html = '<a href=\"/#settings\">Zulip settings page</a>'\n subscriptions_html = '<a target=\"_blank\" href=\"/#streams\">streams page</a>'\n else:\n settings_html = 'Zulip settings page'\n subscriptions_html = 'streams page'\n context['settings_html'] = settings_html\n context['subscriptions_html'] = subscriptions_html\n\nclass ApiURLView(TemplateView):\n def get_context_data(self, **kwargs: Any) -> Dict[str, 
str]:\n context = super().get_context_data(**kwargs)\n add_api_uri_context(context, self.request)\n return context\n\nclass APIView(ApiURLView):\n template_name = 'zerver/api.html'\n\n\nclass MarkdownDirectoryView(ApiURLView):\n path_template = \"\"\n\n def get_path(self, article: str) -> Tuple[str, int]:\n http_status = 200\n if article == \"\":\n article = \"index\"\n elif article == \"include/sidebar_index\":\n pass\n elif \"/\" in article:\n article = \"missing\"\n http_status = 404\n elif len(article) > 100 or not re.match('^[0-9a-zA-Z_-]+$', article):\n article = \"missing\"\n http_status = 404\n\n path = self.path_template % (article,)\n try:\n loader.get_template(path)\n return (path, http_status)\n except loader.TemplateDoesNotExist:\n return (self.path_template % (\"missing\",), 404)\n\n def get_context_data(self, **kwargs: Any) -> Dict[str, Any]:\n article = kwargs[\"article\"]\n context = super().get_context_data() # type: Dict[str, Any]\n (context[\"article\"], http_status_ignored) = self.get_path(article)\n\n # For disabling the \"Back to home\" on the homepage\n context[\"not_index_page\"] = not context[\"article\"].endswith(\"/index.md\")\n if self.path_template == '/zerver/help/%s.md':\n context[\"page_is_help_center\"] = True\n context[\"doc_root\"] = \"/help/\"\n (sidebar_index, http_status_ignored) = self.get_path(\"include/sidebar_index\")\n # We want the sliding/collapsing behavior for /help pages only\n sidebar_class = \"sidebar slide\"\n title_base = \"Zulip Help Center\"\n else:\n context[\"page_is_api_center\"] = True\n context[\"doc_root\"] = \"/api/\"\n (sidebar_index, http_status_ignored) = self.get_path(\"sidebar_index\")\n sidebar_class = \"sidebar\"\n title_base = \"Zulip API Documentation\"\n\n # The following is a somewhat hacky approach to extract titles from articles.\n # Hack: `context[\"article\"] has a leading `/`, so we use + to add directories.\n article_path = os.path.join(settings.DEPLOY_ROOT, 'templates') + context[\"article\"]\n if os.path.exists(article_path):\n with open(article_path) as article_file:\n first_line = article_file.readlines()[0]\n # Strip the header and then use the first line to get the article title\n article_title = first_line.strip().lstrip(\"# \")\n if context[\"not_index_page\"]:\n context[\"OPEN_GRAPH_TITLE\"] = \"%s (%s)\" % (article_title, title_base)\n else:\n context[\"OPEN_GRAPH_TITLE\"] = title_base\n self.request.placeholder_open_graph_description = (\n \"REPLACMENT_OPEN_GRAPH_DESCRIPTION_%s\" % (int(2**24 * random.random()),))\n context[\"OPEN_GRAPH_DESCRIPTION\"] = self.request.placeholder_open_graph_description\n\n context[\"sidebar_index\"] = sidebar_index\n context[\"sidebar_class\"] = sidebar_class\n # An \"article\" might require the api_uri_context to be rendered\n api_uri_context = {} # type: Dict[str, Any]\n add_api_uri_context(api_uri_context, self.request)\n api_uri_context[\"run_content_validators\"] = True\n context[\"api_uri_context\"] = api_uri_context\n return context\n\n def get(self, request: HttpRequest, article: str=\"\") -> HttpResponse:\n (path, http_status) = self.get_path(article)\n result = super().get(self, article=article)\n if http_status != 200:\n result.status_code = http_status\n return result\n\ndef add_integrations_context(context: Dict[str, Any]) -> None:\n alphabetical_sorted_categories = OrderedDict(sorted(CATEGORIES.items()))\n alphabetical_sorted_integration = OrderedDict(sorted(INTEGRATIONS.items()))\n enabled_integrations_count = len(list(filter(lambda v: v.is_enabled(), 
INTEGRATIONS.values())))\n # Subtract 1 so saying \"Over X integrations\" is correct. Then,\n # round down to the nearest multiple of 10.\n integrations_count_display = ((enabled_integrations_count - 1) // 10) * 10\n context['categories_dict'] = alphabetical_sorted_categories\n context['integrations_dict'] = alphabetical_sorted_integration\n context['integrations_count_display'] = integrations_count_display\n\ndef add_integrations_open_graph_context(context: Dict[str, Any], request: HttpRequest) -> None:\n path_name = request.path.rstrip('/').split('/')[-1]\n description = ('Zulip comes with over a hundred native integrations out of the box, '\n 'and integrates with Zapier, IFTTT, and Hubot to provide hundreds more. '\n 'Connect the apps you use everyday to Zulip.')\n\n if path_name in INTEGRATIONS:\n integration = INTEGRATIONS[path_name]\n context['OPEN_GRAPH_TITLE'] = 'Connect {name} to Zulip'.format(name=integration.display_name)\n context['OPEN_GRAPH_DESCRIPTION'] = description\n\n elif path_name in CATEGORIES:\n category = CATEGORIES[path_name]\n context['OPEN_GRAPH_TITLE'] = 'Connect your {category} tools to Zulip'.format(category=category)\n context['OPEN_GRAPH_DESCRIPTION'] = description\n\nclass IntegrationView(ApiURLView):\n template_name = 'zerver/integrations/index.html'\n\n def get_context_data(self, **kwargs: Any) -> Dict[str, Any]:\n context = super().get_context_data(**kwargs) # type: Dict[str, Any]\n add_integrations_context(context)\n add_integrations_open_graph_context(context, self.request)\n return context\n\n\n@has_request_variables\ndef integration_doc(request: HttpRequest, integration_name: str=REQ(default=None)) -> HttpResponse:\n if not request.is_ajax():\n return HttpResponseNotFound()\n try:\n integration = INTEGRATIONS[integration_name]\n except KeyError:\n return HttpResponseNotFound()\n\n context = {} # type: Dict[str, Any]\n add_api_uri_context(context, request)\n\n context['integration_name'] = integration.name\n context['integration_display_name'] = integration.display_name\n if hasattr(integration, 'stream_name'):\n context['recommended_stream_name'] = integration.stream_name\n if isinstance(integration, WebhookIntegration):\n context['integration_url'] = integration.url[3:]\n if isinstance(integration, HubotIntegration):\n context['hubot_docs_url'] = integration.hubot_docs_url\n if isinstance(integration, EmailIntegration):\n context['email_gateway_example'] = settings.EMAIL_GATEWAY_EXAMPLE\n\n doc_html_str = render_markdown_path(integration.doc, context)\n\n return HttpResponse(doc_html_str)\n", "path": "zerver/views/documentation.py"}], "after_files": [{"content": "from typing import Any, Dict, Tuple\nfrom collections import OrderedDict\nfrom django.views.generic import TemplateView\nfrom django.conf import settings\nfrom django.http import HttpRequest, HttpResponse, HttpResponseNotFound\nfrom django.template import loader\n\nimport os\nimport random\nimport re\n\nfrom zerver.lib.integrations import CATEGORIES, INTEGRATIONS, HubotIntegration, \\\n WebhookIntegration, EmailIntegration\nfrom zerver.lib.request import has_request_variables, REQ\nfrom zerver.lib.subdomains import get_subdomain\nfrom zerver.models import Realm\nfrom zerver.templatetags.app_filters import render_markdown_path\nfrom zerver.context_processors import zulip_default_context\n\ndef add_api_uri_context(context: Dict[str, Any], request: HttpRequest) -> None:\n context.update(zulip_default_context(request))\n\n subdomain = get_subdomain(request)\n if (subdomain != 
Realm.SUBDOMAIN_FOR_ROOT_DOMAIN\n or not settings.ROOT_DOMAIN_LANDING_PAGE):\n display_subdomain = subdomain\n html_settings_links = True\n else:\n display_subdomain = 'yourZulipDomain'\n html_settings_links = False\n\n display_host = Realm.host_for_subdomain(display_subdomain)\n api_url_scheme_relative = display_host + \"/api\"\n api_url = settings.EXTERNAL_URI_SCHEME + api_url_scheme_relative\n zulip_url = settings.EXTERNAL_URI_SCHEME + display_host\n\n context['external_uri_scheme'] = settings.EXTERNAL_URI_SCHEME\n context['api_url'] = api_url\n context['api_url_scheme_relative'] = api_url_scheme_relative\n context['zulip_url'] = zulip_url\n\n context[\"html_settings_links\"] = html_settings_links\n if html_settings_links:\n settings_html = '<a href=\"/#settings\">Zulip settings page</a>'\n subscriptions_html = '<a target=\"_blank\" href=\"/#streams\">streams page</a>'\n else:\n settings_html = 'Zulip settings page'\n subscriptions_html = 'streams page'\n context['settings_html'] = settings_html\n context['subscriptions_html'] = subscriptions_html\n\nclass ApiURLView(TemplateView):\n def get_context_data(self, **kwargs: Any) -> Dict[str, str]:\n context = super().get_context_data(**kwargs)\n add_api_uri_context(context, self.request)\n return context\n\nclass APIView(ApiURLView):\n template_name = 'zerver/api.html'\n\n\nclass MarkdownDirectoryView(ApiURLView):\n path_template = \"\"\n\n def get_path(self, article: str) -> Tuple[str, int]:\n http_status = 200\n if article == \"\":\n article = \"index\"\n elif article == \"include/sidebar_index\":\n pass\n elif \"/\" in article:\n article = \"missing\"\n http_status = 404\n elif len(article) > 100 or not re.match('^[0-9a-zA-Z_-]+$', article):\n article = \"missing\"\n http_status = 404\n\n path = self.path_template % (article,)\n try:\n loader.get_template(path)\n return (path, http_status)\n except loader.TemplateDoesNotExist:\n return (self.path_template % (\"missing\",), 404)\n\n def get_context_data(self, **kwargs: Any) -> Dict[str, Any]:\n article = kwargs[\"article\"]\n context = super().get_context_data() # type: Dict[str, Any]\n (context[\"article\"], http_status_ignored) = self.get_path(article)\n\n # For disabling the \"Back to home\" on the homepage\n context[\"not_index_page\"] = not context[\"article\"].endswith(\"/index.md\")\n if self.path_template == '/zerver/help/%s.md':\n context[\"page_is_help_center\"] = True\n context[\"doc_root\"] = \"/help/\"\n (sidebar_index, http_status_ignored) = self.get_path(\"include/sidebar_index\")\n # We want the sliding/collapsing behavior for /help pages only\n sidebar_class = \"sidebar slide\"\n title_base = \"Zulip Help Center\"\n else:\n context[\"page_is_api_center\"] = True\n context[\"doc_root\"] = \"/api/\"\n (sidebar_index, http_status_ignored) = self.get_path(\"sidebar_index\")\n sidebar_class = \"sidebar\"\n title_base = \"Zulip API Documentation\"\n\n # The following is a somewhat hacky approach to extract titles from articles.\n # Hack: `context[\"article\"] has a leading `/`, so we use + to add directories.\n article_path = os.path.join(settings.DEPLOY_ROOT, 'templates') + context[\"article\"]\n if os.path.exists(article_path):\n with open(article_path) as article_file:\n first_line = article_file.readlines()[0]\n # Strip the header and then use the first line to get the article title\n article_title = first_line.strip().lstrip(\"# \")\n if context[\"not_index_page\"]:\n context[\"OPEN_GRAPH_TITLE\"] = \"%s (%s)\" % (article_title, title_base)\n else:\n 
context[\"OPEN_GRAPH_TITLE\"] = title_base\n self.request.placeholder_open_graph_description = (\n \"REPLACMENT_OPEN_GRAPH_DESCRIPTION_%s\" % (int(2**24 * random.random()),))\n context[\"OPEN_GRAPH_DESCRIPTION\"] = self.request.placeholder_open_graph_description\n\n context[\"sidebar_index\"] = sidebar_index\n context[\"sidebar_class\"] = sidebar_class\n # An \"article\" might require the api_uri_context to be rendered\n api_uri_context = {} # type: Dict[str, Any]\n add_api_uri_context(api_uri_context, self.request)\n api_uri_context[\"run_content_validators\"] = True\n context[\"api_uri_context\"] = api_uri_context\n return context\n\n def get(self, request: HttpRequest, article: str=\"\") -> HttpResponse:\n (path, http_status) = self.get_path(article)\n result = super().get(self, article=article)\n if http_status != 200:\n result.status_code = http_status\n return result\n\ndef add_integrations_context(context: Dict[str, Any]) -> None:\n alphabetical_sorted_categories = OrderedDict(sorted(CATEGORIES.items()))\n alphabetical_sorted_integration = OrderedDict(sorted(INTEGRATIONS.items()))\n enabled_integrations_count = len(list(filter(lambda v: v.is_enabled(), INTEGRATIONS.values())))\n # Subtract 1 so saying \"Over X integrations\" is correct. Then,\n # round down to the nearest multiple of 10.\n integrations_count_display = ((enabled_integrations_count - 1) // 10) * 10\n context['categories_dict'] = alphabetical_sorted_categories\n context['integrations_dict'] = alphabetical_sorted_integration\n context['integrations_count_display'] = integrations_count_display\n\ndef add_integrations_open_graph_context(context: Dict[str, Any], request: HttpRequest) -> None:\n path_name = request.path.rstrip('/').split('/')[-1]\n description = ('Zulip comes with over a hundred native integrations out of the box, '\n 'and integrates with Zapier, IFTTT, and Hubot to provide hundreds more. 
'\n 'Connect the apps you use everyday to Zulip.')\n\n if path_name in INTEGRATIONS:\n integration = INTEGRATIONS[path_name]\n context['OPEN_GRAPH_TITLE'] = 'Connect {name} to Zulip'.format(name=integration.display_name)\n context['OPEN_GRAPH_DESCRIPTION'] = description\n\n elif path_name in CATEGORIES:\n category = CATEGORIES[path_name]\n context['OPEN_GRAPH_TITLE'] = 'Connect your {category} tools to Zulip'.format(category=category)\n context['OPEN_GRAPH_DESCRIPTION'] = description\n\n elif path_name == 'integrations':\n context['OPEN_GRAPH_TITLE'] = 'Connect the tools you use to Zulip'\n context['OPEN_GRAPH_DESCRIPTION'] = description\n\nclass IntegrationView(ApiURLView):\n template_name = 'zerver/integrations/index.html'\n\n def get_context_data(self, **kwargs: Any) -> Dict[str, Any]:\n context = super().get_context_data(**kwargs) # type: Dict[str, Any]\n add_integrations_context(context)\n add_integrations_open_graph_context(context, self.request)\n return context\n\n\n@has_request_variables\ndef integration_doc(request: HttpRequest, integration_name: str=REQ(default=None)) -> HttpResponse:\n if not request.is_ajax():\n return HttpResponseNotFound()\n try:\n integration = INTEGRATIONS[integration_name]\n except KeyError:\n return HttpResponseNotFound()\n\n context = {} # type: Dict[str, Any]\n add_api_uri_context(context, request)\n\n context['integration_name'] = integration.name\n context['integration_display_name'] = integration.display_name\n if hasattr(integration, 'stream_name'):\n context['recommended_stream_name'] = integration.stream_name\n if isinstance(integration, WebhookIntegration):\n context['integration_url'] = integration.url[3:]\n if isinstance(integration, HubotIntegration):\n context['hubot_docs_url'] = integration.hubot_docs_url\n if isinstance(integration, EmailIntegration):\n context['email_gateway_example'] = settings.EMAIL_GATEWAY_EXAMPLE\n\n doc_html_str = render_markdown_path(integration.doc, context)\n\n return HttpResponse(doc_html_str)\n", "path": "zerver/views/documentation.py"}]}
2,942
144
gh_patches_debug_5172
rasdani/github-patches
git_diff
yt-project__yt-4776
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- BUG: modifications through matplotlib engine cannot be properly displayed <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.--> <!--You can feel free to delete the sections that do not apply.--> ### Bug report **Bug summary** The Modifications through Matplotlib engine cannot be properly displayed. Taking the following code for example, the expected modifications can only be shown by the containing matplotlib figure object like `fig.savefig("sloshing.png")`. **Code for reproduction** adapted from [docs](https://yt-project.org/docs/dev/cookbook/simple_plots.html#accessing-and-modifying-plots-directly) (also broken there) ```python import numpy as np import yt # Load the dataset. ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # Create a slice object slc = yt.SlicePlot(ds, "x", ("gas", "density"), width=(800.0, "kpc")) # Get a reference to the matplotlib axes object for the plot ax = slc.plots[("gas", "density")].axes # Let's adjust the x axis tick labels for label in ax.xaxis.get_ticklabels(): label.set_color("red") label.set_fontsize(16) # Get a reference to the matplotlib figure object for the plot fig = slc.plots[("gas", "density")].figure # And create a mini-panel of a gaussian histogram inside the plot rect = (0.2, 0.2, 0.2, 0.2) new_ax = fig.add_axes(rect) n, bins, patches = new_ax.hist( np.random.randn(1000) + 20, 50, facecolor="black", edgecolor="black" ) # Make sure its visible new_ax.tick_params(colors="white") # And label it la = new_ax.set_xlabel("Dinosaurs per furlong") la.set_color("white") slc.save() ``` **Actual outcome** ![iTerm2 1b3woO sloshing_nomag2_hdf5_plt_cnt_0150_Slice_x_density](https://github.com/yt-project/yt/assets/22034707/9bece844-afaa-4327-9660-05a4d009c137) **Expected outcome** The changes of the x-axis tick labels ![iTerm2 h1XCIV sloshing](https://github.com/yt-project/yt/assets/22034707/8178509e-88ef-4eeb-9d28-ce3438ffd900) **Version Information** * Operating System: MacOS 14.1.1 and Red Hat Enterprise Linux Server release 7.8 (Maipo) * Python Version: 3.9 * yt version: 4.2.1 and 4.3.0 <!--Please tell us how you installed yt and python e.g., from source, pip, conda. If you installed from conda, please specify which channel you used if not the default--> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `doc/source/cookbook/simple_slice_matplotlib_example.py` Content: ``` 1 import numpy as np 2 3 import yt 4 5 # Load the dataset. 
6 ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") 7 8 # Create a slice object 9 slc = yt.SlicePlot(ds, "x", ("gas", "density"), width=(800.0, "kpc")) 10 11 # Get a reference to the matplotlib axes object for the plot 12 ax = slc.plots[("gas", "density")].axes 13 14 # Let's adjust the x axis tick labels 15 for label in ax.xaxis.get_ticklabels(): 16 label.set_color("red") 17 label.set_fontsize(16) 18 19 # Get a reference to the matplotlib figure object for the plot 20 fig = slc.plots[("gas", "density")].figure 21 22 # And create a mini-panel of a gaussian histogram inside the plot 23 rect = (0.2, 0.2, 0.2, 0.2) 24 new_ax = fig.add_axes(rect) 25 26 n, bins, patches = new_ax.hist( 27 np.random.randn(1000) + 20, 50, facecolor="black", edgecolor="black" 28 ) 29 30 # Make sure its visible 31 new_ax.tick_params(colors="white") 32 33 # And label it 34 la = new_ax.set_xlabel("Dinosaurs per furlong") 35 la.set_color("white") 36 37 slc.save() 38 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/doc/source/cookbook/simple_slice_matplotlib_example.py b/doc/source/cookbook/simple_slice_matplotlib_example.py --- a/doc/source/cookbook/simple_slice_matplotlib_example.py +++ b/doc/source/cookbook/simple_slice_matplotlib_example.py @@ -8,6 +8,10 @@ # Create a slice object slc = yt.SlicePlot(ds, "x", ("gas", "density"), width=(800.0, "kpc")) +# Rendering should be performed explicitly *before* any modification is +# performed directly with matplotlib. +slc.render() + # Get a reference to the matplotlib axes object for the plot ax = slc.plots[("gas", "density")].axes
{"golden_diff": "diff --git a/doc/source/cookbook/simple_slice_matplotlib_example.py b/doc/source/cookbook/simple_slice_matplotlib_example.py\n--- a/doc/source/cookbook/simple_slice_matplotlib_example.py\n+++ b/doc/source/cookbook/simple_slice_matplotlib_example.py\n@@ -8,6 +8,10 @@\n # Create a slice object\n slc = yt.SlicePlot(ds, \"x\", (\"gas\", \"density\"), width=(800.0, \"kpc\"))\n \n+# Rendering should be performed explicitly *before* any modification is\n+# performed directly with matplotlib.\n+slc.render()\n+\n # Get a reference to the matplotlib axes object for the plot\n ax = slc.plots[(\"gas\", \"density\")].axes\n", "issue": "BUG: modifications through matplotlib engine cannot be properly displayed\n<!--To help us understand and resolve your issue, please fill out the form to\r\nthe best of your ability.-->\r\n<!--You can feel free to delete the sections that do not apply.-->\r\n\r\n### Bug report\r\n\r\n**Bug summary**\r\n\r\nThe Modifications through Matplotlib engine cannot be properly displayed. \r\n\r\nTaking the following code for example, the expected modifications can only be shown by the containing matplotlib figure object like `fig.savefig(\"sloshing.png\")`. \r\n\r\n**Code for reproduction**\r\n\r\nadapted from [docs](https://yt-project.org/docs/dev/cookbook/simple_plots.html#accessing-and-modifying-plots-directly) (also broken there)\r\n\r\n```python\r\nimport numpy as np\r\n\r\nimport yt\r\n\r\n# Load the dataset.\r\nds = yt.load(\"GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150\")\r\n\r\n# Create a slice object\r\nslc = yt.SlicePlot(ds, \"x\", (\"gas\", \"density\"), width=(800.0, \"kpc\"))\r\n\r\n# Get a reference to the matplotlib axes object for the plot\r\nax = slc.plots[(\"gas\", \"density\")].axes\r\n\r\n# Let's adjust the x axis tick labels\r\nfor label in ax.xaxis.get_ticklabels():\r\n label.set_color(\"red\")\r\n label.set_fontsize(16)\r\n\r\n# Get a reference to the matplotlib figure object for the plot\r\nfig = slc.plots[(\"gas\", \"density\")].figure\r\n\r\n# And create a mini-panel of a gaussian histogram inside the plot\r\nrect = (0.2, 0.2, 0.2, 0.2)\r\nnew_ax = fig.add_axes(rect)\r\n\r\nn, bins, patches = new_ax.hist(\r\n np.random.randn(1000) + 20, 50, facecolor=\"black\", edgecolor=\"black\"\r\n)\r\n\r\n# Make sure its visible\r\nnew_ax.tick_params(colors=\"white\")\r\n\r\n# And label it\r\nla = new_ax.set_xlabel(\"Dinosaurs per furlong\")\r\nla.set_color(\"white\")\r\n\r\nslc.save()\r\n```\r\n\r\n**Actual outcome**\r\n\r\n![iTerm2 1b3woO sloshing_nomag2_hdf5_plt_cnt_0150_Slice_x_density](https://github.com/yt-project/yt/assets/22034707/9bece844-afaa-4327-9660-05a4d009c137)\r\n\r\n**Expected outcome**\r\n\r\nThe changes of the x-axis tick labels\r\n![iTerm2 h1XCIV sloshing](https://github.com/yt-project/yt/assets/22034707/8178509e-88ef-4eeb-9d28-ce3438ffd900)\r\n\r\n**Version Information**\r\n * Operating System: MacOS 14.1.1 and Red Hat Enterprise Linux Server release 7.8 (Maipo)\r\n * Python Version: 3.9\r\n * yt version: 4.2.1 and 4.3.0\r\n\r\n<!--Please tell us how you installed yt and python e.g., from source,\r\npip, conda. 
If you installed from conda, please specify which channel you used\r\nif not the default-->\r\n\n", "before_files": [{"content": "import numpy as np\n\nimport yt\n\n# Load the dataset.\nds = yt.load(\"GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150\")\n\n# Create a slice object\nslc = yt.SlicePlot(ds, \"x\", (\"gas\", \"density\"), width=(800.0, \"kpc\"))\n\n# Get a reference to the matplotlib axes object for the plot\nax = slc.plots[(\"gas\", \"density\")].axes\n\n# Let's adjust the x axis tick labels\nfor label in ax.xaxis.get_ticklabels():\n label.set_color(\"red\")\n label.set_fontsize(16)\n\n# Get a reference to the matplotlib figure object for the plot\nfig = slc.plots[(\"gas\", \"density\")].figure\n\n# And create a mini-panel of a gaussian histogram inside the plot\nrect = (0.2, 0.2, 0.2, 0.2)\nnew_ax = fig.add_axes(rect)\n\nn, bins, patches = new_ax.hist(\n np.random.randn(1000) + 20, 50, facecolor=\"black\", edgecolor=\"black\"\n)\n\n# Make sure its visible\nnew_ax.tick_params(colors=\"white\")\n\n# And label it\nla = new_ax.set_xlabel(\"Dinosaurs per furlong\")\nla.set_color(\"white\")\n\nslc.save()\n", "path": "doc/source/cookbook/simple_slice_matplotlib_example.py"}], "after_files": [{"content": "import numpy as np\n\nimport yt\n\n# Load the dataset.\nds = yt.load(\"GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150\")\n\n# Create a slice object\nslc = yt.SlicePlot(ds, \"x\", (\"gas\", \"density\"), width=(800.0, \"kpc\"))\n\n# Rendering should be performed explicitly *before* any modification is\n# performed directly with matplotlib.\nslc.render()\n\n# Get a reference to the matplotlib axes object for the plot\nax = slc.plots[(\"gas\", \"density\")].axes\n\n# Let's adjust the x axis tick labels\nfor label in ax.xaxis.get_ticklabels():\n label.set_color(\"red\")\n label.set_fontsize(16)\n\n# Get a reference to the matplotlib figure object for the plot\nfig = slc.plots[(\"gas\", \"density\")].figure\n\n# And create a mini-panel of a gaussian histogram inside the plot\nrect = (0.2, 0.2, 0.2, 0.2)\nnew_ax = fig.add_axes(rect)\n\nn, bins, patches = new_ax.hist(\n np.random.randn(1000) + 20, 50, facecolor=\"black\", edgecolor=\"black\"\n)\n\n# Make sure its visible\nnew_ax.tick_params(colors=\"white\")\n\n# And label it\nla = new_ax.set_xlabel(\"Dinosaurs per furlong\")\nla.set_color(\"white\")\n\nslc.save()\n", "path": "doc/source/cookbook/simple_slice_matplotlib_example.py"}]}
1,360
157
gh_patches_debug_32427
rasdani/github-patches
git_diff
HypothesisWorks__hypothesis-2356
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- pytest function scoped fixtures should run once per example, not once per test Because of the way Hypothesis just exposes a single test function, pytest just runs its fixtures once for that function. This is a long running problem with Hypothesis's interaction with pytest and there is [an open bug about it on the pytest side too](https://github.com/pytest-dev/pytest/issues/916). Fixing this is tricky. The [fixtures-integration](https://github.com/HypothesisWorks/hypothesis-python/tree/fixtures-integration) branch has a partial prototype which makes _some_ progress towards making it work, but it's very far from complete and has a number of issues which are likely to need substantial work to fix. I [am currently seeking funding to work on this ticket](http://hypothesis.works/articles/pytest-integration-sponsorship/) and it is unlikely to make much progress without said funding. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `hypothesis-python/src/hypothesis/extra/pytestplugin.py` Content: ``` 1 # This file is part of Hypothesis, which may be found at 2 # https://github.com/HypothesisWorks/hypothesis/ 3 # 4 # Most of this work is copyright (C) 2013-2020 David R. MacIver 5 # ([email protected]), but it contains contributions by others. See 6 # CONTRIBUTING.rst for a full list of people who may hold copyright, and 7 # consult the git log if you need to determine who owns an individual 8 # contribution. 9 # 10 # This Source Code Form is subject to the terms of the Mozilla Public License, 11 # v. 2.0. If a copy of the MPL was not distributed with this file, You can 12 # obtain one at https://mozilla.org/MPL/2.0/. 13 # 14 # END HEADER 15 16 from distutils.version import LooseVersion 17 18 import pytest 19 20 from hypothesis import Verbosity, core, settings 21 from hypothesis.errors import InvalidArgument 22 from hypothesis.internal.detection import is_hypothesis_test 23 from hypothesis.reporting import default as default_reporter, with_reporter 24 from hypothesis.statistics import collector 25 26 LOAD_PROFILE_OPTION = "--hypothesis-profile" 27 VERBOSITY_OPTION = "--hypothesis-verbosity" 28 PRINT_STATISTICS_OPTION = "--hypothesis-show-statistics" 29 SEED_OPTION = "--hypothesis-seed" 30 31 32 class StoringReporter: 33 def __init__(self, config): 34 self.config = config 35 self.results = [] 36 37 def __call__(self, msg): 38 if self.config.getoption("capture", "fd") == "no": 39 default_reporter(msg) 40 if not isinstance(msg, str): 41 msg = repr(msg) 42 self.results.append(msg) 43 44 45 if LooseVersion(pytest.__version__) < "4.3": # pragma: no cover 46 import warnings 47 from hypothesis.errors import HypothesisWarning 48 49 PYTEST_TOO_OLD_MESSAGE = """ 50 You are using Pytest version %s. Hypothesis tests work with any test 51 runner, but our Pytest plugin requires Pytest 4.3 or newer. 52 Note that the Pytest developers no longer support this version either! 53 Disabling the Hypothesis pytest plugin... 
54 """ 55 warnings.warn(PYTEST_TOO_OLD_MESSAGE % (pytest.__version__,), HypothesisWarning) 56 57 else: 58 59 def pytest_addoption(parser): 60 group = parser.getgroup("hypothesis", "Hypothesis") 61 group.addoption( 62 LOAD_PROFILE_OPTION, 63 action="store", 64 help="Load in a registered hypothesis.settings profile", 65 ) 66 group.addoption( 67 VERBOSITY_OPTION, 68 action="store", 69 choices=[opt.name for opt in Verbosity], 70 help="Override profile with verbosity setting specified", 71 ) 72 group.addoption( 73 PRINT_STATISTICS_OPTION, 74 action="store_true", 75 help="Configure when statistics are printed", 76 default=False, 77 ) 78 group.addoption( 79 SEED_OPTION, 80 action="store", 81 help="Set a seed to use for all Hypothesis tests", 82 ) 83 84 def pytest_report_header(config): 85 profile = config.getoption(LOAD_PROFILE_OPTION) 86 if not profile: 87 profile = settings._current_profile 88 settings_str = settings.get_profile(profile).show_changed() 89 if settings_str != "": 90 settings_str = " -> %s" % (settings_str) 91 if ( 92 config.option.verbose >= 1 93 or settings.default.verbosity >= Verbosity.verbose 94 ): 95 return "hypothesis profile %r%s" % (profile, settings_str) 96 97 def pytest_configure(config): 98 core.running_under_pytest = True 99 profile = config.getoption(LOAD_PROFILE_OPTION) 100 if profile: 101 settings.load_profile(profile) 102 verbosity_name = config.getoption(VERBOSITY_OPTION) 103 if verbosity_name: 104 verbosity_value = Verbosity[verbosity_name] 105 profile_name = "%s-with-%s-verbosity" % ( 106 settings._current_profile, 107 verbosity_name, 108 ) 109 # register_profile creates a new profile, exactly like the current one, 110 # with the extra values given (in this case 'verbosity') 111 settings.register_profile(profile_name, verbosity=verbosity_value) 112 settings.load_profile(profile_name) 113 seed = config.getoption(SEED_OPTION) 114 if seed is not None: 115 try: 116 seed = int(seed) 117 except ValueError: 118 pass 119 core.global_force_seed = seed 120 config.addinivalue_line("markers", "hypothesis: Tests which use hypothesis.") 121 122 @pytest.hookimpl(hookwrapper=True) 123 def pytest_runtest_call(item): 124 if not hasattr(item, "obj"): 125 yield 126 elif not is_hypothesis_test(item.obj): 127 # If @given was not applied, check whether other hypothesis 128 # decorators were applied, and raise an error if they were. 129 if getattr(item.obj, "is_hypothesis_strategy_function", False): 130 raise InvalidArgument( 131 "%s is a function that returns a Hypothesis strategy, but pytest " 132 "has collected it as a test function. This is useless as the " 133 "function body will never be executed. To define a test " 134 "function, use @given instead of @composite." % (item.nodeid,) 135 ) 136 message = "Using `@%s` on a test without `@given` is completely pointless." 
137 for name, attribute in [ 138 ("example", "hypothesis_explicit_examples"), 139 ("seed", "_hypothesis_internal_use_seed"), 140 ("settings", "_hypothesis_internal_settings_applied"), 141 ("reproduce_example", "_hypothesis_internal_use_reproduce_failure"), 142 ]: 143 if hasattr(item.obj, attribute): 144 raise InvalidArgument(message % (name,)) 145 yield 146 else: 147 if item.get_closest_marker("parametrize") is not None: 148 # Give every parametrized test invocation a unique database key 149 key = item.nodeid.encode("utf-8") 150 item.obj.hypothesis.inner_test._hypothesis_internal_add_digest = key 151 152 store = StoringReporter(item.config) 153 154 def note_statistics(stats): 155 lines = [item.nodeid + ":", ""] + stats.get_description() + [""] 156 item.hypothesis_statistics = lines 157 158 with collector.with_value(note_statistics): 159 with with_reporter(store): 160 yield 161 if store.results: 162 item.hypothesis_report_information = list(store.results) 163 164 @pytest.hookimpl(hookwrapper=True) 165 def pytest_runtest_makereport(item, call): 166 report = (yield).get_result() 167 if hasattr(item, "hypothesis_report_information"): 168 report.sections.append( 169 ("Hypothesis", "\n".join(item.hypothesis_report_information)) 170 ) 171 if hasattr(item, "hypothesis_statistics") and report.when == "teardown": 172 val = ("hypothesis-stats", item.hypothesis_statistics) 173 report.user_properties.append(val) 174 175 def pytest_terminal_summary(terminalreporter): 176 if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION): 177 return 178 terminalreporter.section("Hypothesis Statistics") 179 # terminalreporter.stats is a dict, where the empty string appears to 180 # always be the key for a list of _pytest.reports.TestReport objects 181 # (where we stored the statistics data in pytest_runtest_makereport above) 182 for test_report in terminalreporter.stats.get("", []): 183 for name, lines in test_report.user_properties: 184 if name == "hypothesis-stats" and test_report.when == "teardown": 185 for li in lines: 186 terminalreporter.write_line(li) 187 188 def pytest_collection_modifyitems(items): 189 for item in items: 190 if isinstance(item, pytest.Function) and is_hypothesis_test(item.obj): 191 item.add_marker("hypothesis") 192 193 194 def load(): 195 """Required for `pluggy` to load a plugin from setuptools entrypoints.""" 196 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/hypothesis-python/src/hypothesis/extra/pytestplugin.py b/hypothesis-python/src/hypothesis/extra/pytestplugin.py --- a/hypothesis-python/src/hypothesis/extra/pytestplugin.py +++ b/hypothesis-python/src/hypothesis/extra/pytestplugin.py @@ -14,10 +14,12 @@ # END HEADER from distutils.version import LooseVersion +from inspect import signature import pytest from hypothesis import Verbosity, core, settings +from hypothesis._settings import note_deprecation from hypothesis.errors import InvalidArgument from hypothesis.internal.detection import is_hypothesis_test from hypothesis.reporting import default as default_reporter, with_reporter @@ -144,6 +146,25 @@ raise InvalidArgument(message % (name,)) yield else: + # Warn about function-scoped fixtures, excluding autouse fixtures because + # the advice is probably not actionable and the status quo seems OK... + # See https://github.com/HypothesisWorks/hypothesis/issues/377 for detail. + argnames = None + for fx_defs in item._request._fixturemanager.getfixtureinfo( + node=item, func=item.function, cls=None + ).name2fixturedefs.values(): + if argnames is None: + argnames = frozenset(signature(item.function).parameters) + for fx in fx_defs: + if fx.scope == "function" and fx.argname in argnames: + note_deprecation( + "%s uses the %r fixture, but function-scoped fixtures " + "should not be used with @given(...) tests, because " + "fixtures are not reset between generated examples!" + % (item.nodeid, fx.argname), + since="RELEASEDAY", + ) + if item.get_closest_marker("parametrize") is not None: # Give every parametrized test invocation a unique database key key = item.nodeid.encode("utf-8")
{"golden_diff": "diff --git a/hypothesis-python/src/hypothesis/extra/pytestplugin.py b/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n--- a/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n+++ b/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n@@ -14,10 +14,12 @@\n # END HEADER\n \n from distutils.version import LooseVersion\n+from inspect import signature\n \n import pytest\n \n from hypothesis import Verbosity, core, settings\n+from hypothesis._settings import note_deprecation\n from hypothesis.errors import InvalidArgument\n from hypothesis.internal.detection import is_hypothesis_test\n from hypothesis.reporting import default as default_reporter, with_reporter\n@@ -144,6 +146,25 @@\n raise InvalidArgument(message % (name,))\n yield\n else:\n+ # Warn about function-scoped fixtures, excluding autouse fixtures because\n+ # the advice is probably not actionable and the status quo seems OK...\n+ # See https://github.com/HypothesisWorks/hypothesis/issues/377 for detail.\n+ argnames = None\n+ for fx_defs in item._request._fixturemanager.getfixtureinfo(\n+ node=item, func=item.function, cls=None\n+ ).name2fixturedefs.values():\n+ if argnames is None:\n+ argnames = frozenset(signature(item.function).parameters)\n+ for fx in fx_defs:\n+ if fx.scope == \"function\" and fx.argname in argnames:\n+ note_deprecation(\n+ \"%s uses the %r fixture, but function-scoped fixtures \"\n+ \"should not be used with @given(...) tests, because \"\n+ \"fixtures are not reset between generated examples!\"\n+ % (item.nodeid, fx.argname),\n+ since=\"RELEASEDAY\",\n+ )\n+\n if item.get_closest_marker(\"parametrize\") is not None:\n # Give every parametrized test invocation a unique database key\n key = item.nodeid.encode(\"utf-8\")\n", "issue": "pytest function scoped fixtures should run once per example, not once per test\nBecause of the way Hypothesis just exposes a single test function, pytest just runs its fixtures once for that function.\n\nThis is a long running problem with Hypothesis's interaction with pytest and there is [an open bug about it on the pytest side too](https://github.com/pytest-dev/pytest/issues/916).\n\nFixing this is tricky. The [fixtures-integration](https://github.com/HypothesisWorks/hypothesis-python/tree/fixtures-integration) branch has a partial prototype which makes _some_ progress towards making it work, but it's very far from complete and has a number of issues which are likely to need substantial work to fix.\n\nI [am currently seeking funding to work on this ticket](http://hypothesis.works/articles/pytest-integration-sponsorship/) and it is unlikely to make much progress without said funding.\n\n", "before_files": [{"content": "# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis/\n#\n# Most of this work is copyright (C) 2013-2020 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at https://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom distutils.version import LooseVersion\n\nimport pytest\n\nfrom hypothesis import Verbosity, core, settings\nfrom hypothesis.errors import InvalidArgument\nfrom hypothesis.internal.detection import is_hypothesis_test\nfrom hypothesis.reporting import default as default_reporter, with_reporter\nfrom hypothesis.statistics import collector\n\nLOAD_PROFILE_OPTION = \"--hypothesis-profile\"\nVERBOSITY_OPTION = \"--hypothesis-verbosity\"\nPRINT_STATISTICS_OPTION = \"--hypothesis-show-statistics\"\nSEED_OPTION = \"--hypothesis-seed\"\n\n\nclass StoringReporter:\n def __init__(self, config):\n self.config = config\n self.results = []\n\n def __call__(self, msg):\n if self.config.getoption(\"capture\", \"fd\") == \"no\":\n default_reporter(msg)\n if not isinstance(msg, str):\n msg = repr(msg)\n self.results.append(msg)\n\n\nif LooseVersion(pytest.__version__) < \"4.3\": # pragma: no cover\n import warnings\n from hypothesis.errors import HypothesisWarning\n\n PYTEST_TOO_OLD_MESSAGE = \"\"\"\n You are using Pytest version %s. Hypothesis tests work with any test\n runner, but our Pytest plugin requires Pytest 4.3 or newer.\n Note that the Pytest developers no longer support this version either!\n Disabling the Hypothesis pytest plugin...\n \"\"\"\n warnings.warn(PYTEST_TOO_OLD_MESSAGE % (pytest.__version__,), HypothesisWarning)\n\nelse:\n\n def pytest_addoption(parser):\n group = parser.getgroup(\"hypothesis\", \"Hypothesis\")\n group.addoption(\n LOAD_PROFILE_OPTION,\n action=\"store\",\n help=\"Load in a registered hypothesis.settings profile\",\n )\n group.addoption(\n VERBOSITY_OPTION,\n action=\"store\",\n choices=[opt.name for opt in Verbosity],\n help=\"Override profile with verbosity setting specified\",\n )\n group.addoption(\n PRINT_STATISTICS_OPTION,\n action=\"store_true\",\n help=\"Configure when statistics are printed\",\n default=False,\n )\n group.addoption(\n SEED_OPTION,\n action=\"store\",\n help=\"Set a seed to use for all Hypothesis tests\",\n )\n\n def pytest_report_header(config):\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if not profile:\n profile = settings._current_profile\n settings_str = settings.get_profile(profile).show_changed()\n if settings_str != \"\":\n settings_str = \" -> %s\" % (settings_str)\n if (\n config.option.verbose >= 1\n or settings.default.verbosity >= Verbosity.verbose\n ):\n return \"hypothesis profile %r%s\" % (profile, settings_str)\n\n def pytest_configure(config):\n core.running_under_pytest = True\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if profile:\n settings.load_profile(profile)\n verbosity_name = config.getoption(VERBOSITY_OPTION)\n if verbosity_name:\n verbosity_value = Verbosity[verbosity_name]\n profile_name = \"%s-with-%s-verbosity\" % (\n settings._current_profile,\n verbosity_name,\n )\n # register_profile creates a new profile, exactly like the current one,\n # with the extra values given (in this case 'verbosity')\n settings.register_profile(profile_name, verbosity=verbosity_value)\n settings.load_profile(profile_name)\n seed = config.getoption(SEED_OPTION)\n if seed is not None:\n try:\n seed = int(seed)\n except ValueError:\n pass\n core.global_force_seed = seed\n config.addinivalue_line(\"markers\", \"hypothesis: Tests which use hypothesis.\")\n\n @pytest.hookimpl(hookwrapper=True)\n def pytest_runtest_call(item):\n if not hasattr(item, \"obj\"):\n yield\n elif not 
is_hypothesis_test(item.obj):\n # If @given was not applied, check whether other hypothesis\n # decorators were applied, and raise an error if they were.\n if getattr(item.obj, \"is_hypothesis_strategy_function\", False):\n raise InvalidArgument(\n \"%s is a function that returns a Hypothesis strategy, but pytest \"\n \"has collected it as a test function. This is useless as the \"\n \"function body will never be executed. To define a test \"\n \"function, use @given instead of @composite.\" % (item.nodeid,)\n )\n message = \"Using `@%s` on a test without `@given` is completely pointless.\"\n for name, attribute in [\n (\"example\", \"hypothesis_explicit_examples\"),\n (\"seed\", \"_hypothesis_internal_use_seed\"),\n (\"settings\", \"_hypothesis_internal_settings_applied\"),\n (\"reproduce_example\", \"_hypothesis_internal_use_reproduce_failure\"),\n ]:\n if hasattr(item.obj, attribute):\n raise InvalidArgument(message % (name,))\n yield\n else:\n if item.get_closest_marker(\"parametrize\") is not None:\n # Give every parametrized test invocation a unique database key\n key = item.nodeid.encode(\"utf-8\")\n item.obj.hypothesis.inner_test._hypothesis_internal_add_digest = key\n\n store = StoringReporter(item.config)\n\n def note_statistics(stats):\n lines = [item.nodeid + \":\", \"\"] + stats.get_description() + [\"\"]\n item.hypothesis_statistics = lines\n\n with collector.with_value(note_statistics):\n with with_reporter(store):\n yield\n if store.results:\n item.hypothesis_report_information = list(store.results)\n\n @pytest.hookimpl(hookwrapper=True)\n def pytest_runtest_makereport(item, call):\n report = (yield).get_result()\n if hasattr(item, \"hypothesis_report_information\"):\n report.sections.append(\n (\"Hypothesis\", \"\\n\".join(item.hypothesis_report_information))\n )\n if hasattr(item, \"hypothesis_statistics\") and report.when == \"teardown\":\n val = (\"hypothesis-stats\", item.hypothesis_statistics)\n report.user_properties.append(val)\n\n def pytest_terminal_summary(terminalreporter):\n if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION):\n return\n terminalreporter.section(\"Hypothesis Statistics\")\n # terminalreporter.stats is a dict, where the empty string appears to\n # always be the key for a list of _pytest.reports.TestReport objects\n # (where we stored the statistics data in pytest_runtest_makereport above)\n for test_report in terminalreporter.stats.get(\"\", []):\n for name, lines in test_report.user_properties:\n if name == \"hypothesis-stats\" and test_report.when == \"teardown\":\n for li in lines:\n terminalreporter.write_line(li)\n\n def pytest_collection_modifyitems(items):\n for item in items:\n if isinstance(item, pytest.Function) and is_hypothesis_test(item.obj):\n item.add_marker(\"hypothesis\")\n\n\ndef load():\n \"\"\"Required for `pluggy` to load a plugin from setuptools entrypoints.\"\"\"\n", "path": "hypothesis-python/src/hypothesis/extra/pytestplugin.py"}], "after_files": [{"content": "# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis/\n#\n# Most of this work is copyright (C) 2013-2020 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at https://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom distutils.version import LooseVersion\nfrom inspect import signature\n\nimport pytest\n\nfrom hypothesis import Verbosity, core, settings\nfrom hypothesis._settings import note_deprecation\nfrom hypothesis.errors import InvalidArgument\nfrom hypothesis.internal.detection import is_hypothesis_test\nfrom hypothesis.reporting import default as default_reporter, with_reporter\nfrom hypothesis.statistics import collector\n\nLOAD_PROFILE_OPTION = \"--hypothesis-profile\"\nVERBOSITY_OPTION = \"--hypothesis-verbosity\"\nPRINT_STATISTICS_OPTION = \"--hypothesis-show-statistics\"\nSEED_OPTION = \"--hypothesis-seed\"\n\n\nclass StoringReporter:\n def __init__(self, config):\n self.config = config\n self.results = []\n\n def __call__(self, msg):\n if self.config.getoption(\"capture\", \"fd\") == \"no\":\n default_reporter(msg)\n if not isinstance(msg, str):\n msg = repr(msg)\n self.results.append(msg)\n\n\nif LooseVersion(pytest.__version__) < \"4.3\": # pragma: no cover\n import warnings\n from hypothesis.errors import HypothesisWarning\n\n PYTEST_TOO_OLD_MESSAGE = \"\"\"\n You are using Pytest version %s. Hypothesis tests work with any test\n runner, but our Pytest plugin requires Pytest 4.3 or newer.\n Note that the Pytest developers no longer support this version either!\n Disabling the Hypothesis pytest plugin...\n \"\"\"\n warnings.warn(PYTEST_TOO_OLD_MESSAGE % (pytest.__version__,), HypothesisWarning)\n\nelse:\n\n def pytest_addoption(parser):\n group = parser.getgroup(\"hypothesis\", \"Hypothesis\")\n group.addoption(\n LOAD_PROFILE_OPTION,\n action=\"store\",\n help=\"Load in a registered hypothesis.settings profile\",\n )\n group.addoption(\n VERBOSITY_OPTION,\n action=\"store\",\n choices=[opt.name for opt in Verbosity],\n help=\"Override profile with verbosity setting specified\",\n )\n group.addoption(\n PRINT_STATISTICS_OPTION,\n action=\"store_true\",\n help=\"Configure when statistics are printed\",\n default=False,\n )\n group.addoption(\n SEED_OPTION,\n action=\"store\",\n help=\"Set a seed to use for all Hypothesis tests\",\n )\n\n def pytest_report_header(config):\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if not profile:\n profile = settings._current_profile\n settings_str = settings.get_profile(profile).show_changed()\n if settings_str != \"\":\n settings_str = \" -> %s\" % (settings_str)\n if (\n config.option.verbose >= 1\n or settings.default.verbosity >= Verbosity.verbose\n ):\n return \"hypothesis profile %r%s\" % (profile, settings_str)\n\n def pytest_configure(config):\n core.running_under_pytest = True\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if profile:\n settings.load_profile(profile)\n verbosity_name = config.getoption(VERBOSITY_OPTION)\n if verbosity_name:\n verbosity_value = Verbosity[verbosity_name]\n profile_name = \"%s-with-%s-verbosity\" % (\n settings._current_profile,\n verbosity_name,\n )\n # register_profile creates a new profile, exactly like the current one,\n # with the extra values given (in this case 'verbosity')\n settings.register_profile(profile_name, verbosity=verbosity_value)\n settings.load_profile(profile_name)\n seed = config.getoption(SEED_OPTION)\n if seed is not None:\n try:\n seed = int(seed)\n except ValueError:\n pass\n core.global_force_seed = seed\n config.addinivalue_line(\"markers\", \"hypothesis: Tests which use hypothesis.\")\n\n @pytest.hookimpl(hookwrapper=True)\n def 
pytest_runtest_call(item):\n if not hasattr(item, \"obj\"):\n yield\n elif not is_hypothesis_test(item.obj):\n # If @given was not applied, check whether other hypothesis\n # decorators were applied, and raise an error if they were.\n if getattr(item.obj, \"is_hypothesis_strategy_function\", False):\n raise InvalidArgument(\n \"%s is a function that returns a Hypothesis strategy, but pytest \"\n \"has collected it as a test function. This is useless as the \"\n \"function body will never be executed. To define a test \"\n \"function, use @given instead of @composite.\" % (item.nodeid,)\n )\n message = \"Using `@%s` on a test without `@given` is completely pointless.\"\n for name, attribute in [\n (\"example\", \"hypothesis_explicit_examples\"),\n (\"seed\", \"_hypothesis_internal_use_seed\"),\n (\"settings\", \"_hypothesis_internal_settings_applied\"),\n (\"reproduce_example\", \"_hypothesis_internal_use_reproduce_failure\"),\n ]:\n if hasattr(item.obj, attribute):\n raise InvalidArgument(message % (name,))\n yield\n else:\n # Warn about function-scoped fixtures, excluding autouse fixtures because\n # the advice is probably not actionable and the status quo seems OK...\n # See https://github.com/HypothesisWorks/hypothesis/issues/377 for detail.\n argnames = None\n for fx_defs in item._request._fixturemanager.getfixtureinfo(\n node=item, func=item.function, cls=None\n ).name2fixturedefs.values():\n if argnames is None:\n argnames = frozenset(signature(item.function).parameters)\n for fx in fx_defs:\n if fx.scope == \"function\" and fx.argname in argnames:\n note_deprecation(\n \"%s uses the %r fixture, but function-scoped fixtures \"\n \"should not be used with @given(...) tests, because \"\n \"fixtures are not reset between generated examples!\"\n % (item.nodeid, fx.argname),\n since=\"RELEASEDAY\",\n )\n\n if item.get_closest_marker(\"parametrize\") is not None:\n # Give every parametrized test invocation a unique database key\n key = item.nodeid.encode(\"utf-8\")\n item.obj.hypothesis.inner_test._hypothesis_internal_add_digest = key\n\n store = StoringReporter(item.config)\n\n def note_statistics(stats):\n lines = [item.nodeid + \":\", \"\"] + stats.get_description() + [\"\"]\n item.hypothesis_statistics = lines\n\n with collector.with_value(note_statistics):\n with with_reporter(store):\n yield\n if store.results:\n item.hypothesis_report_information = list(store.results)\n\n @pytest.hookimpl(hookwrapper=True)\n def pytest_runtest_makereport(item, call):\n report = (yield).get_result()\n if hasattr(item, \"hypothesis_report_information\"):\n report.sections.append(\n (\"Hypothesis\", \"\\n\".join(item.hypothesis_report_information))\n )\n if hasattr(item, \"hypothesis_statistics\") and report.when == \"teardown\":\n val = (\"hypothesis-stats\", item.hypothesis_statistics)\n report.user_properties.append(val)\n\n def pytest_terminal_summary(terminalreporter):\n if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION):\n return\n terminalreporter.section(\"Hypothesis Statistics\")\n # terminalreporter.stats is a dict, where the empty string appears to\n # always be the key for a list of _pytest.reports.TestReport objects\n # (where we stored the statistics data in pytest_runtest_makereport above)\n for test_report in terminalreporter.stats.get(\"\", []):\n for name, lines in test_report.user_properties:\n if name == \"hypothesis-stats\" and test_report.when == \"teardown\":\n for li in lines:\n terminalreporter.write_line(li)\n\n def pytest_collection_modifyitems(items):\n for 
item in items:\n if isinstance(item, pytest.Function) and is_hypothesis_test(item.obj):\n item.add_marker(\"hypothesis\")\n\n\ndef load():\n \"\"\"Required for `pluggy` to load a plugin from setuptools entrypoints.\"\"\"\n", "path": "hypothesis-python/src/hypothesis/extra/pytestplugin.py"}]}
2,635
444
gh_patches_debug_30981
rasdani/github-patches
git_diff
mosaicml__composer-298
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- TQDM spams lines during validation when running trainer.fit() in notebook ** Environment ** Colab notebook. <img width="607" alt="TQDM Spam" src="https://user-images.githubusercontent.com/11728034/151625792-31dc2c0e-d572-43d1-9514-5127a6a3751b.png"> ** To reproduce run `trainer.fit()` in a Colab notebook. See [this notebook](https://colab.research.google.com/drive/12Dl0NVDaj4tf4gfpfg-rkIAoO_H7edo3?usp=sharing) for an example. ## Expected behavior TQDM output should remain on a single line during evaluation, which happens during training. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `composer/loggers/tqdm_logger.py` Content: ``` 1 # Copyright 2021 MosaicML. All Rights Reserved. 2 3 from __future__ import annotations 4 5 import sys 6 from dataclasses import asdict, dataclass 7 from typing import TYPE_CHECKING, Any, Dict, List, Optional 8 9 import tqdm 10 import yaml 11 12 from composer.core.logging import LogLevel, TLogData, TLogDataValue, format_log_data_value 13 from composer.core.logging.base_backend import BaseLoggerBackend 14 from composer.core.state import State 15 from composer.core.types import StateDict 16 from composer.utils import dist 17 18 if TYPE_CHECKING: 19 from composer.core.logging import Logger 20 21 _IS_TRAIN_TO_KEYS_TO_LOG = {True: ['loss/train'], False: ['accuracy/val']} 22 23 24 @dataclass 25 class _TQDMLoggerInstanceState: 26 total: Optional[int] 27 description: str 28 position: int 29 keys_to_log: List[str] 30 n: int 31 epoch_metrics: Dict[str, TLogDataValue] 32 33 34 class _TQDMLoggerInstance: 35 36 def __init__(self, state: _TQDMLoggerInstanceState) -> None: 37 self.state = state 38 self.pbar = tqdm.tqdm(total=state.total, 39 desc=state.description, 40 position=state.position, 41 bar_format="{l_bar}{bar:10}{r_bar}{bar:-10b}") 42 self.pbar.set_postfix(state.epoch_metrics) 43 44 def log_metric(self, data: TLogData): 45 formatted_data = {k: format_log_data_value(v) for (k, v) in data.items() if k in self.state.keys_to_log} 46 self.state.epoch_metrics.update(formatted_data) 47 self.pbar.set_postfix(self.state.epoch_metrics) 48 49 def update(self): 50 self.pbar.update() 51 self.state.n = self.pbar.n 52 53 def close(self): 54 self.pbar.close() 55 56 def state_dict(self) -> StateDict: 57 return asdict(self.state) 58 59 60 class TQDMLoggerBackend(BaseLoggerBackend): 61 """Shows TQDM progress bars. 62 63 During training, the progress bar logs the batch and training loss. 64 During validation, the progress bar logs the batch and validation accuracy. 65 66 Example output:: 67 68 Epoch 1: 100%|██████████| 64/64 [00:01<00:00, 53.17it/s, loss/train=2.3023] 69 Epoch 1 (val): 100%|██████████| 20/20 [00:00<00:00, 100.96it/s, accuracy/val=0.0995] 70 71 .. note:: 72 73 It is currently not possible to show additional metrics. 74 Custom metrics for the TQDM progress bar will be supported in a future version. 75 76 Args: 77 config (dict or None, optional): 78 Trainer configuration. If provided, it is printed to the terminal as YAML. 
79 """ 80 81 def __init__(self, config: Optional[Dict[str, Any]] = None) -> None: 82 super().__init__() 83 self.pbars: Dict[bool, _TQDMLoggerInstance] = {} 84 self.is_train: Optional[bool] = None 85 self.config = config 86 87 def will_log(self, state: State, log_level: LogLevel) -> bool: 88 del state # Unused 89 return dist.get_global_rank() == 0 and log_level <= LogLevel.BATCH 90 91 def log_metric(self, epoch: int, step: int, log_level: LogLevel, data: TLogData) -> None: 92 del epoch, step, log_level # Unused 93 if self.is_train in self.pbars: 94 # Logging outside an epoch 95 assert self.is_train is not None 96 self.pbars[self.is_train].log_metric(data) 97 98 def init(self, state: State, logger: Logger) -> None: 99 del state, logger # unused 100 if self.config is not None: 101 print("Config") 102 print("-" * 30) 103 yaml.safe_dump(self.config, stream=sys.stdout) 104 print("-" * 30) 105 print() 106 107 def _start(self, state: State): 108 if dist.get_global_rank() != 0: 109 return 110 assert self.is_train is not None, "self.is_train should be set by the callback" 111 # TODO(anis) -- in #120, len(state.eval_dataloader) is inaccurate, as it does not incorporate 112 # trainer._eval_subset_num_batches. The evaluator spec should fix this. 113 total_steps = state.steps_per_epoch if self.is_train else len(state.eval_dataloader) 114 desc = f'Epoch {int(state.timer.epoch)}' 115 position = 0 if self.is_train else 1 116 if not self.is_train: 117 desc += f", Batch {int(state.timer.batch)} (val)" 118 self.pbars[self.is_train] = _TQDMLoggerInstance( 119 _TQDMLoggerInstanceState(total=total_steps, 120 position=position, 121 n=0, 122 keys_to_log=_IS_TRAIN_TO_KEYS_TO_LOG[self.is_train], 123 description=desc, 124 epoch_metrics={})) 125 126 def epoch_start(self, state: State, logger: Logger) -> None: 127 del logger # unused 128 if dist.get_global_rank() != 0: 129 return 130 self.is_train = True 131 self._start(state) 132 133 def eval_start(self, state: State, logger: Logger) -> None: 134 del logger # unused 135 if dist.get_global_rank() != 0: 136 return 137 self.is_train = False 138 self._start(state) 139 140 def _update(self): 141 if dist.get_global_rank() != 0: 142 return 143 if self.is_train in self.pbars: 144 assert self.is_train is not None 145 self.pbars[self.is_train].update() 146 147 def batch_end(self, state: State, logger: Logger) -> None: 148 del state, logger # unused 149 if dist.get_global_rank() != 0: 150 return 151 self._update() 152 153 def eval_after_forward(self, state: State, logger: Logger) -> None: 154 del state, logger # unused 155 if dist.get_global_rank() != 0: 156 return 157 self._update() 158 159 def _end(self): 160 if dist.get_global_rank() != 0: 161 return 162 if self.is_train in self.pbars: 163 assert self.is_train is not None 164 self.pbars[self.is_train].close() 165 del self.pbars[self.is_train] 166 self.is_train = None 167 168 def epoch_end(self, state: State, logger: Logger) -> None: 169 del state, logger # unused 170 if dist.get_global_rank() != 0: 171 return 172 self._end() 173 174 def eval_end(self, state: State, logger: Logger) -> None: 175 del state, logger # unused 176 if dist.get_global_rank() != 0: 177 return 178 self._end() 179 180 def state_dict(self) -> StateDict: 181 return { 182 "pbars": {k: v.state_dict() for (k, v) in self.pbars.items()}, 183 "is_train": self.is_train, 184 } 185 186 def load_state_dict(self, state: StateDict) -> None: 187 self.pbars = {k: _TQDMLoggerInstance(**v) for (k, v) in state["pbars"].items()} 188 self.is_train = state["is_train"] 189 ``` 
--- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/composer/loggers/tqdm_logger.py b/composer/loggers/tqdm_logger.py --- a/composer/loggers/tqdm_logger.py +++ b/composer/loggers/tqdm_logger.py @@ -6,8 +6,8 @@ from dataclasses import asdict, dataclass from typing import TYPE_CHECKING, Any, Dict, List, Optional -import tqdm import yaml +from tqdm import auto from composer.core.logging import LogLevel, TLogData, TLogDataValue, format_log_data_value from composer.core.logging.base_backend import BaseLoggerBackend @@ -35,7 +35,7 @@ def __init__(self, state: _TQDMLoggerInstanceState) -> None: self.state = state - self.pbar = tqdm.tqdm(total=state.total, + self.pbar = auto.tqdm(total=state.total, desc=state.description, position=state.position, bar_format="{l_bar}{bar:10}{r_bar}{bar:-10b}") @@ -65,12 +65,12 @@ Example output:: - Epoch 1: 100%|██████████| 64/64 [00:01<00:00, 53.17it/s, loss/train=2.3023] - Epoch 1 (val): 100%|██████████| 20/20 [00:00<00:00, 100.96it/s, accuracy/val=0.0995] + Epoch 1: 100%|██████████| 64/64 [00:01<00:00, 53.17it/s, loss/train=2.3023] + Epoch 1 (val): 100%|██████████| 20/20 [00:00<00:00, 100.96it/s, accuracy/val=0.0995] .. note:: - It is currently not possible to show additional metrics. + It is currently not possible to show additional metrics. Custom metrics for the TQDM progress bar will be supported in a future version. Args:
{"golden_diff": "diff --git a/composer/loggers/tqdm_logger.py b/composer/loggers/tqdm_logger.py\n--- a/composer/loggers/tqdm_logger.py\n+++ b/composer/loggers/tqdm_logger.py\n@@ -6,8 +6,8 @@\n from dataclasses import asdict, dataclass\n from typing import TYPE_CHECKING, Any, Dict, List, Optional\n \n-import tqdm\n import yaml\n+from tqdm import auto\n \n from composer.core.logging import LogLevel, TLogData, TLogDataValue, format_log_data_value\n from composer.core.logging.base_backend import BaseLoggerBackend\n@@ -35,7 +35,7 @@\n \n def __init__(self, state: _TQDMLoggerInstanceState) -> None:\n self.state = state\n- self.pbar = tqdm.tqdm(total=state.total,\n+ self.pbar = auto.tqdm(total=state.total,\n desc=state.description,\n position=state.position,\n bar_format=\"{l_bar}{bar:10}{r_bar}{bar:-10b}\")\n@@ -65,12 +65,12 @@\n \n Example output::\n \n- Epoch 1: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 64/64 [00:01<00:00, 53.17it/s, loss/train=2.3023] \n- Epoch 1 (val): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20/20 [00:00<00:00, 100.96it/s, accuracy/val=0.0995] \n+ Epoch 1: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 64/64 [00:01<00:00, 53.17it/s, loss/train=2.3023]\n+ Epoch 1 (val): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20/20 [00:00<00:00, 100.96it/s, accuracy/val=0.0995]\n \n .. note::\n \n- It is currently not possible to show additional metrics. \n+ It is currently not possible to show additional metrics.\n Custom metrics for the TQDM progress bar will be supported in a future version.\n \n Args:\n", "issue": "TQDM spams lines during validation when running trainer.fit() in notebook\n** Environment **\r\nColab notebook.\r\n\r\n<img width=\"607\" alt=\"TQDM Spam\" src=\"https://user-images.githubusercontent.com/11728034/151625792-31dc2c0e-d572-43d1-9514-5127a6a3751b.png\">\r\n\r\n** To reproduce\r\n\r\nrun `trainer.fit()` in a Colab notebook. See [this notebook](https://colab.research.google.com/drive/12Dl0NVDaj4tf4gfpfg-rkIAoO_H7edo3?usp=sharing) for an example.\r\n\r\n## Expected behavior\r\n\r\nTQDM output should remain on a single line during evaluation, which happens during training.\r\n\n", "before_files": [{"content": "# Copyright 2021 MosaicML. 
All Rights Reserved.\n\nfrom __future__ import annotations\n\nimport sys\nfrom dataclasses import asdict, dataclass\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional\n\nimport tqdm\nimport yaml\n\nfrom composer.core.logging import LogLevel, TLogData, TLogDataValue, format_log_data_value\nfrom composer.core.logging.base_backend import BaseLoggerBackend\nfrom composer.core.state import State\nfrom composer.core.types import StateDict\nfrom composer.utils import dist\n\nif TYPE_CHECKING:\n from composer.core.logging import Logger\n\n_IS_TRAIN_TO_KEYS_TO_LOG = {True: ['loss/train'], False: ['accuracy/val']}\n\n\n@dataclass\nclass _TQDMLoggerInstanceState:\n total: Optional[int]\n description: str\n position: int\n keys_to_log: List[str]\n n: int\n epoch_metrics: Dict[str, TLogDataValue]\n\n\nclass _TQDMLoggerInstance:\n\n def __init__(self, state: _TQDMLoggerInstanceState) -> None:\n self.state = state\n self.pbar = tqdm.tqdm(total=state.total,\n desc=state.description,\n position=state.position,\n bar_format=\"{l_bar}{bar:10}{r_bar}{bar:-10b}\")\n self.pbar.set_postfix(state.epoch_metrics)\n\n def log_metric(self, data: TLogData):\n formatted_data = {k: format_log_data_value(v) for (k, v) in data.items() if k in self.state.keys_to_log}\n self.state.epoch_metrics.update(formatted_data)\n self.pbar.set_postfix(self.state.epoch_metrics)\n\n def update(self):\n self.pbar.update()\n self.state.n = self.pbar.n\n\n def close(self):\n self.pbar.close()\n\n def state_dict(self) -> StateDict:\n return asdict(self.state)\n\n\nclass TQDMLoggerBackend(BaseLoggerBackend):\n \"\"\"Shows TQDM progress bars.\n\n During training, the progress bar logs the batch and training loss.\n During validation, the progress bar logs the batch and validation accuracy.\n\n Example output::\n\n Epoch 1: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 64/64 [00:01<00:00, 53.17it/s, loss/train=2.3023] \n Epoch 1 (val): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20/20 [00:00<00:00, 100.96it/s, accuracy/val=0.0995] \n\n .. note::\n\n It is currently not possible to show additional metrics. \n Custom metrics for the TQDM progress bar will be supported in a future version.\n\n Args:\n config (dict or None, optional):\n Trainer configuration. If provided, it is printed to the terminal as YAML.\n \"\"\"\n\n def __init__(self, config: Optional[Dict[str, Any]] = None) -> None:\n super().__init__()\n self.pbars: Dict[bool, _TQDMLoggerInstance] = {}\n self.is_train: Optional[bool] = None\n self.config = config\n\n def will_log(self, state: State, log_level: LogLevel) -> bool:\n del state # Unused\n return dist.get_global_rank() == 0 and log_level <= LogLevel.BATCH\n\n def log_metric(self, epoch: int, step: int, log_level: LogLevel, data: TLogData) -> None:\n del epoch, step, log_level # Unused\n if self.is_train in self.pbars:\n # Logging outside an epoch\n assert self.is_train is not None\n self.pbars[self.is_train].log_metric(data)\n\n def init(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if self.config is not None:\n print(\"Config\")\n print(\"-\" * 30)\n yaml.safe_dump(self.config, stream=sys.stdout)\n print(\"-\" * 30)\n print()\n\n def _start(self, state: State):\n if dist.get_global_rank() != 0:\n return\n assert self.is_train is not None, \"self.is_train should be set by the callback\"\n # TODO(anis) -- in #120, len(state.eval_dataloader) is inaccurate, as it does not incorporate\n # trainer._eval_subset_num_batches. 
The evaluator spec should fix this.\n total_steps = state.steps_per_epoch if self.is_train else len(state.eval_dataloader)\n desc = f'Epoch {int(state.timer.epoch)}'\n position = 0 if self.is_train else 1\n if not self.is_train:\n desc += f\", Batch {int(state.timer.batch)} (val)\"\n self.pbars[self.is_train] = _TQDMLoggerInstance(\n _TQDMLoggerInstanceState(total=total_steps,\n position=position,\n n=0,\n keys_to_log=_IS_TRAIN_TO_KEYS_TO_LOG[self.is_train],\n description=desc,\n epoch_metrics={}))\n\n def epoch_start(self, state: State, logger: Logger) -> None:\n del logger # unused\n if dist.get_global_rank() != 0:\n return\n self.is_train = True\n self._start(state)\n\n def eval_start(self, state: State, logger: Logger) -> None:\n del logger # unused\n if dist.get_global_rank() != 0:\n return\n self.is_train = False\n self._start(state)\n\n def _update(self):\n if dist.get_global_rank() != 0:\n return\n if self.is_train in self.pbars:\n assert self.is_train is not None\n self.pbars[self.is_train].update()\n\n def batch_end(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if dist.get_global_rank() != 0:\n return\n self._update()\n\n def eval_after_forward(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if dist.get_global_rank() != 0:\n return\n self._update()\n\n def _end(self):\n if dist.get_global_rank() != 0:\n return\n if self.is_train in self.pbars:\n assert self.is_train is not None\n self.pbars[self.is_train].close()\n del self.pbars[self.is_train]\n self.is_train = None\n\n def epoch_end(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if dist.get_global_rank() != 0:\n return\n self._end()\n\n def eval_end(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if dist.get_global_rank() != 0:\n return\n self._end()\n\n def state_dict(self) -> StateDict:\n return {\n \"pbars\": {k: v.state_dict() for (k, v) in self.pbars.items()},\n \"is_train\": self.is_train,\n }\n\n def load_state_dict(self, state: StateDict) -> None:\n self.pbars = {k: _TQDMLoggerInstance(**v) for (k, v) in state[\"pbars\"].items()}\n self.is_train = state[\"is_train\"]\n", "path": "composer/loggers/tqdm_logger.py"}], "after_files": [{"content": "# Copyright 2021 MosaicML. 
All Rights Reserved.\n\nfrom __future__ import annotations\n\nimport sys\nfrom dataclasses import asdict, dataclass\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional\n\nimport yaml\nfrom tqdm import auto\n\nfrom composer.core.logging import LogLevel, TLogData, TLogDataValue, format_log_data_value\nfrom composer.core.logging.base_backend import BaseLoggerBackend\nfrom composer.core.state import State\nfrom composer.core.types import StateDict\nfrom composer.utils import dist\n\nif TYPE_CHECKING:\n from composer.core.logging import Logger\n\n_IS_TRAIN_TO_KEYS_TO_LOG = {True: ['loss/train'], False: ['accuracy/val']}\n\n\n@dataclass\nclass _TQDMLoggerInstanceState:\n total: Optional[int]\n description: str\n position: int\n keys_to_log: List[str]\n n: int\n epoch_metrics: Dict[str, TLogDataValue]\n\n\nclass _TQDMLoggerInstance:\n\n def __init__(self, state: _TQDMLoggerInstanceState) -> None:\n self.state = state\n self.pbar = auto.tqdm(total=state.total,\n desc=state.description,\n position=state.position,\n bar_format=\"{l_bar}{bar:10}{r_bar}{bar:-10b}\")\n self.pbar.set_postfix(state.epoch_metrics)\n\n def log_metric(self, data: TLogData):\n formatted_data = {k: format_log_data_value(v) for (k, v) in data.items() if k in self.state.keys_to_log}\n self.state.epoch_metrics.update(formatted_data)\n self.pbar.set_postfix(self.state.epoch_metrics)\n\n def update(self):\n self.pbar.update()\n self.state.n = self.pbar.n\n\n def close(self):\n self.pbar.close()\n\n def state_dict(self) -> StateDict:\n return asdict(self.state)\n\n\nclass TQDMLoggerBackend(BaseLoggerBackend):\n \"\"\"Shows TQDM progress bars.\n\n During training, the progress bar logs the batch and training loss.\n During validation, the progress bar logs the batch and validation accuracy.\n\n Example output::\n\n Epoch 1: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 64/64 [00:01<00:00, 53.17it/s, loss/train=2.3023]\n Epoch 1 (val): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20/20 [00:00<00:00, 100.96it/s, accuracy/val=0.0995]\n\n .. note::\n\n It is currently not possible to show additional metrics.\n Custom metrics for the TQDM progress bar will be supported in a future version.\n\n Args:\n config (dict or None, optional):\n Trainer configuration. If provided, it is printed to the terminal as YAML.\n \"\"\"\n\n def __init__(self, config: Optional[Dict[str, Any]] = None) -> None:\n super().__init__()\n self.pbars: Dict[bool, _TQDMLoggerInstance] = {}\n self.is_train: Optional[bool] = None\n self.config = config\n\n def will_log(self, state: State, log_level: LogLevel) -> bool:\n del state # Unused\n return dist.get_global_rank() == 0 and log_level <= LogLevel.BATCH\n\n def log_metric(self, epoch: int, step: int, log_level: LogLevel, data: TLogData) -> None:\n del epoch, step, log_level # Unused\n if self.is_train in self.pbars:\n # Logging outside an epoch\n assert self.is_train is not None\n self.pbars[self.is_train].log_metric(data)\n\n def init(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if self.config is not None:\n print(\"Config\")\n print(\"-\" * 30)\n yaml.safe_dump(self.config, stream=sys.stdout)\n print(\"-\" * 30)\n print()\n\n def _start(self, state: State):\n if dist.get_global_rank() != 0:\n return\n assert self.is_train is not None, \"self.is_train should be set by the callback\"\n # TODO(anis) -- in #120, len(state.eval_dataloader) is inaccurate, as it does not incorporate\n # trainer._eval_subset_num_batches. 
The evaluator spec should fix this.\n total_steps = state.steps_per_epoch if self.is_train else len(state.eval_dataloader)\n desc = f'Epoch {int(state.timer.epoch)}'\n position = 0 if self.is_train else 1\n if not self.is_train:\n desc += f\", Batch {int(state.timer.batch)} (val)\"\n self.pbars[self.is_train] = _TQDMLoggerInstance(\n _TQDMLoggerInstanceState(total=total_steps,\n position=position,\n n=0,\n keys_to_log=_IS_TRAIN_TO_KEYS_TO_LOG[self.is_train],\n description=desc,\n epoch_metrics={}))\n\n def epoch_start(self, state: State, logger: Logger) -> None:\n del logger # unused\n if dist.get_global_rank() != 0:\n return\n self.is_train = True\n self._start(state)\n\n def eval_start(self, state: State, logger: Logger) -> None:\n del logger # unused\n if dist.get_global_rank() != 0:\n return\n self.is_train = False\n self._start(state)\n\n def _update(self):\n if dist.get_global_rank() != 0:\n return\n if self.is_train in self.pbars:\n assert self.is_train is not None\n self.pbars[self.is_train].update()\n\n def batch_end(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if dist.get_global_rank() != 0:\n return\n self._update()\n\n def eval_after_forward(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if dist.get_global_rank() != 0:\n return\n self._update()\n\n def _end(self):\n if dist.get_global_rank() != 0:\n return\n if self.is_train in self.pbars:\n assert self.is_train is not None\n self.pbars[self.is_train].close()\n del self.pbars[self.is_train]\n self.is_train = None\n\n def epoch_end(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if dist.get_global_rank() != 0:\n return\n self._end()\n\n def eval_end(self, state: State, logger: Logger) -> None:\n del state, logger # unused\n if dist.get_global_rank() != 0:\n return\n self._end()\n\n def state_dict(self) -> StateDict:\n return {\n \"pbars\": {k: v.state_dict() for (k, v) in self.pbars.items()},\n \"is_train\": self.is_train,\n }\n\n def load_state_dict(self, state: StateDict) -> None:\n self.pbars = {k: _TQDMLoggerInstance(**v) for (k, v) in state[\"pbars\"].items()}\n self.is_train = state[\"is_train\"]\n", "path": "composer/loggers/tqdm_logger.py"}]}
2,552
528
gh_patches_debug_35925
rasdani/github-patches
git_diff
easybuilders__easybuild-framework-3424
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- RPath compiler wrapper breaking precompiled header builds This issue was found when building OpenCV-4.0.1-foss-2018b-Python-3.6.6.eb with EB 4.1.1. The build has multiple errors `undefined reference to main`. This is caused by the rpath wrappers adding `-Wl,-rpath=...` options to the g++ call, which turns otherwise compile-only commands to generate precompiled headers (`-x c++-header`) into commands that link. See [easybuild-OpenCV-4.0.1-20200623.143513.mcEaZ.log.gz](https://github.com/easybuilders/easybuild-framework/files/4819395/easybuild-OpenCV-4.0.1-20200623.143513.mcEaZ.log.gz) for the full logs. But it can also be shown using only the generated rpath wrapper to compile a simple file: ``` paulm@tcn180 14:43 ~$ m list Currently Loaded Modulefiles: 1) surfsara 2) 2019 3) EasyBuild/4.1.1 4) compilerwrappers 5) eb/4.1.1(default) 6) GCCcore/8.2.0 7) zlib/1.2.11-GCCcore-8.2.0 8) binutils/2.31.1-GCCcore-8.2.0 9) GCC/8.2.0-2.31.1 paulm@tcn180:~$ cat t.cc #include <cstdio> # Straight g++ call, no wrappers, succeeds paulm@tcn180 14:45 ~$ /sw/arch/RedHatEnterpriseServer7/EB_production/2019/software/GCCcore/8.2.0/bin/g++ -x c++-header t.cc paulm@tcn180 14:45 ~$ # Wrapped call, fails paulm@tcn180 14:45 ~$ /scratch-shared/paulm/eb-3OiLzK/tmpvR7vdm/rpath_wrappers/gxx_wrapper/g++ -x c++-header t.cc /lib/../lib64/crt1.o(.text+0x20): error: undefined reference to 'main' collect2: error: ld returned 1 exit status # Straight g++ call with -Wl,-rpath addition, no wrappers, fails paulm@tcn180 14:46 ~$ /sw/arch/RedHatEnterpriseServer7/EB_production/2019/software/GCCcore/8.2.0/bin/g++ -x c++-header -Wl,-rpath=/tmp/doh t.cc /lib/../lib64/crt1.o(.text+0x20): error: undefined reference to 'main' collect2: error: ld returned 1 exit status ``` We (me and @casparvl) suggest that the rpath wrappers check for the `-x c++-header` (and similar) options to make sure an otherwise correct call of g++ is not turned into one that fails. Note that there is a bit of a mist as to the expected behaviour of GCC with various options, as the docs aren't entirely clear: * For example, calling `g++ t.cc` fails if `t.cc` does not contain a main function as it does linking of the generated object code. But calling `g++ -x c++-header t.cc` apparently turns on compile-only mode and succeeds (but the [gcc docs](https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html) only say: "To create a precompiled header file, simply compile it as you would any other file, if necessary using the -x option to make the driver treat it as a C or C++ header file") * Adding `-Wl,-rpath=...` makes GCC link (although the docs do not specify this), unless `-c` is used, in which case linking is never done. * The [examples in the GCC docs of using precompiled headers](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_headers.html) do not use `-c` when using `-x c++-header`. But upstream package could be more explicit by using `-c -x c++-header`, which would still work when the RPATH wrappers would add `-Wl,-rpath=...`, but it can't really be expected of those packages to handle that case of downstream building. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `easybuild/scripts/rpath_args.py` Content: ``` 1 #!/usr/bin/env python 2 ## 3 # Copyright 2016-2020 Ghent University 4 # 5 # This file is part of EasyBuild, 6 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en), 7 # with support of Ghent University (http://ugent.be/hpc), 8 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be), 9 # Flemish Research Foundation (FWO) (http://www.fwo.be/en) 10 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en). 11 # 12 # https://github.com/easybuilders/easybuild 13 # 14 # EasyBuild is free software: you can redistribute it and/or modify 15 # it under the terms of the GNU General Public License as published by 16 # the Free Software Foundation v2. 17 # 18 # EasyBuild is distributed in the hope that it will be useful, 19 # but WITHOUT ANY WARRANTY; without even the implied warranty of 20 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 21 # GNU General Public License for more details. 22 # 23 # You should have received a copy of the GNU General Public License 24 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>. 25 ## 26 """ 27 Utility script used by RPATH wrapper script; 28 output is statements that define the following environment variables 29 * $CMD_ARGS: new list of command line arguments to pass 30 * $RPATH_ARGS: command line option to specify list of paths to RPATH 31 32 author: Kenneth Hoste (HPC-UGent) 33 """ 34 import os 35 import re 36 import sys 37 38 39 cmd = sys.argv[1] 40 rpath_filter = sys.argv[2] 41 rpath_include = sys.argv[3] 42 args = sys.argv[4:] 43 44 # wheter or not to use -Wl to pass options to the linker 45 if cmd in ['ld', 'ld.gold', 'ld.bfd']: 46 flag_prefix = '' 47 else: 48 flag_prefix = '-Wl,' 49 50 rpath_filter = rpath_filter.split(',') 51 if rpath_filter: 52 rpath_filter = re.compile('^%s$' % '|'.join(rpath_filter)) 53 else: 54 rpath_filter = None 55 56 if rpath_include: 57 rpath_include = rpath_include.split(',') 58 else: 59 rpath_include = [] 60 61 version_mode = False 62 cmd_args, cmd_args_rpath = [], [] 63 64 # process list of original command line arguments 65 idx = 0 66 while idx < len(args): 67 68 arg = args[idx] 69 70 # if command is run in 'version check' mode, make sure we don't include *any* -rpath arguments 71 if arg in ['-v', '-V', '--version', '-dumpversion']: 72 version_mode = True 73 cmd_args.append(arg) 74 75 # FIXME: also consider $LIBRARY_PATH? 76 # FIXME: support to hard inject additional library paths? 77 # FIXME: support to specify list of path prefixes that should not be RPATH'ed into account? 78 # FIXME skip paths in /tmp, build dir, etc.? 79 80 # handle -L flags, inject corresponding -rpath flag 81 elif arg.startswith('-L'): 82 # take into account that argument to -L may be separated with one or more spaces... 83 if arg == '-L': 84 # actual library path is next argument when arg='-L' 85 idx += 1 86 lib_path = args[idx] 87 else: 88 lib_path = arg[2:] 89 90 if os.path.isabs(lib_path) and (rpath_filter is None or not rpath_filter.match(lib_path)): 91 # inject -rpath flag in front for every -L with an absolute path, 92 # also retain the -L flag (without reordering!) 
93 cmd_args_rpath.append(flag_prefix + '-rpath=%s' % lib_path) 94 cmd_args.append('-L%s' % lib_path) 95 else: 96 # don't RPATH in relative paths; 97 # it doesn't make much sense, and it can also break the build because it may result in reordering lib paths 98 cmd_args.append('-L%s' % lib_path) 99 100 # replace --enable-new-dtags with --disable-new-dtags if it's used; 101 # --enable-new-dtags would result in copying rpath to runpath, 102 # meaning that $LD_LIBRARY_PATH is taken into account again; 103 # --enable-new-dtags is not removed but replaced to prevent issues when linker flag is forwarded from the compiler 104 # to the linker with an extra prefixed flag (either -Xlinker or -Wl,). 105 # In that case, the compiler would erroneously pass the next random argument to the linker. 106 elif arg == flag_prefix + '--enable-new-dtags': 107 cmd_args.append(flag_prefix + '--disable-new-dtags') 108 else: 109 cmd_args.append(arg) 110 111 idx += 1 112 113 # add -rpath flags in front 114 cmd_args = cmd_args_rpath + cmd_args 115 116 cmd_args_rpath = [flag_prefix + '-rpath=%s' % inc for inc in rpath_include] 117 118 if not version_mode: 119 cmd_args = cmd_args_rpath + [ 120 # try to make sure that RUNPATH is not used by always injecting --disable-new-dtags 121 flag_prefix + '--disable-new-dtags', 122 ] + cmd_args 123 124 # wrap all arguments into single quotes to avoid further bash expansion 125 cmd_args = ["'%s'" % a.replace("'", "''") for a in cmd_args] 126 127 # output: statement to define $CMD_ARGS and $RPATH_ARGS 128 print("CMD_ARGS=(%s)" % ' '.join(cmd_args)) 129 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/easybuild/scripts/rpath_args.py b/easybuild/scripts/rpath_args.py --- a/easybuild/scripts/rpath_args.py +++ b/easybuild/scripts/rpath_args.py @@ -41,7 +41,7 @@ rpath_include = sys.argv[3] args = sys.argv[4:] -# wheter or not to use -Wl to pass options to the linker +# determine whether or not to use -Wl to pass options to the linker based on name of command if cmd in ['ld', 'ld.gold', 'ld.bfd']: flag_prefix = '' else: @@ -58,7 +58,7 @@ else: rpath_include = [] -version_mode = False +add_rpath_args = True cmd_args, cmd_args_rpath = [], [] # process list of original command line arguments @@ -69,7 +69,16 @@ # if command is run in 'version check' mode, make sure we don't include *any* -rpath arguments if arg in ['-v', '-V', '--version', '-dumpversion']: - version_mode = True + add_rpath_args = False + cmd_args.append(arg) + + # compiler options like "-x c++header" imply no linking is done (similar to -c), + # so then we must not inject -Wl,-rpath option since they *enable* linking; + # see https://github.com/easybuilders/easybuild-framework/issues/3371 + elif arg == '-x': + idx_next = idx + 1 + if idx_next < len(args) and args[idx_next] in ['c-header', 'c++-header']: + add_rpath_args = False cmd_args.append(arg) # FIXME: also consider $LIBRARY_PATH? @@ -110,16 +119,15 @@ idx += 1 -# add -rpath flags in front -cmd_args = cmd_args_rpath + cmd_args +if add_rpath_args: + # try to make sure that RUNPATH is not used by always injecting --disable-new-dtags + cmd_args_rpath.insert(0, flag_prefix + '--disable-new-dtags') -cmd_args_rpath = [flag_prefix + '-rpath=%s' % inc for inc in rpath_include] + # add -rpath options for paths listed in rpath_include + cmd_args_rpath = [flag_prefix + '-rpath=%s' % inc for inc in rpath_include] + cmd_args_rpath -if not version_mode: - cmd_args = cmd_args_rpath + [ - # try to make sure that RUNPATH is not used by always injecting --disable-new-dtags - flag_prefix + '--disable-new-dtags', - ] + cmd_args + # add -rpath flags in front + cmd_args = cmd_args_rpath + cmd_args # wrap all arguments into single quotes to avoid further bash expansion cmd_args = ["'%s'" % a.replace("'", "''") for a in cmd_args]
{"golden_diff": "diff --git a/easybuild/scripts/rpath_args.py b/easybuild/scripts/rpath_args.py\n--- a/easybuild/scripts/rpath_args.py\n+++ b/easybuild/scripts/rpath_args.py\n@@ -41,7 +41,7 @@\n rpath_include = sys.argv[3]\n args = sys.argv[4:]\n \n-# wheter or not to use -Wl to pass options to the linker\n+# determine whether or not to use -Wl to pass options to the linker based on name of command\n if cmd in ['ld', 'ld.gold', 'ld.bfd']:\n flag_prefix = ''\n else:\n@@ -58,7 +58,7 @@\n else:\n rpath_include = []\n \n-version_mode = False\n+add_rpath_args = True\n cmd_args, cmd_args_rpath = [], []\n \n # process list of original command line arguments\n@@ -69,7 +69,16 @@\n \n # if command is run in 'version check' mode, make sure we don't include *any* -rpath arguments\n if arg in ['-v', '-V', '--version', '-dumpversion']:\n- version_mode = True\n+ add_rpath_args = False\n+ cmd_args.append(arg)\n+\n+ # compiler options like \"-x c++header\" imply no linking is done (similar to -c),\n+ # so then we must not inject -Wl,-rpath option since they *enable* linking;\n+ # see https://github.com/easybuilders/easybuild-framework/issues/3371\n+ elif arg == '-x':\n+ idx_next = idx + 1\n+ if idx_next < len(args) and args[idx_next] in ['c-header', 'c++-header']:\n+ add_rpath_args = False\n cmd_args.append(arg)\n \n # FIXME: also consider $LIBRARY_PATH?\n@@ -110,16 +119,15 @@\n \n idx += 1\n \n-# add -rpath flags in front\n-cmd_args = cmd_args_rpath + cmd_args\n+if add_rpath_args:\n+ # try to make sure that RUNPATH is not used by always injecting --disable-new-dtags\n+ cmd_args_rpath.insert(0, flag_prefix + '--disable-new-dtags')\n \n-cmd_args_rpath = [flag_prefix + '-rpath=%s' % inc for inc in rpath_include]\n+ # add -rpath options for paths listed in rpath_include\n+ cmd_args_rpath = [flag_prefix + '-rpath=%s' % inc for inc in rpath_include] + cmd_args_rpath\n \n-if not version_mode:\n- cmd_args = cmd_args_rpath + [\n- # try to make sure that RUNPATH is not used by always injecting --disable-new-dtags\n- flag_prefix + '--disable-new-dtags',\n- ] + cmd_args\n+ # add -rpath flags in front\n+ cmd_args = cmd_args_rpath + cmd_args\n \n # wrap all arguments into single quotes to avoid further bash expansion\n cmd_args = [\"'%s'\" % a.replace(\"'\", \"''\") for a in cmd_args]\n", "issue": "RPath compiler wrapper breaking precompiled header builds\nThis issue was found when building OpenCV-4.0.1-foss-2018b-Python-3.6.6.eb with EB 4.1.1. The build has multiple errors `undefined reference to main`. This is caused by the rpath wrappers adding `-Wl,-rpath=...` options to the g++ call, which turns otherwise compile-only commands to generate precompiled headers (`-x c++-header`) into commands that link. \r\n\r\nSee [easybuild-OpenCV-4.0.1-20200623.143513.mcEaZ.log.gz](https://github.com/easybuilders/easybuild-framework/files/4819395/easybuild-OpenCV-4.0.1-20200623.143513.mcEaZ.log.gz) for the full logs. 
But it can also be shown using only the generated rpath wrapper to compile a simple file:\r\n\r\n```\r\npaulm@tcn180 14:43 ~$ m list\r\nCurrently Loaded Modulefiles:\r\n 1) surfsara 2) 2019 3) EasyBuild/4.1.1 4) compilerwrappers 5) eb/4.1.1(default) 6) GCCcore/8.2.0 7) zlib/1.2.11-GCCcore-8.2.0 8) binutils/2.31.1-GCCcore-8.2.0 9) GCC/8.2.0-2.31.1 \r\n\r\npaulm@tcn180:~$ cat t.cc\r\n#include <cstdio>\r\n\r\n# Straight g++ call, no wrappers, succeeds\r\npaulm@tcn180 14:45 ~$ /sw/arch/RedHatEnterpriseServer7/EB_production/2019/software/GCCcore/8.2.0/bin/g++ -x c++-header t.cc\r\npaulm@tcn180 14:45 ~$ \r\n\r\n# Wrapped call, fails\r\npaulm@tcn180 14:45 ~$ /scratch-shared/paulm/eb-3OiLzK/tmpvR7vdm/rpath_wrappers/gxx_wrapper/g++ -x c++-header t.cc \r\n/lib/../lib64/crt1.o(.text+0x20): error: undefined reference to 'main'\r\ncollect2: error: ld returned 1 exit status\r\n\r\n# Straight g++ call with -Wl,-rpath addition, no wrappers, fails\r\npaulm@tcn180 14:46 ~$ /sw/arch/RedHatEnterpriseServer7/EB_production/2019/software/GCCcore/8.2.0/bin/g++ -x c++-header -Wl,-rpath=/tmp/doh t.cc\r\n/lib/../lib64/crt1.o(.text+0x20): error: undefined reference to 'main'\r\ncollect2: error: ld returned 1 exit status\r\n```\r\n\r\nWe (me and @casparvl) suggest that the rpath wrappers check for the `-x c++-header` (and similar) options to make sure an otherwise correct call of g++ is not turned into one that fails. Note that there is a bit of a mist as to the expected behaviour of GCC with various options, as the docs aren't entirely clear:\r\n\r\n* For example, calling `g++ t.cc` fails if `t.cc` does not contain a main function as it does linking of the generated object code. But calling `g++ -x c++-header t.cc` apparently turns on compile-only mode and succeeds (but the [gcc docs](https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html) only say: \"To create a precompiled header file, simply compile it as you would any other file, if necessary using the -x option to make the driver treat it as a C or C++ header file\")\r\n* Adding `-Wl,-rpath=...` makes GCC link (although the docs do not specify this), unless `-c` is used, in which case linking is never done.\r\n* The [examples in the GCC docs of using precompiled headers](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_headers.html) do not use `-c` when using `-x c++-header`. But upstream package could be more explicit by using `-c -x c++-header`, which would still work when the RPATH wrappers would add `-Wl,-rpath=...`, but it can't really be expected of those packages to handle that case of downstream building.\n", "before_files": [{"content": "#!/usr/bin/env python\n##\n# Copyright 2016-2020 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nUtility script used by RPATH wrapper script;\noutput is statements that define the following environment variables\n* $CMD_ARGS: new list of command line arguments to pass\n* $RPATH_ARGS: command line option to specify list of paths to RPATH\n\nauthor: Kenneth Hoste (HPC-UGent)\n\"\"\"\nimport os\nimport re\nimport sys\n\n\ncmd = sys.argv[1]\nrpath_filter = sys.argv[2]\nrpath_include = sys.argv[3]\nargs = sys.argv[4:]\n\n# wheter or not to use -Wl to pass options to the linker\nif cmd in ['ld', 'ld.gold', 'ld.bfd']:\n flag_prefix = ''\nelse:\n flag_prefix = '-Wl,'\n\nrpath_filter = rpath_filter.split(',')\nif rpath_filter:\n rpath_filter = re.compile('^%s$' % '|'.join(rpath_filter))\nelse:\n rpath_filter = None\n\nif rpath_include:\n rpath_include = rpath_include.split(',')\nelse:\n rpath_include = []\n\nversion_mode = False\ncmd_args, cmd_args_rpath = [], []\n\n# process list of original command line arguments\nidx = 0\nwhile idx < len(args):\n\n arg = args[idx]\n\n # if command is run in 'version check' mode, make sure we don't include *any* -rpath arguments\n if arg in ['-v', '-V', '--version', '-dumpversion']:\n version_mode = True\n cmd_args.append(arg)\n\n # FIXME: also consider $LIBRARY_PATH?\n # FIXME: support to hard inject additional library paths?\n # FIXME: support to specify list of path prefixes that should not be RPATH'ed into account?\n # FIXME skip paths in /tmp, build dir, etc.?\n\n # handle -L flags, inject corresponding -rpath flag\n elif arg.startswith('-L'):\n # take into account that argument to -L may be separated with one or more spaces...\n if arg == '-L':\n # actual library path is next argument when arg='-L'\n idx += 1\n lib_path = args[idx]\n else:\n lib_path = arg[2:]\n\n if os.path.isabs(lib_path) and (rpath_filter is None or not rpath_filter.match(lib_path)):\n # inject -rpath flag in front for every -L with an absolute path,\n # also retain the -L flag (without reordering!)\n cmd_args_rpath.append(flag_prefix + '-rpath=%s' % lib_path)\n cmd_args.append('-L%s' % lib_path)\n else:\n # don't RPATH in relative paths;\n # it doesn't make much sense, and it can also break the build because it may result in reordering lib paths\n cmd_args.append('-L%s' % lib_path)\n\n # replace --enable-new-dtags with --disable-new-dtags if it's used;\n # --enable-new-dtags would result in copying rpath to runpath,\n # meaning that $LD_LIBRARY_PATH is taken into account again;\n # --enable-new-dtags is not removed but replaced to prevent issues when linker flag is forwarded from the compiler\n # to the linker with an extra prefixed flag (either -Xlinker or -Wl,).\n # In that case, the compiler would erroneously pass the next random argument to the linker.\n elif arg == flag_prefix + '--enable-new-dtags':\n cmd_args.append(flag_prefix + '--disable-new-dtags')\n else:\n cmd_args.append(arg)\n\n idx += 1\n\n# add -rpath flags in front\ncmd_args = cmd_args_rpath + cmd_args\n\ncmd_args_rpath = [flag_prefix + '-rpath=%s' % inc for inc in rpath_include]\n\nif not version_mode:\n cmd_args = cmd_args_rpath + [\n # try to make sure that RUNPATH is not used by always injecting --disable-new-dtags\n flag_prefix + '--disable-new-dtags',\n ] + cmd_args\n\n# wrap all arguments into single quotes to avoid further bash expansion\ncmd_args = [\"'%s'\" % a.replace(\"'\", \"''\") for a in 
cmd_args]\n\n# output: statement to define $CMD_ARGS and $RPATH_ARGS\nprint(\"CMD_ARGS=(%s)\" % ' '.join(cmd_args))\n", "path": "easybuild/scripts/rpath_args.py"}], "after_files": [{"content": "#!/usr/bin/env python\n##\n# Copyright 2016-2020 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nUtility script used by RPATH wrapper script;\noutput is statements that define the following environment variables\n* $CMD_ARGS: new list of command line arguments to pass\n* $RPATH_ARGS: command line option to specify list of paths to RPATH\n\nauthor: Kenneth Hoste (HPC-UGent)\n\"\"\"\nimport os\nimport re\nimport sys\n\n\ncmd = sys.argv[1]\nrpath_filter = sys.argv[2]\nrpath_include = sys.argv[3]\nargs = sys.argv[4:]\n\n# determine whether or not to use -Wl to pass options to the linker based on name of command\nif cmd in ['ld', 'ld.gold', 'ld.bfd']:\n flag_prefix = ''\nelse:\n flag_prefix = '-Wl,'\n\nrpath_filter = rpath_filter.split(',')\nif rpath_filter:\n rpath_filter = re.compile('^%s$' % '|'.join(rpath_filter))\nelse:\n rpath_filter = None\n\nif rpath_include:\n rpath_include = rpath_include.split(',')\nelse:\n rpath_include = []\n\nadd_rpath_args = True\ncmd_args, cmd_args_rpath = [], []\n\n# process list of original command line arguments\nidx = 0\nwhile idx < len(args):\n\n arg = args[idx]\n\n # if command is run in 'version check' mode, make sure we don't include *any* -rpath arguments\n if arg in ['-v', '-V', '--version', '-dumpversion']:\n add_rpath_args = False\n cmd_args.append(arg)\n\n # compiler options like \"-x c++header\" imply no linking is done (similar to -c),\n # so then we must not inject -Wl,-rpath option since they *enable* linking;\n # see https://github.com/easybuilders/easybuild-framework/issues/3371\n elif arg == '-x':\n idx_next = idx + 1\n if idx_next < len(args) and args[idx_next] in ['c-header', 'c++-header']:\n add_rpath_args = False\n cmd_args.append(arg)\n\n # FIXME: also consider $LIBRARY_PATH?\n # FIXME: support to hard inject additional library paths?\n # FIXME: support to specify list of path prefixes that should not be RPATH'ed into account?\n # FIXME skip paths in /tmp, build dir, etc.?\n\n # handle -L flags, inject corresponding -rpath flag\n elif arg.startswith('-L'):\n # take into account that argument to -L may be separated with one or more spaces...\n if arg == '-L':\n # actual library path is next argument when arg='-L'\n idx += 1\n lib_path = args[idx]\n else:\n lib_path = arg[2:]\n\n if os.path.isabs(lib_path) and (rpath_filter is None or not 
rpath_filter.match(lib_path)):\n # inject -rpath flag in front for every -L with an absolute path,\n # also retain the -L flag (without reordering!)\n cmd_args_rpath.append(flag_prefix + '-rpath=%s' % lib_path)\n cmd_args.append('-L%s' % lib_path)\n else:\n # don't RPATH in relative paths;\n # it doesn't make much sense, and it can also break the build because it may result in reordering lib paths\n cmd_args.append('-L%s' % lib_path)\n\n # replace --enable-new-dtags with --disable-new-dtags if it's used;\n # --enable-new-dtags would result in copying rpath to runpath,\n # meaning that $LD_LIBRARY_PATH is taken into account again;\n # --enable-new-dtags is not removed but replaced to prevent issues when linker flag is forwarded from the compiler\n # to the linker with an extra prefixed flag (either -Xlinker or -Wl,).\n # In that case, the compiler would erroneously pass the next random argument to the linker.\n elif arg == flag_prefix + '--enable-new-dtags':\n cmd_args.append(flag_prefix + '--disable-new-dtags')\n else:\n cmd_args.append(arg)\n\n idx += 1\n\nif add_rpath_args:\n # try to make sure that RUNPATH is not used by always injecting --disable-new-dtags\n cmd_args_rpath.insert(0, flag_prefix + '--disable-new-dtags')\n\n # add -rpath options for paths listed in rpath_include\n cmd_args_rpath = [flag_prefix + '-rpath=%s' % inc for inc in rpath_include] + cmd_args_rpath\n\n # add -rpath flags in front\n cmd_args = cmd_args_rpath + cmd_args\n\n# wrap all arguments into single quotes to avoid further bash expansion\ncmd_args = [\"'%s'\" % a.replace(\"'\", \"''\") for a in cmd_args]\n\n# output: statement to define $CMD_ARGS and $RPATH_ARGS\nprint(\"CMD_ARGS=(%s)\" % ' '.join(cmd_args))\n", "path": "easybuild/scripts/rpath_args.py"}]}
2,791
689
gh_patches_debug_3221
rasdani/github-patches
git_diff
spack__spack-1961
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- `spack uninstall -f` is broken @alalazo It looks like PR #670 might have broken "spack uninstall -f". Can you please look into it? ``` [me@ankeli icebin]$ spack uninstall -f openblas ==> The following packages will be uninstalled : ==> Do you want to proceed ? [y/n] y [me@ankeli icebin]$ ls /home/rpfische/spack/opt/spack/linux-x86_64/gcc-4.9.3/openblas-0.2.17-lvi5fks7zompzclvc2lyzwilfefal4ld/lib blas.a cmake libblas.a liblapack.a libopenblas.a libopenblas_nehalemp-r0.2.17.a ``` `spack uninstall -f` is broken @alalazo It looks like PR #670 might have broken "spack uninstall -f". Can you please look into it? ``` [me@ankeli icebin]$ spack uninstall -f openblas ==> The following packages will be uninstalled : ==> Do you want to proceed ? [y/n] y [me@ankeli icebin]$ ls /home/rpfische/spack/opt/spack/linux-x86_64/gcc-4.9.3/openblas-0.2.17-lvi5fks7zompzclvc2lyzwilfefal4ld/lib blas.a cmake libblas.a liblapack.a libopenblas.a libopenblas_nehalemp-r0.2.17.a ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `lib/spack/spack/cmd/uninstall.py` Content: ``` 1 ############################################################################## 2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC. 3 # Produced at the Lawrence Livermore National Laboratory. 4 # 5 # This file is part of Spack. 6 # Created by Todd Gamblin, [email protected], All rights reserved. 7 # LLNL-CODE-647188 8 # 9 # For details, see https://github.com/llnl/spack 10 # Please also see the LICENSE file for our notice and the LGPL. 11 # 12 # This program is free software; you can redistribute it and/or modify 13 # it under the terms of the GNU Lesser General Public License (as 14 # published by the Free Software Foundation) version 2.1, February 1999. 15 # 16 # This program is distributed in the hope that it will be useful, but 17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF 18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and 19 # conditions of the GNU Lesser General Public License for more details. 20 # 21 # You should have received a copy of the GNU Lesser General Public 22 # License along with this program; if not, write to the Free Software 23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 24 ############################################################################## 25 from __future__ import print_function 26 27 import argparse 28 29 import llnl.util.tty as tty 30 import spack 31 import spack.cmd 32 import spack.repository 33 34 description = "Remove an installed package" 35 36 error_message = """You can either: 37 a) Use a more specific spec, or 38 b) use spack uninstall -a to uninstall ALL matching specs. 39 """ 40 41 # Arguments for display_specs when we find ambiguity 42 display_args = { 43 'long': True, 44 'show_flags': True, 45 'variants': True 46 } 47 48 49 def setup_parser(subparser): 50 subparser.add_argument( 51 '-f', '--force', action='store_true', dest='force', 52 help="Remove regardless of whether other packages depend on this one.") 53 54 subparser.add_argument( 55 '-a', '--all', action='store_true', dest='all', 56 help="USE CAREFULLY. Remove ALL installed packages that match each " 57 "supplied spec. i.e., if you say uninstall libelf, ALL versions " 58 "of libelf are uninstalled. 
This is both useful and dangerous, " 59 "like rm -r.") 60 61 subparser.add_argument( 62 '-d', '--dependents', action='store_true', dest='dependents', 63 help='Also uninstall any packages that depend on the ones given ' 64 'via command line.') 65 66 subparser.add_argument( 67 '-y', '--yes-to-all', action='store_true', dest='yes_to_all', 68 help='Assume "yes" is the answer to every confirmation requested') 69 70 subparser.add_argument( 71 'packages', 72 nargs=argparse.REMAINDER, 73 help="specs of packages to uninstall") 74 75 76 def concretize_specs(specs, allow_multiple_matches=False, force=False): 77 """Returns a list of specs matching the non necessarily 78 concretized specs given from cli 79 80 Args: 81 specs: list of specs to be matched against installed packages 82 allow_multiple_matches : if True multiple matches are admitted 83 84 Return: 85 list of specs 86 """ 87 # List of specs that match expressions given via command line 88 specs_from_cli = [] 89 has_errors = False 90 for spec in specs: 91 matching = spack.installed_db.query(spec) 92 # For each spec provided, make sure it refers to only one package. 93 # Fail and ask user to be unambiguous if it doesn't 94 if not allow_multiple_matches and len(matching) > 1: 95 tty.error("%s matches multiple packages:" % spec) 96 print() 97 spack.cmd.display_specs(matching, **display_args) 98 print() 99 has_errors = True 100 101 # No installed package matches the query 102 if len(matching) == 0 and not force: 103 tty.error("%s does not match any installed packages." % spec) 104 has_errors = True 105 106 specs_from_cli.extend(matching) 107 if has_errors: 108 tty.die(error_message) 109 110 return specs_from_cli 111 112 113 def installed_dependents(specs): 114 """Returns a dictionary that maps a spec with a list of its 115 installed dependents 116 117 Args: 118 specs: list of specs to be checked for dependents 119 120 Returns: 121 dictionary of installed dependents 122 """ 123 dependents = {} 124 for item in specs: 125 lst = [x for x in item.package.installed_dependents if x not in specs] 126 if lst: 127 lst = list(set(lst)) 128 dependents[item] = lst 129 return dependents 130 131 132 def do_uninstall(specs, force): 133 """ 134 Uninstalls all the specs in a list. 135 136 Args: 137 specs: list of specs to be uninstalled 138 force: force uninstallation (boolean) 139 """ 140 packages = [] 141 for item in specs: 142 try: 143 # should work if package is known to spack 144 packages.append(item.package) 145 except spack.repository.UnknownPackageError: 146 # The package.py file has gone away -- but still 147 # want to uninstall. 
148 spack.Package(item).do_uninstall(force=True) 149 150 # Sort packages to be uninstalled by the number of installed dependents 151 # This ensures we do things in the right order 152 def num_installed_deps(pkg): 153 return len(pkg.installed_dependents) 154 155 packages.sort(key=num_installed_deps) 156 for item in packages: 157 item.do_uninstall(force=force) 158 159 160 def uninstall(parser, args): 161 if not args.packages: 162 tty.die("uninstall requires at least one package argument.") 163 164 with spack.installed_db.write_transaction(): 165 specs = spack.cmd.parse_specs(args.packages) 166 # Gets the list of installed specs that match the ones give via cli 167 # takes care of '-a' is given in the cli 168 uninstall_list = concretize_specs(specs, args.all, args.force) 169 dependent_list = installed_dependents( 170 uninstall_list) # takes care of '-d' 171 172 # Process dependent_list and update uninstall_list 173 has_error = False 174 if dependent_list and not args.dependents and not args.force: 175 for spec, lst in dependent_list.items(): 176 tty.error("Will not uninstall %s" % 177 spec.format("$_$@$%@$#", color=True)) 178 print('') 179 print("The following packages depend on it:") 180 spack.cmd.display_specs(lst, **display_args) 181 print('') 182 has_error = True 183 elif args.dependents: 184 for key, lst in dependent_list.items(): 185 uninstall_list.extend(lst) 186 uninstall_list = list(set(uninstall_list)) 187 188 if has_error: 189 tty.die('You can use spack uninstall --dependents ' 190 'to uninstall these dependencies as well') 191 192 if not args.yes_to_all: 193 tty.msg("The following packages will be uninstalled : ") 194 print('') 195 spack.cmd.display_specs(uninstall_list, **display_args) 196 print('') 197 spack.cmd.ask_for_confirmation('Do you want to proceed ? ') 198 199 # Uninstall everything on the list 200 do_uninstall(uninstall_list, args.force) 201 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/lib/spack/spack/cmd/uninstall.py b/lib/spack/spack/cmd/uninstall.py --- a/lib/spack/spack/cmd/uninstall.py +++ b/lib/spack/spack/cmd/uninstall.py @@ -99,7 +99,7 @@ has_errors = True # No installed package matches the query - if len(matching) == 0 and not force: + if len(matching) == 0: tty.error("%s does not match any installed packages." % spec) has_errors = True
{"golden_diff": "diff --git a/lib/spack/spack/cmd/uninstall.py b/lib/spack/spack/cmd/uninstall.py\n--- a/lib/spack/spack/cmd/uninstall.py\n+++ b/lib/spack/spack/cmd/uninstall.py\n@@ -99,7 +99,7 @@\n has_errors = True\n \n # No installed package matches the query\n- if len(matching) == 0 and not force:\n+ if len(matching) == 0:\n tty.error(\"%s does not match any installed packages.\" % spec)\n has_errors = True\n", "issue": "`spack uninstall -f` is broken\n@alalazo It looks like PR #670 might have broken \"spack uninstall -f\". Can you please look into it?\n\n```\n[me@ankeli icebin]$ spack uninstall -f openblas\n==> The following packages will be uninstalled : \n\n\n==> Do you want to proceed ? [y/n]\ny\n[me@ankeli icebin]$ ls /home/rpfische/spack/opt/spack/linux-x86_64/gcc-4.9.3/openblas-0.2.17-lvi5fks7zompzclvc2lyzwilfefal4ld/lib\nblas.a cmake libblas.a liblapack.a libopenblas.a libopenblas_nehalemp-r0.2.17.a\n```\n\n`spack uninstall -f` is broken\n@alalazo It looks like PR #670 might have broken \"spack uninstall -f\". Can you please look into it?\n\n```\n[me@ankeli icebin]$ spack uninstall -f openblas\n==> The following packages will be uninstalled : \n\n\n==> Do you want to proceed ? [y/n]\ny\n[me@ankeli icebin]$ ls /home/rpfische/spack/opt/spack/linux-x86_64/gcc-4.9.3/openblas-0.2.17-lvi5fks7zompzclvc2lyzwilfefal4ld/lib\nblas.a cmake libblas.a liblapack.a libopenblas.a libopenblas_nehalemp-r0.2.17.a\n```\n\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom __future__ import print_function\n\nimport argparse\n\nimport llnl.util.tty as tty\nimport spack\nimport spack.cmd\nimport spack.repository\n\ndescription = \"Remove an installed package\"\n\nerror_message = \"\"\"You can either:\n a) Use a more specific spec, or\n b) use spack uninstall -a to uninstall ALL matching specs.\n\"\"\"\n\n# Arguments for display_specs when we find ambiguity\ndisplay_args = {\n 'long': True,\n 'show_flags': True,\n 'variants': True\n}\n\n\ndef setup_parser(subparser):\n subparser.add_argument(\n '-f', '--force', action='store_true', dest='force',\n help=\"Remove regardless of whether other packages depend on this one.\")\n\n subparser.add_argument(\n '-a', '--all', action='store_true', dest='all',\n help=\"USE CAREFULLY. Remove ALL installed packages that match each \"\n \"supplied spec. 
i.e., if you say uninstall libelf, ALL versions \"\n \"of libelf are uninstalled. This is both useful and dangerous, \"\n \"like rm -r.\")\n\n subparser.add_argument(\n '-d', '--dependents', action='store_true', dest='dependents',\n help='Also uninstall any packages that depend on the ones given '\n 'via command line.')\n\n subparser.add_argument(\n '-y', '--yes-to-all', action='store_true', dest='yes_to_all',\n help='Assume \"yes\" is the answer to every confirmation requested')\n\n subparser.add_argument(\n 'packages',\n nargs=argparse.REMAINDER,\n help=\"specs of packages to uninstall\")\n\n\ndef concretize_specs(specs, allow_multiple_matches=False, force=False):\n \"\"\"Returns a list of specs matching the non necessarily\n concretized specs given from cli\n\n Args:\n specs: list of specs to be matched against installed packages\n allow_multiple_matches : if True multiple matches are admitted\n\n Return:\n list of specs\n \"\"\"\n # List of specs that match expressions given via command line\n specs_from_cli = []\n has_errors = False\n for spec in specs:\n matching = spack.installed_db.query(spec)\n # For each spec provided, make sure it refers to only one package.\n # Fail and ask user to be unambiguous if it doesn't\n if not allow_multiple_matches and len(matching) > 1:\n tty.error(\"%s matches multiple packages:\" % spec)\n print()\n spack.cmd.display_specs(matching, **display_args)\n print()\n has_errors = True\n\n # No installed package matches the query\n if len(matching) == 0 and not force:\n tty.error(\"%s does not match any installed packages.\" % spec)\n has_errors = True\n\n specs_from_cli.extend(matching)\n if has_errors:\n tty.die(error_message)\n\n return specs_from_cli\n\n\ndef installed_dependents(specs):\n \"\"\"Returns a dictionary that maps a spec with a list of its\n installed dependents\n\n Args:\n specs: list of specs to be checked for dependents\n\n Returns:\n dictionary of installed dependents\n \"\"\"\n dependents = {}\n for item in specs:\n lst = [x for x in item.package.installed_dependents if x not in specs]\n if lst:\n lst = list(set(lst))\n dependents[item] = lst\n return dependents\n\n\ndef do_uninstall(specs, force):\n \"\"\"\n Uninstalls all the specs in a list.\n\n Args:\n specs: list of specs to be uninstalled\n force: force uninstallation (boolean)\n \"\"\"\n packages = []\n for item in specs:\n try:\n # should work if package is known to spack\n packages.append(item.package)\n except spack.repository.UnknownPackageError:\n # The package.py file has gone away -- but still\n # want to uninstall.\n spack.Package(item).do_uninstall(force=True)\n\n # Sort packages to be uninstalled by the number of installed dependents\n # This ensures we do things in the right order\n def num_installed_deps(pkg):\n return len(pkg.installed_dependents)\n\n packages.sort(key=num_installed_deps)\n for item in packages:\n item.do_uninstall(force=force)\n\n\ndef uninstall(parser, args):\n if not args.packages:\n tty.die(\"uninstall requires at least one package argument.\")\n\n with spack.installed_db.write_transaction():\n specs = spack.cmd.parse_specs(args.packages)\n # Gets the list of installed specs that match the ones give via cli\n # takes care of '-a' is given in the cli\n uninstall_list = concretize_specs(specs, args.all, args.force)\n dependent_list = installed_dependents(\n uninstall_list) # takes care of '-d'\n\n # Process dependent_list and update uninstall_list\n has_error = False\n if dependent_list and not args.dependents and not args.force:\n for spec, lst 
in dependent_list.items():\n tty.error(\"Will not uninstall %s\" %\n spec.format(\"$_$@$%@$#\", color=True))\n print('')\n print(\"The following packages depend on it:\")\n spack.cmd.display_specs(lst, **display_args)\n print('')\n has_error = True\n elif args.dependents:\n for key, lst in dependent_list.items():\n uninstall_list.extend(lst)\n uninstall_list = list(set(uninstall_list))\n\n if has_error:\n tty.die('You can use spack uninstall --dependents '\n 'to uninstall these dependencies as well')\n\n if not args.yes_to_all:\n tty.msg(\"The following packages will be uninstalled : \")\n print('')\n spack.cmd.display_specs(uninstall_list, **display_args)\n print('')\n spack.cmd.ask_for_confirmation('Do you want to proceed ? ')\n\n # Uninstall everything on the list\n do_uninstall(uninstall_list, args.force)\n", "path": "lib/spack/spack/cmd/uninstall.py"}], "after_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom __future__ import print_function\n\nimport argparse\n\nimport llnl.util.tty as tty\nimport spack\nimport spack.cmd\nimport spack.repository\n\ndescription = \"Remove an installed package\"\n\nerror_message = \"\"\"You can either:\n a) Use a more specific spec, or\n b) use spack uninstall -a to uninstall ALL matching specs.\n\"\"\"\n\n# Arguments for display_specs when we find ambiguity\ndisplay_args = {\n 'long': True,\n 'show_flags': True,\n 'variants': True\n}\n\n\ndef setup_parser(subparser):\n subparser.add_argument(\n '-f', '--force', action='store_true', dest='force',\n help=\"Remove regardless of whether other packages depend on this one.\")\n\n subparser.add_argument(\n '-a', '--all', action='store_true', dest='all',\n help=\"USE CAREFULLY. Remove ALL installed packages that match each \"\n \"supplied spec. i.e., if you say uninstall libelf, ALL versions \"\n \"of libelf are uninstalled. 
This is both useful and dangerous, \"\n \"like rm -r.\")\n\n subparser.add_argument(\n '-d', '--dependents', action='store_true', dest='dependents',\n help='Also uninstall any packages that depend on the ones given '\n 'via command line.')\n\n subparser.add_argument(\n '-y', '--yes-to-all', action='store_true', dest='yes_to_all',\n help='Assume \"yes\" is the answer to every confirmation requested')\n\n subparser.add_argument(\n 'packages',\n nargs=argparse.REMAINDER,\n help=\"specs of packages to uninstall\")\n\n\ndef concretize_specs(specs, allow_multiple_matches=False, force=False):\n \"\"\"Returns a list of specs matching the non necessarily\n concretized specs given from cli\n\n Args:\n specs: list of specs to be matched against installed packages\n allow_multiple_matches : if True multiple matches are admitted\n\n Return:\n list of specs\n \"\"\"\n # List of specs that match expressions given via command line\n specs_from_cli = []\n has_errors = False\n for spec in specs:\n matching = spack.installed_db.query(spec)\n # For each spec provided, make sure it refers to only one package.\n # Fail and ask user to be unambiguous if it doesn't\n if not allow_multiple_matches and len(matching) > 1:\n tty.error(\"%s matches multiple packages:\" % spec)\n print()\n spack.cmd.display_specs(matching, **display_args)\n print()\n has_errors = True\n\n # No installed package matches the query\n if len(matching) == 0:\n tty.error(\"%s does not match any installed packages.\" % spec)\n has_errors = True\n\n specs_from_cli.extend(matching)\n if has_errors:\n tty.die(error_message)\n\n return specs_from_cli\n\n\ndef installed_dependents(specs):\n \"\"\"Returns a dictionary that maps a spec with a list of its\n installed dependents\n\n Args:\n specs: list of specs to be checked for dependents\n\n Returns:\n dictionary of installed dependents\n \"\"\"\n dependents = {}\n for item in specs:\n lst = [x for x in item.package.installed_dependents if x not in specs]\n if lst:\n lst = list(set(lst))\n dependents[item] = lst\n return dependents\n\n\ndef do_uninstall(specs, force):\n \"\"\"\n Uninstalls all the specs in a list.\n\n Args:\n specs: list of specs to be uninstalled\n force: force uninstallation (boolean)\n \"\"\"\n packages = []\n for item in specs:\n try:\n # should work if package is known to spack\n packages.append(item.package)\n except spack.repository.UnknownPackageError:\n # The package.py file has gone away -- but still\n # want to uninstall.\n spack.Package(item).do_uninstall(force=True)\n\n # Sort packages to be uninstalled by the number of installed dependents\n # This ensures we do things in the right order\n def num_installed_deps(pkg):\n return len(pkg.installed_dependents)\n\n packages.sort(key=num_installed_deps)\n for item in packages:\n item.do_uninstall(force=force)\n\n\ndef uninstall(parser, args):\n if not args.packages:\n tty.die(\"uninstall requires at least one package argument.\")\n\n with spack.installed_db.write_transaction():\n specs = spack.cmd.parse_specs(args.packages)\n # Gets the list of installed specs that match the ones give via cli\n # takes care of '-a' is given in the cli\n uninstall_list = concretize_specs(specs, args.all, args.force)\n dependent_list = installed_dependents(\n uninstall_list) # takes care of '-d'\n\n # Process dependent_list and update uninstall_list\n has_error = False\n if dependent_list and not args.dependents and not args.force:\n for spec, lst in dependent_list.items():\n tty.error(\"Will not uninstall %s\" %\n spec.format(\"$_$@$%@$#\", 
color=True))\n print('')\n print(\"The following packages depend on it:\")\n spack.cmd.display_specs(lst, **display_args)\n print('')\n has_error = True\n elif args.dependents:\n for key, lst in dependent_list.items():\n uninstall_list.extend(lst)\n uninstall_list = list(set(uninstall_list))\n\n if has_error:\n tty.die('You can use spack uninstall --dependents '\n 'to uninstall these dependencies as well')\n\n if not args.yes_to_all:\n tty.msg(\"The following packages will be uninstalled : \")\n print('')\n spack.cmd.display_specs(uninstall_list, **display_args)\n print('')\n spack.cmd.ask_for_confirmation('Do you want to proceed ? ')\n\n # Uninstall everything on the list\n do_uninstall(uninstall_list, args.force)\n", "path": "lib/spack/spack/cmd/uninstall.py"}]}
2,705
121
gh_patches_debug_62174
rasdani/github-patches
git_diff
pyro-ppl__pyro-2617
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [bug] Samples from LKJCorrCholesky have the wrong shape ### Issue Description The batch and sample shapes for `LKJCorrCholesky` are swapped when sampling. ### Environment OS: Ubuntu 16.04 Python: 3.7.2 Torch version: 1.6.0 Pyro version: 1.4.0 ### Code Snippet Minimal example: ```python >>> dist = pyro.distributions.LKJCorrCholesky(d=3, eta=torch.ones(())).expand(12) >>> # batch shape and event shape are as you'd expect >>> dist.batch_shape torch.Size([12]) >>> dist.event_shape torch.Size([3, 3]) >>> # samples have correct shape when sample_shape=() >>> dist.shape(()) torch.Size([12, 3, 3]) >>> dist.sample().shape torch.Size([12, 3, 3]) >>> # samples have the wrong shape when sample_shape is non-unit >>> dist.shape((4,)) # as expected torch.Size([4, 12, 3, 3]) >>> dist.sample((4,)).shape torch.Size([12, 4, 3, 3]) ``` I think this line is the culprit: https://github.com/pyro-ppl/pyro/blob/7c2c22c10dffda8a33ffbd593cc8d58819959e40/pyro/distributions/lkj.py#L67 ```python def sample(self, sample_shape=torch.Size()): y = self._gen.sample(sample_shape=self.batch_shape + sample_shape).detach() z = y.mul(2).add(-1.0) return _vector_to_l_cholesky(z) ``` should be ```python def sample(self, sample_shape=torch.Size()): y = self._gen.sample(sample_shape=sample_shape + self.batch_shape).detach() z = y.mul(2).add(-1.0) return _vector_to_l_cholesky(z) ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pyro/distributions/lkj.py` Content: ``` 1 # Copyright (c) 2017-2019 Uber Technologies, Inc. 2 # SPDX-License-Identifier: Apache-2.0 3 4 import math 5 6 import torch 7 from torch.distributions import constraints 8 9 from pyro.distributions.constraints import corr_cholesky_constraint 10 from pyro.distributions.torch import Beta 11 from pyro.distributions.torch_distribution import TorchDistribution 12 from pyro.distributions.transforms.cholesky import _vector_to_l_cholesky 13 14 15 # TODO: Modify class to support more than one eta value at a time? 16 class LKJCorrCholesky(TorchDistribution): 17 """ 18 Generates cholesky factors of correlation matrices using an LKJ prior. 19 20 The expected use is to combine it with a vector of variances and pass it 21 to the scale_tril parameter of a multivariate distribution such as MultivariateNormal. 22 23 E.g., if theta is a (positive) vector of covariances with the same dimensionality 24 as this distribution, and Omega is sampled from this distribution, 25 scale_tril=torch.mm(torch.diag(sqrt(theta)), Omega) 26 27 Note that the `event_shape` of this distribution is `[d, d]` 28 29 .. note:: 30 31 When using this distribution with HMC/NUTS, it is important to 32 use a `step_size` such as 1e-4. If not, you are likely to experience LAPACK 33 errors regarding positive-definiteness. 34 35 For example usage, refer to 36 `pyro/examples/lkj.py <https://github.com/pyro-ppl/pyro/blob/dev/examples/lkj.py>`_. 37 38 :param int d: Dimensionality of the matrix 39 :param torch.Tensor eta: A single positive number parameterizing the distribution. 
40 """ 41 arg_constraints = {"eta": constraints.positive} 42 support = corr_cholesky_constraint 43 has_rsample = False 44 45 def __init__(self, d, eta, validate_args=None): 46 if eta.numel() != 1: 47 raise ValueError("eta must be a single number; for a larger batch size, call expand") 48 if d <= 1: 49 raise ValueError("d must be > 1 in any correlation matrix") 50 eta = eta.squeeze() 51 vector_size = (d * (d - 1)) // 2 52 alpha = eta.add(0.5 * (d - 1.0)) 53 54 concentrations = torch.empty(vector_size, dtype=eta.dtype, device=eta.device) 55 i = 0 56 for k in range(d - 1): 57 alpha -= .5 58 concentrations[..., i:(i + d - k - 1)] = alpha 59 i += d - k - 1 60 self._gen = Beta(concentrations, concentrations) 61 self.eta = eta 62 self._d = d 63 self._lkj_constant = None 64 super().__init__(torch.Size(), torch.Size((d, d)), validate_args=validate_args) 65 66 def sample(self, sample_shape=torch.Size()): 67 y = self._gen.sample(sample_shape=self.batch_shape + sample_shape).detach() 68 z = y.mul(2).add(-1.0) 69 return _vector_to_l_cholesky(z) 70 71 def expand(self, batch_shape, _instance=None): 72 new = self._get_checked_instance(LKJCorrCholesky, _instance) 73 batch_shape = torch.Size(batch_shape) 74 new._gen = self._gen 75 new.eta = self.eta 76 new._d = self._d 77 new._lkj_constant = self._lkj_constant 78 super(LKJCorrCholesky, new).__init__(batch_shape, self.event_shape, validate_args=False) 79 new._validate_args = self._validate_args 80 return new 81 82 def lkj_constant(self, eta, K): 83 if self._lkj_constant is not None: 84 return self._lkj_constant 85 86 Km1 = K - 1 87 88 constant = torch.lgamma(eta.add(0.5 * Km1)).mul(Km1) 89 90 k = torch.linspace(start=1, end=Km1, steps=Km1, dtype=eta.dtype, device=eta.device) 91 constant -= (k.mul(math.log(math.pi) * 0.5) + torch.lgamma(eta.add(0.5 * (Km1 - k)))).sum() 92 93 self._lkj_constant = constant 94 return constant 95 96 def log_prob(self, x): 97 if self._validate_args: 98 self._validate_sample(x) 99 100 eta = self.eta 101 102 lp = self.lkj_constant(eta, self._d) 103 104 Km1 = self._d - 1 105 106 log_diagonals = x.diagonal(offset=0, dim1=-1, dim2=-2)[..., 1:].log() 107 # TODO: Figure out why the `device` kwarg to torch.linspace seems to not work in certain situations, 108 # and a seemingly redundant .to(x.device) is needed below. 109 values = log_diagonals * torch.linspace(start=Km1 - 1, end=0, steps=Km1, 110 dtype=x.dtype, 111 device=x.device).expand_as(log_diagonals).to(x.device) 112 113 values += log_diagonals.mul(eta.mul(2).add(-2.0)) 114 values = values.sum(-1) + lp 115 values, _ = torch.broadcast_tensors(values, torch.empty(self.batch_shape)) 116 return values 117 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pyro/distributions/lkj.py b/pyro/distributions/lkj.py --- a/pyro/distributions/lkj.py +++ b/pyro/distributions/lkj.py @@ -64,7 +64,8 @@ super().__init__(torch.Size(), torch.Size((d, d)), validate_args=validate_args) def sample(self, sample_shape=torch.Size()): - y = self._gen.sample(sample_shape=self.batch_shape + sample_shape).detach() + with torch.no_grad(): + y = self._gen.sample(sample_shape=sample_shape + self.batch_shape) z = y.mul(2).add(-1.0) return _vector_to_l_cholesky(z)
{"golden_diff": "diff --git a/pyro/distributions/lkj.py b/pyro/distributions/lkj.py\n--- a/pyro/distributions/lkj.py\n+++ b/pyro/distributions/lkj.py\n@@ -64,7 +64,8 @@\n super().__init__(torch.Size(), torch.Size((d, d)), validate_args=validate_args)\n \n def sample(self, sample_shape=torch.Size()):\n- y = self._gen.sample(sample_shape=self.batch_shape + sample_shape).detach()\n+ with torch.no_grad():\n+ y = self._gen.sample(sample_shape=sample_shape + self.batch_shape)\n z = y.mul(2).add(-1.0)\n return _vector_to_l_cholesky(z)\n", "issue": "[bug] Samples from LKJCorrCholesky have the wrong shape\n### Issue Description\r\nThe batch and sample shapes for `LKJCorrCholesky` are swapped when sampling.\r\n\r\n### Environment\r\nOS: Ubuntu 16.04\r\nPython: 3.7.2\r\nTorch version: 1.6.0\r\nPyro version: 1.4.0\r\n\r\n### Code Snippet\r\nMinimal example:\r\n\r\n```python\r\n>>> dist = pyro.distributions.LKJCorrCholesky(d=3, eta=torch.ones(())).expand(12)\r\n>>> # batch shape and event shape are as you'd expect\r\n>>> dist.batch_shape\r\ntorch.Size([12])\r\n>>> dist.event_shape\r\ntorch.Size([3, 3])\r\n>>> # samples have correct shape when sample_shape=()\r\n>>> dist.shape(())\r\ntorch.Size([12, 3, 3])\r\n>>> dist.sample().shape\r\ntorch.Size([12, 3, 3])\r\n>>> # samples have the wrong shape when sample_shape is non-unit\r\n>>> dist.shape((4,)) # as expected\r\ntorch.Size([4, 12, 3, 3])\r\n>>> dist.sample((4,)).shape\r\ntorch.Size([12, 4, 3, 3])\r\n```\r\n\r\nI think this line is the culprit: https://github.com/pyro-ppl/pyro/blob/7c2c22c10dffda8a33ffbd593cc8d58819959e40/pyro/distributions/lkj.py#L67\r\n\r\n```python\r\n def sample(self, sample_shape=torch.Size()):\r\n y = self._gen.sample(sample_shape=self.batch_shape + sample_shape).detach()\r\n z = y.mul(2).add(-1.0)\r\n return _vector_to_l_cholesky(z)\r\n```\r\n\r\nshould be \r\n\r\n```python\r\n def sample(self, sample_shape=torch.Size()):\r\n y = self._gen.sample(sample_shape=sample_shape + self.batch_shape).detach()\r\n z = y.mul(2).add(-1.0)\r\n return _vector_to_l_cholesky(z)\r\n```\r\n\n", "before_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport math\n\nimport torch\nfrom torch.distributions import constraints\n\nfrom pyro.distributions.constraints import corr_cholesky_constraint\nfrom pyro.distributions.torch import Beta\nfrom pyro.distributions.torch_distribution import TorchDistribution\nfrom pyro.distributions.transforms.cholesky import _vector_to_l_cholesky\n\n\n# TODO: Modify class to support more than one eta value at a time?\nclass LKJCorrCholesky(TorchDistribution):\n \"\"\"\n Generates cholesky factors of correlation matrices using an LKJ prior.\n\n The expected use is to combine it with a vector of variances and pass it\n to the scale_tril parameter of a multivariate distribution such as MultivariateNormal.\n\n E.g., if theta is a (positive) vector of covariances with the same dimensionality\n as this distribution, and Omega is sampled from this distribution,\n scale_tril=torch.mm(torch.diag(sqrt(theta)), Omega)\n\n Note that the `event_shape` of this distribution is `[d, d]`\n\n .. note::\n\n When using this distribution with HMC/NUTS, it is important to\n use a `step_size` such as 1e-4. 
If not, you are likely to experience LAPACK\n errors regarding positive-definiteness.\n\n For example usage, refer to\n `pyro/examples/lkj.py <https://github.com/pyro-ppl/pyro/blob/dev/examples/lkj.py>`_.\n\n :param int d: Dimensionality of the matrix\n :param torch.Tensor eta: A single positive number parameterizing the distribution.\n \"\"\"\n arg_constraints = {\"eta\": constraints.positive}\n support = corr_cholesky_constraint\n has_rsample = False\n\n def __init__(self, d, eta, validate_args=None):\n if eta.numel() != 1:\n raise ValueError(\"eta must be a single number; for a larger batch size, call expand\")\n if d <= 1:\n raise ValueError(\"d must be > 1 in any correlation matrix\")\n eta = eta.squeeze()\n vector_size = (d * (d - 1)) // 2\n alpha = eta.add(0.5 * (d - 1.0))\n\n concentrations = torch.empty(vector_size, dtype=eta.dtype, device=eta.device)\n i = 0\n for k in range(d - 1):\n alpha -= .5\n concentrations[..., i:(i + d - k - 1)] = alpha\n i += d - k - 1\n self._gen = Beta(concentrations, concentrations)\n self.eta = eta\n self._d = d\n self._lkj_constant = None\n super().__init__(torch.Size(), torch.Size((d, d)), validate_args=validate_args)\n\n def sample(self, sample_shape=torch.Size()):\n y = self._gen.sample(sample_shape=self.batch_shape + sample_shape).detach()\n z = y.mul(2).add(-1.0)\n return _vector_to_l_cholesky(z)\n\n def expand(self, batch_shape, _instance=None):\n new = self._get_checked_instance(LKJCorrCholesky, _instance)\n batch_shape = torch.Size(batch_shape)\n new._gen = self._gen\n new.eta = self.eta\n new._d = self._d\n new._lkj_constant = self._lkj_constant\n super(LKJCorrCholesky, new).__init__(batch_shape, self.event_shape, validate_args=False)\n new._validate_args = self._validate_args\n return new\n\n def lkj_constant(self, eta, K):\n if self._lkj_constant is not None:\n return self._lkj_constant\n\n Km1 = K - 1\n\n constant = torch.lgamma(eta.add(0.5 * Km1)).mul(Km1)\n\n k = torch.linspace(start=1, end=Km1, steps=Km1, dtype=eta.dtype, device=eta.device)\n constant -= (k.mul(math.log(math.pi) * 0.5) + torch.lgamma(eta.add(0.5 * (Km1 - k)))).sum()\n\n self._lkj_constant = constant\n return constant\n\n def log_prob(self, x):\n if self._validate_args:\n self._validate_sample(x)\n\n eta = self.eta\n\n lp = self.lkj_constant(eta, self._d)\n\n Km1 = self._d - 1\n\n log_diagonals = x.diagonal(offset=0, dim1=-1, dim2=-2)[..., 1:].log()\n # TODO: Figure out why the `device` kwarg to torch.linspace seems to not work in certain situations,\n # and a seemingly redundant .to(x.device) is needed below.\n values = log_diagonals * torch.linspace(start=Km1 - 1, end=0, steps=Km1,\n dtype=x.dtype,\n device=x.device).expand_as(log_diagonals).to(x.device)\n\n values += log_diagonals.mul(eta.mul(2).add(-2.0))\n values = values.sum(-1) + lp\n values, _ = torch.broadcast_tensors(values, torch.empty(self.batch_shape))\n return values\n", "path": "pyro/distributions/lkj.py"}], "after_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport math\n\nimport torch\nfrom torch.distributions import constraints\n\nfrom pyro.distributions.constraints import corr_cholesky_constraint\nfrom pyro.distributions.torch import Beta\nfrom pyro.distributions.torch_distribution import TorchDistribution\nfrom pyro.distributions.transforms.cholesky import _vector_to_l_cholesky\n\n\n# TODO: Modify class to support more than one eta value at a time?\nclass LKJCorrCholesky(TorchDistribution):\n \"\"\"\n Generates cholesky factors of 
correlation matrices using an LKJ prior.\n\n The expected use is to combine it with a vector of variances and pass it\n to the scale_tril parameter of a multivariate distribution such as MultivariateNormal.\n\n E.g., if theta is a (positive) vector of covariances with the same dimensionality\n as this distribution, and Omega is sampled from this distribution,\n scale_tril=torch.mm(torch.diag(sqrt(theta)), Omega)\n\n Note that the `event_shape` of this distribution is `[d, d]`\n\n .. note::\n\n When using this distribution with HMC/NUTS, it is important to\n use a `step_size` such as 1e-4. If not, you are likely to experience LAPACK\n errors regarding positive-definiteness.\n\n For example usage, refer to\n `pyro/examples/lkj.py <https://github.com/pyro-ppl/pyro/blob/dev/examples/lkj.py>`_.\n\n :param int d: Dimensionality of the matrix\n :param torch.Tensor eta: A single positive number parameterizing the distribution.\n \"\"\"\n arg_constraints = {\"eta\": constraints.positive}\n support = corr_cholesky_constraint\n has_rsample = False\n\n def __init__(self, d, eta, validate_args=None):\n if eta.numel() != 1:\n raise ValueError(\"eta must be a single number; for a larger batch size, call expand\")\n if d <= 1:\n raise ValueError(\"d must be > 1 in any correlation matrix\")\n eta = eta.squeeze()\n vector_size = (d * (d - 1)) // 2\n alpha = eta.add(0.5 * (d - 1.0))\n\n concentrations = torch.empty(vector_size, dtype=eta.dtype, device=eta.device)\n i = 0\n for k in range(d - 1):\n alpha -= .5\n concentrations[..., i:(i + d - k - 1)] = alpha\n i += d - k - 1\n self._gen = Beta(concentrations, concentrations)\n self.eta = eta\n self._d = d\n self._lkj_constant = None\n super().__init__(torch.Size(), torch.Size((d, d)), validate_args=validate_args)\n\n def sample(self, sample_shape=torch.Size()):\n with torch.no_grad():\n y = self._gen.sample(sample_shape=sample_shape + self.batch_shape)\n z = y.mul(2).add(-1.0)\n return _vector_to_l_cholesky(z)\n\n def expand(self, batch_shape, _instance=None):\n new = self._get_checked_instance(LKJCorrCholesky, _instance)\n batch_shape = torch.Size(batch_shape)\n new._gen = self._gen\n new.eta = self.eta\n new._d = self._d\n new._lkj_constant = self._lkj_constant\n super(LKJCorrCholesky, new).__init__(batch_shape, self.event_shape, validate_args=False)\n new._validate_args = self._validate_args\n return new\n\n def lkj_constant(self, eta, K):\n if self._lkj_constant is not None:\n return self._lkj_constant\n\n Km1 = K - 1\n\n constant = torch.lgamma(eta.add(0.5 * Km1)).mul(Km1)\n\n k = torch.linspace(start=1, end=Km1, steps=Km1, dtype=eta.dtype, device=eta.device)\n constant -= (k.mul(math.log(math.pi) * 0.5) + torch.lgamma(eta.add(0.5 * (Km1 - k)))).sum()\n\n self._lkj_constant = constant\n return constant\n\n def log_prob(self, x):\n if self._validate_args:\n self._validate_sample(x)\n\n eta = self.eta\n\n lp = self.lkj_constant(eta, self._d)\n\n Km1 = self._d - 1\n\n log_diagonals = x.diagonal(offset=0, dim1=-1, dim2=-2)[..., 1:].log()\n # TODO: Figure out why the `device` kwarg to torch.linspace seems to not work in certain situations,\n # and a seemingly redundant .to(x.device) is needed below.\n values = log_diagonals * torch.linspace(start=Km1 - 1, end=0, steps=Km1,\n dtype=x.dtype,\n device=x.device).expand_as(log_diagonals).to(x.device)\n\n values += log_diagonals.mul(eta.mul(2).add(-2.0))\n values = values.sum(-1) + lp\n values, _ = torch.broadcast_tensors(values, torch.empty(self.batch_shape))\n return values\n", "path": 
"pyro/distributions/lkj.py"}]}
2,171
154
gh_patches_debug_11622
rasdani/github-patches
git_diff
aws-cloudformation__cfn-lint-1809
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- AWS::ApiGateway::Authorizer - E3008 Property "RestApiId" has no valid Fn::GetAtt options Given the following template: ```yaml AWSTemplateFormatVersion: '2010-09-09' Resources: ProvisionedProduct: Type: AWS::ServiceCatalog::CloudFormationProvisionedProduct Properties: ProductName: example ProvisioningArtifactName: v1 Authoriser: Type: AWS::ApiGateway::Authorizer Properties: RestApiId: !GetAtt ProvisionedProduct.Outputs.RestApiId Type: TOKEN ``` The following error is output when using cfn-lint 0.42.0: ``` E3008 Property "RestApiId" has no valid Fn::GetAtt options at Resources/Authoriser/Properties/RestApiId/Fn::GetAtt template.yml:13:7 ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/cfnlint/rules/resources/properties/ValueRefGetAtt.py` Content: ``` 1 """ 2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 SPDX-License-Identifier: MIT-0 4 """ 5 import six 6 from cfnlint.rules import CloudFormationLintRule 7 from cfnlint.rules import RuleMatch 8 from cfnlint.helpers import RESOURCE_SPECS 9 import cfnlint.helpers 10 11 12 class ValueRefGetAtt(CloudFormationLintRule): 13 """Check if Resource Properties are correct""" 14 id = 'E3008' 15 shortdesc = 'Check values of properties for valid Refs and GetAtts' 16 description = 'Checks resource properties for Ref and GetAtt values' 17 tags = ['resources', 'ref', 'getatt'] 18 19 def initialize(self, cfn): 20 """Initialize the rule""" 21 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'): 22 self.resource_property_types.append(resource_type_spec) 23 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'): 24 self.resource_sub_property_types.append(property_type_spec) 25 26 def is_value_a_list(self, path, property_name): 27 """ 28 Determines if the value checked is a list or a value in a list 29 We need to handle conditions in the path that could be nested, etc. 30 ['Resources', 'LoadBalancer', 'Properties', 'Subnets', 'Fn::If', 2, 'Fn::If', 2] 31 Numbers preceeded by a Fn::If should be removed and check repeated. 
32 """ 33 if path[-1] != property_name: 34 # Property doesn't match the property name 35 # Check if its a number and a condition 36 if isinstance(path[-1], int) and path[-2] == 'Fn::If': 37 return self.is_value_a_list(path[:-2], property_name) 38 39 return False 40 41 return True 42 43 def check_value_ref(self, value, path, **kwargs): 44 """Check Ref""" 45 matches = list() 46 cfn = kwargs.get('cfn') 47 value_specs = kwargs.get('value_specs', {}).get('Ref') 48 list_value_specs = kwargs.get('list_value_specs', {}).get('Ref') 49 property_type = kwargs.get('property_type') 50 property_name = kwargs.get('property_name') 51 if path[-1] == 'Ref' and property_type == 'List' and self.is_value_a_list(path[:-1], property_name): 52 specs = list_value_specs 53 else: 54 specs = value_specs 55 56 if not specs: 57 # If no Ref's are specified, just skip 58 # Opposite of GetAtt you will always have a Ref to a Parameter so if this is 59 # None it just hasn't been defined and we can skip 60 return matches 61 62 if value in cfn.template.get('Parameters', {}): 63 param = cfn.template.get('Parameters').get(value, {}) 64 parameter_type = param.get('Type') 65 valid_parameter_types = [] 66 for parameter in specs.get('Parameters'): 67 for param_type in RESOURCE_SPECS.get(cfn.regions[0]).get('ParameterTypes').get(parameter): 68 valid_parameter_types.append(param_type) 69 70 if not specs.get('Parameters'): 71 message = 'Property "{0}" has no valid Refs to Parameters at {1}' 72 matches.append(RuleMatch(path, message.format( 73 property_name, '/'.join(map(str, path))))) 74 elif parameter_type not in valid_parameter_types: 75 message = 'Property "{0}" can Ref to parameter of types [{1}] at {2}' 76 matches.append( 77 RuleMatch( 78 path, 79 message.format( 80 property_name, 81 ', '.join(map(str, valid_parameter_types)), 82 '/'.join(map(str, path))))) 83 if value in cfn.template.get('Resources', {}): 84 resource = cfn.template.get('Resources').get(value, {}) 85 resource_type = resource.get('Type') 86 if not specs.get('Resources'): 87 message = 'Property "{0}" has no valid Refs to Resources at {1}' 88 matches.append(RuleMatch(path, message.format( 89 property_name, '/'.join(map(str, path))))) 90 elif resource_type not in specs.get('Resources'): 91 message = 'Property "{0}" can Ref to resources of types [{1}] at {2}' 92 matches.append( 93 RuleMatch( 94 path, 95 message.format( 96 property_name, 97 ', '.join(map(str, specs.get('Resources'))), 98 '/'.join(map(str, path))))) 99 100 return matches 101 102 def check_value_getatt(self, value, path, **kwargs): 103 """Check GetAtt""" 104 matches = [] 105 cfn = kwargs.get('cfn') 106 value_specs = kwargs.get('value_specs', {}).get('GetAtt') 107 list_value_specs = kwargs.get('list_value_specs', {}).get('GetAtt') 108 property_type = kwargs.get('property_type') 109 property_name = kwargs.get('property_name') 110 # You can sometimes get a list or a string with . 
in it 111 if isinstance(value, list): 112 resource_name = value[0] 113 if len(value[1:]) == 1: 114 resource_attribute = value[1].split('.') 115 else: 116 resource_attribute = value[1:] 117 elif isinstance(value, six.string_types): 118 resource_name = value.split('.')[0] 119 resource_attribute = value.split('.')[1:] 120 is_value_a_list = self.is_value_a_list(path[:-1], property_name) 121 if path[-1] == 'Fn::GetAtt' and property_type == 'List' and is_value_a_list: 122 specs = list_value_specs 123 else: 124 specs = value_specs 125 126 resource_type = cfn.template.get('Resources', {}).get(resource_name, {}).get('Type') 127 128 if cfnlint.helpers.is_custom_resource(resource_type): 129 # A custom resource voids the spec. Move on 130 return matches 131 132 if resource_type == 'AWS::CloudFormation::Stack' and resource_attribute[0] == 'Outputs': 133 # Nested Stack Outputs 134 # if its a string type we are good and return matches 135 # if its a list its a failure as Outputs can only be strings 136 if is_value_a_list and property_type == 'List': 137 message = 'CloudFormation stack outputs need to be strings not lists at {0}' 138 matches.append(RuleMatch(path, message.format('/'.join(map(str, path))))) 139 140 return matches 141 142 if specs is None: 143 # GetAtt specs aren't specified skip 144 return matches 145 if not specs: 146 # GetAtt is specified but empty so there are no valid options 147 message = 'Property "{0}" has no valid Fn::GetAtt options at {1}' 148 matches.append(RuleMatch(path, message.format(property_name, '/'.join(map(str, path))))) 149 return matches 150 151 if resource_type not in specs: 152 message = 'Property "{0}" can Fn::GetAtt to a resource of types [{1}] at {2}' 153 matches.append( 154 RuleMatch( 155 path, 156 message.format( 157 property_name, 158 ', '.join(map(str, specs)), 159 '/'.join(map(str, path))))) 160 elif '.'.join(map(str, resource_attribute)) != specs[resource_type]: 161 message = 'Property "{0}" can Fn::GetAtt to a resource attribute "{1}" at {2}' 162 matches.append( 163 RuleMatch( 164 path, 165 message.format( 166 property_name, 167 specs[resource_type], 168 '/'.join(map(str, path))))) 169 170 return matches 171 172 def check(self, cfn, properties, value_specs, property_specs, path): 173 """Check itself""" 174 matches = list() 175 for p_value, p_path in properties.items_safe(path[:]): 176 for prop in p_value: 177 if prop in value_specs: 178 value = value_specs.get(prop).get('Value', {}) 179 if value: 180 value_type = value.get('ValueType', '') 181 list_value_type = value.get('ListValueType', '') 182 property_type = property_specs.get('Properties').get(prop).get('Type') 183 matches.extend( 184 cfn.check_value( 185 p_value, prop, p_path, 186 check_ref=self.check_value_ref, 187 check_get_att=self.check_value_getatt, 188 value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get( 189 'ValueTypes').get(value_type, {}), 190 list_value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get( 191 'ValueTypes').get(list_value_type, {}), 192 cfn=cfn, property_type=property_type, property_name=prop 193 ) 194 ) 195 196 return matches 197 198 def match_resource_sub_properties(self, properties, property_type, path, cfn): 199 """Match for sub properties""" 200 matches = list() 201 202 specs = RESOURCE_SPECS.get(cfn.regions[0]).get( 203 'PropertyTypes').get(property_type, {}).get('Properties', {}) 204 property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type) 205 matches.extend(self.check(cfn, properties, specs, property_specs, path)) 206 207 return matches 208 
209 def match_resource_properties(self, properties, resource_type, path, cfn): 210 """Check CloudFormation Properties""" 211 matches = list() 212 213 specs = RESOURCE_SPECS.get(cfn.regions[0]).get( 214 'ResourceTypes').get(resource_type, {}).get('Properties', {}) 215 resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type) 216 matches.extend(self.check(cfn, properties, specs, resource_specs, path)) 217 218 return matches 219 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/cfnlint/rules/resources/properties/ValueRefGetAtt.py b/src/cfnlint/rules/resources/properties/ValueRefGetAtt.py --- a/src/cfnlint/rules/resources/properties/ValueRefGetAtt.py +++ b/src/cfnlint/rules/resources/properties/ValueRefGetAtt.py @@ -129,7 +129,7 @@ # A custom resource voids the spec. Move on return matches - if resource_type == 'AWS::CloudFormation::Stack' and resource_attribute[0] == 'Outputs': + if resource_type in ['AWS::CloudFormation::Stack', 'AWS::ServiceCatalog::CloudFormationProvisionedProduct'] and resource_attribute[0] == 'Outputs': # Nested Stack Outputs # if its a string type we are good and return matches # if its a list its a failure as Outputs can only be strings
{"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/ValueRefGetAtt.py b/src/cfnlint/rules/resources/properties/ValueRefGetAtt.py\n--- a/src/cfnlint/rules/resources/properties/ValueRefGetAtt.py\n+++ b/src/cfnlint/rules/resources/properties/ValueRefGetAtt.py\n@@ -129,7 +129,7 @@\n # A custom resource voids the spec. Move on\n return matches\n \n- if resource_type == 'AWS::CloudFormation::Stack' and resource_attribute[0] == 'Outputs':\n+ if resource_type in ['AWS::CloudFormation::Stack', 'AWS::ServiceCatalog::CloudFormationProvisionedProduct'] and resource_attribute[0] == 'Outputs':\n # Nested Stack Outputs\n # if its a string type we are good and return matches\n # if its a list its a failure as Outputs can only be strings\n", "issue": "AWS::ApiGateway::Authorizer - E3008 Property \"RestApiId\" has no valid Fn::GetAtt options\nGiven the following template:\r\n\r\n```yaml\r\nAWSTemplateFormatVersion: '2010-09-09'\r\n\r\nResources:\r\n ProvisionedProduct:\r\n Type: AWS::ServiceCatalog::CloudFormationProvisionedProduct\r\n Properties:\r\n ProductName: example\r\n ProvisioningArtifactName: v1\r\n\r\n Authoriser:\r\n Type: AWS::ApiGateway::Authorizer\r\n Properties:\r\n RestApiId: !GetAtt ProvisionedProduct.Outputs.RestApiId\r\n Type: TOKEN\r\n```\r\n\r\nThe following error is output when using cfn-lint 0.42.0:\r\n\r\n```\r\nE3008 Property \"RestApiId\" has no valid Fn::GetAtt options at Resources/Authoriser/Properties/RestApiId/Fn::GetAtt\r\ntemplate.yml:13:7\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import RESOURCE_SPECS\nimport cfnlint.helpers\n\n\nclass ValueRefGetAtt(CloudFormationLintRule):\n \"\"\"Check if Resource Properties are correct\"\"\"\n id = 'E3008'\n shortdesc = 'Check values of properties for valid Refs and GetAtts'\n description = 'Checks resource properties for Ref and GetAtt values'\n tags = ['resources', 'ref', 'getatt']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def is_value_a_list(self, path, property_name):\n \"\"\"\n Determines if the value checked is a list or a value in a list\n We need to handle conditions in the path that could be nested, etc.\n ['Resources', 'LoadBalancer', 'Properties', 'Subnets', 'Fn::If', 2, 'Fn::If', 2]\n Numbers preceeded by a Fn::If should be removed and check repeated.\n \"\"\"\n if path[-1] != property_name:\n # Property doesn't match the property name\n # Check if its a number and a condition\n if isinstance(path[-1], int) and path[-2] == 'Fn::If':\n return self.is_value_a_list(path[:-2], property_name)\n\n return False\n\n return True\n\n def check_value_ref(self, value, path, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = list()\n cfn = kwargs.get('cfn')\n value_specs = kwargs.get('value_specs', {}).get('Ref')\n list_value_specs = kwargs.get('list_value_specs', {}).get('Ref')\n property_type = kwargs.get('property_type')\n property_name = kwargs.get('property_name')\n if path[-1] == 'Ref' and property_type == 'List' and self.is_value_a_list(path[:-1], property_name):\n specs = 
list_value_specs\n else:\n specs = value_specs\n\n if not specs:\n # If no Ref's are specified, just skip\n # Opposite of GetAtt you will always have a Ref to a Parameter so if this is\n # None it just hasn't been defined and we can skip\n return matches\n\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_type = param.get('Type')\n valid_parameter_types = []\n for parameter in specs.get('Parameters'):\n for param_type in RESOURCE_SPECS.get(cfn.regions[0]).get('ParameterTypes').get(parameter):\n valid_parameter_types.append(param_type)\n\n if not specs.get('Parameters'):\n message = 'Property \"{0}\" has no valid Refs to Parameters at {1}'\n matches.append(RuleMatch(path, message.format(\n property_name, '/'.join(map(str, path)))))\n elif parameter_type not in valid_parameter_types:\n message = 'Property \"{0}\" can Ref to parameter of types [{1}] at {2}'\n matches.append(\n RuleMatch(\n path,\n message.format(\n property_name,\n ', '.join(map(str, valid_parameter_types)),\n '/'.join(map(str, path)))))\n if value in cfn.template.get('Resources', {}):\n resource = cfn.template.get('Resources').get(value, {})\n resource_type = resource.get('Type')\n if not specs.get('Resources'):\n message = 'Property \"{0}\" has no valid Refs to Resources at {1}'\n matches.append(RuleMatch(path, message.format(\n property_name, '/'.join(map(str, path)))))\n elif resource_type not in specs.get('Resources'):\n message = 'Property \"{0}\" can Ref to resources of types [{1}] at {2}'\n matches.append(\n RuleMatch(\n path,\n message.format(\n property_name,\n ', '.join(map(str, specs.get('Resources'))),\n '/'.join(map(str, path)))))\n\n return matches\n\n def check_value_getatt(self, value, path, **kwargs):\n \"\"\"Check GetAtt\"\"\"\n matches = []\n cfn = kwargs.get('cfn')\n value_specs = kwargs.get('value_specs', {}).get('GetAtt')\n list_value_specs = kwargs.get('list_value_specs', {}).get('GetAtt')\n property_type = kwargs.get('property_type')\n property_name = kwargs.get('property_name')\n # You can sometimes get a list or a string with . in it\n if isinstance(value, list):\n resource_name = value[0]\n if len(value[1:]) == 1:\n resource_attribute = value[1].split('.')\n else:\n resource_attribute = value[1:]\n elif isinstance(value, six.string_types):\n resource_name = value.split('.')[0]\n resource_attribute = value.split('.')[1:]\n is_value_a_list = self.is_value_a_list(path[:-1], property_name)\n if path[-1] == 'Fn::GetAtt' and property_type == 'List' and is_value_a_list:\n specs = list_value_specs\n else:\n specs = value_specs\n\n resource_type = cfn.template.get('Resources', {}).get(resource_name, {}).get('Type')\n\n if cfnlint.helpers.is_custom_resource(resource_type):\n # A custom resource voids the spec. 
Move on\n return matches\n\n if resource_type == 'AWS::CloudFormation::Stack' and resource_attribute[0] == 'Outputs':\n # Nested Stack Outputs\n # if its a string type we are good and return matches\n # if its a list its a failure as Outputs can only be strings\n if is_value_a_list and property_type == 'List':\n message = 'CloudFormation stack outputs need to be strings not lists at {0}'\n matches.append(RuleMatch(path, message.format('/'.join(map(str, path)))))\n\n return matches\n\n if specs is None:\n # GetAtt specs aren't specified skip\n return matches\n if not specs:\n # GetAtt is specified but empty so there are no valid options\n message = 'Property \"{0}\" has no valid Fn::GetAtt options at {1}'\n matches.append(RuleMatch(path, message.format(property_name, '/'.join(map(str, path)))))\n return matches\n\n if resource_type not in specs:\n message = 'Property \"{0}\" can Fn::GetAtt to a resource of types [{1}] at {2}'\n matches.append(\n RuleMatch(\n path,\n message.format(\n property_name,\n ', '.join(map(str, specs)),\n '/'.join(map(str, path)))))\n elif '.'.join(map(str, resource_attribute)) != specs[resource_type]:\n message = 'Property \"{0}\" can Fn::GetAtt to a resource attribute \"{1}\" at {2}'\n matches.append(\n RuleMatch(\n path,\n message.format(\n property_name,\n specs[resource_type],\n '/'.join(map(str, path)))))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n list_value_type = value.get('ListValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n check_get_att=self.check_value_getatt,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ValueTypes').get(value_type, {}),\n list_value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ValueTypes').get(list_value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/ValueRefGetAtt.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import RESOURCE_SPECS\nimport cfnlint.helpers\n\n\nclass ValueRefGetAtt(CloudFormationLintRule):\n \"\"\"Check if Resource Properties are correct\"\"\"\n id = 'E3008'\n shortdesc = 'Check values of properties for valid Refs and GetAtts'\n description = 'Checks resource properties for Ref and GetAtt values'\n tags = ['resources', 'ref', 'getatt']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def is_value_a_list(self, path, property_name):\n \"\"\"\n Determines if the value checked is a list or a value in a list\n We need to handle conditions in the path that could be nested, etc.\n ['Resources', 'LoadBalancer', 'Properties', 'Subnets', 'Fn::If', 2, 'Fn::If', 2]\n Numbers preceeded by a Fn::If should be removed and check repeated.\n \"\"\"\n if path[-1] != property_name:\n # Property doesn't match the property name\n # Check if its a number and a condition\n if isinstance(path[-1], int) and path[-2] == 'Fn::If':\n return self.is_value_a_list(path[:-2], property_name)\n\n return False\n\n return True\n\n def check_value_ref(self, value, path, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = list()\n cfn = kwargs.get('cfn')\n value_specs = kwargs.get('value_specs', {}).get('Ref')\n list_value_specs = kwargs.get('list_value_specs', {}).get('Ref')\n property_type = kwargs.get('property_type')\n property_name = kwargs.get('property_name')\n if path[-1] == 'Ref' and property_type == 'List' and self.is_value_a_list(path[:-1], property_name):\n specs = list_value_specs\n else:\n specs = value_specs\n\n if not specs:\n # If no Ref's are specified, just skip\n # Opposite of GetAtt you will always have a Ref to a Parameter so if this is\n # None it just hasn't been defined and we can skip\n return matches\n\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_type = param.get('Type')\n valid_parameter_types = []\n for parameter in specs.get('Parameters'):\n for param_type in RESOURCE_SPECS.get(cfn.regions[0]).get('ParameterTypes').get(parameter):\n valid_parameter_types.append(param_type)\n\n if not specs.get('Parameters'):\n message = 'Property \"{0}\" has no valid Refs to Parameters at {1}'\n matches.append(RuleMatch(path, message.format(\n property_name, '/'.join(map(str, path)))))\n elif parameter_type not in valid_parameter_types:\n message = 'Property \"{0}\" can Ref to parameter of types [{1}] at {2}'\n matches.append(\n RuleMatch(\n path,\n message.format(\n property_name,\n ', '.join(map(str, valid_parameter_types)),\n '/'.join(map(str, path)))))\n if value in cfn.template.get('Resources', {}):\n resource = cfn.template.get('Resources').get(value, {})\n resource_type = resource.get('Type')\n if not specs.get('Resources'):\n message = 'Property \"{0}\" has no valid Refs to Resources at {1}'\n matches.append(RuleMatch(path, message.format(\n property_name, '/'.join(map(str, path)))))\n elif resource_type not in specs.get('Resources'):\n message = 'Property \"{0}\" can Ref to resources of types [{1}] at {2}'\n matches.append(\n RuleMatch(\n path,\n 
message.format(\n property_name,\n ', '.join(map(str, specs.get('Resources'))),\n '/'.join(map(str, path)))))\n\n return matches\n\n def check_value_getatt(self, value, path, **kwargs):\n \"\"\"Check GetAtt\"\"\"\n matches = []\n cfn = kwargs.get('cfn')\n value_specs = kwargs.get('value_specs', {}).get('GetAtt')\n list_value_specs = kwargs.get('list_value_specs', {}).get('GetAtt')\n property_type = kwargs.get('property_type')\n property_name = kwargs.get('property_name')\n # You can sometimes get a list or a string with . in it\n if isinstance(value, list):\n resource_name = value[0]\n if len(value[1:]) == 1:\n resource_attribute = value[1].split('.')\n else:\n resource_attribute = value[1:]\n elif isinstance(value, six.string_types):\n resource_name = value.split('.')[0]\n resource_attribute = value.split('.')[1:]\n is_value_a_list = self.is_value_a_list(path[:-1], property_name)\n if path[-1] == 'Fn::GetAtt' and property_type == 'List' and is_value_a_list:\n specs = list_value_specs\n else:\n specs = value_specs\n\n resource_type = cfn.template.get('Resources', {}).get(resource_name, {}).get('Type')\n\n if cfnlint.helpers.is_custom_resource(resource_type):\n # A custom resource voids the spec. Move on\n return matches\n\n if resource_type in ['AWS::CloudFormation::Stack', 'AWS::ServiceCatalog::CloudFormationProvisionedProduct'] and resource_attribute[0] == 'Outputs':\n # Nested Stack Outputs\n # if its a string type we are good and return matches\n # if its a list its a failure as Outputs can only be strings\n if is_value_a_list and property_type == 'List':\n message = 'CloudFormation stack outputs need to be strings not lists at {0}'\n matches.append(RuleMatch(path, message.format('/'.join(map(str, path)))))\n\n return matches\n\n if specs is None:\n # GetAtt specs aren't specified skip\n return matches\n if not specs:\n # GetAtt is specified but empty so there are no valid options\n message = 'Property \"{0}\" has no valid Fn::GetAtt options at {1}'\n matches.append(RuleMatch(path, message.format(property_name, '/'.join(map(str, path)))))\n return matches\n\n if resource_type not in specs:\n message = 'Property \"{0}\" can Fn::GetAtt to a resource of types [{1}] at {2}'\n matches.append(\n RuleMatch(\n path,\n message.format(\n property_name,\n ', '.join(map(str, specs)),\n '/'.join(map(str, path)))))\n elif '.'.join(map(str, resource_attribute)) != specs[resource_type]:\n message = 'Property \"{0}\" can Fn::GetAtt to a resource attribute \"{1}\" at {2}'\n matches.append(\n RuleMatch(\n path,\n message.format(\n property_name,\n specs[resource_type],\n '/'.join(map(str, path)))))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n list_value_type = value.get('ListValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n check_get_att=self.check_value_getatt,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ValueTypes').get(value_type, {}),\n list_value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ValueTypes').get(list_value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def 
match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/ValueRefGetAtt.py"}]}
3,065
198
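A minimal sketch of the rule the `check_value_getatt` code in the record above encodes: an `Fn::GetAtt` into a nested stack's `Outputs` can only yield a string, so a List-typed property must not accept it. The helper name, simplified signature, and example values below are hypothetical, not cfn-lint's API.

```python
NESTED_STACK_TYPES = (
    "AWS::CloudFormation::Stack",
    "AWS::ServiceCatalog::CloudFormationProvisionedProduct",
)

def rejects_list_getatt(value, resource_type, property_is_list):
    # Mirror of the attribute split in the record: ["ChildStack", "Outputs.SubnetIds"]
    # becomes resource_attribute == ["Outputs", "SubnetIds"].
    resource_attribute = value[1].split(".") if len(value[1:]) == 1 else value[1:]
    nested_output = resource_type in NESTED_STACK_TYPES and resource_attribute[0] == "Outputs"
    # Nested stack outputs are strings, so a List property cannot take them.
    return nested_output and property_is_list

assert rejects_list_getatt(["ChildStack", "Outputs.SubnetIds"],
                           "AWS::CloudFormation::Stack", True)
```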
gh_patches_debug_12157
rasdani/github-patches
git_diff
pyro-ppl__pyro-198
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- pytorch broadcasting in various places in the codebase we have tensor ops like `expand_as()`. given the extended broadcasting functionality in the latest version of pytorch, some of these may be unnecessary and/or clunky. more generally, we should investigate and deal with any pytorch warnings that popped once once we switched pytorch versions. for example: _UserWarning: other is not broadcastable to self, but they have the same number of elements. Falling back to deprecated pointwise behavior._ --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `examples/categorical_bmm.py` Content: ``` 1 import argparse 2 3 import numpy as np 4 import torch 5 import torchvision.datasets as dset 6 import torchvision.transforms as transforms 7 import visdom 8 from torch.autograd import Variable 9 from torch.nn import Softmax 10 11 import pyro 12 from pyro.distributions import Bernoulli, Categorical 13 from pyro.infer.kl_qp import KL_QP 14 15 mnist = dset.MNIST( 16 root='./data', 17 train=True, 18 transform=None, 19 target_transform=None, 20 download=True) 21 print('dataset loaded') 22 23 softmax = Softmax() 24 25 train_loader = torch.utils.data.DataLoader( 26 dset.MNIST('../data', train=True, download=True, 27 transform=transforms.Compose([ 28 transforms.ToTensor(), 29 transforms.Normalize((0.1307,), (0.3081,)) 30 ])), 31 batch_size=128, shuffle=True) 32 test_loader = torch.utils.data.DataLoader( 33 dset.MNIST('../data', train=False, transform=transforms.Compose([ 34 transforms.ToTensor(), 35 transforms.Normalize((0.1307,), (0.3081,)) 36 ])), 37 batch_size=128, shuffle=True) 38 39 40 def local_model(i, datum): 41 beta = Variable(torch.ones(1, 10)) * 0.1 42 cll = pyro.sample("class_of_datum_" + str(i), Categorical(beta)) 43 mean_param = Variable(torch.zeros(1, 784), requires_grad=True) 44 # do MLE for class means 45 mu = pyro.param("mean_of_class_" + str(cll[0]), mean_param) 46 mu_param = softmax(mu) 47 pyro.observe("obs_" + str(i), Bernoulli(mu_param), datum) 48 return cll 49 50 51 def local_guide(i, datum): 52 alpha = torch.ones(1, 10) * 0.1 53 beta_q = Variable(alpha, requires_grad=True) 54 beta_param = pyro.param("class_posterior_", beta_q) 55 guide_params = softmax(beta_param) 56 cll = pyro.sample("class_of_datum_" + str(i), Categorical(guide_params)) 57 return cll 58 59 60 def inspect_posterior_samples(i): 61 cll = local_guide(i, None) 62 mean_param = Variable(torch.zeros(1, 784), requires_grad=True) 63 # do MLE for class means 64 mu = pyro.param("mean_of_class_" + str(cll[0]), mean_param) 65 dat = pyro.sample("obs_" + str(i), Bernoulli(mu)) 66 return dat 67 68 69 optim_fct = pyro.optim(torch.optim.Adam, {'lr': .0001}) 70 71 inference = KL_QP(local_model, local_guide, optim_fct) 72 73 vis = visdom.Visdom() 74 75 nr_epochs = 50 76 # apply it to minibatches of data by hand: 77 78 mnist_data = Variable(train_loader.dataset.train_data.float() / 255.) 
79 mnist_labels = Variable(train_loader.dataset.train_labels) 80 mnist_size = mnist_data.size(0) 81 batch_size = 1 # 64 82 83 all_batches = np.arange(0, mnist_size, batch_size) 84 85 if all_batches[-1] != mnist_size: 86 all_batches = list(all_batches) + [mnist_size] 87 88 89 def main(): 90 parser = argparse.ArgumentParser(description="parse args") 91 parser.add_argument('-n', '--num-epochs', nargs='?', default=1000, type=int) 92 args = parser.parse_args() 93 for i in range(args.num_epochs): 94 epoch_loss = 0. 95 for ix, batch_start in enumerate(all_batches[:-1]): 96 batch_end = all_batches[ix + 1] 97 batch_data = mnist_data[batch_start:batch_end] 98 bs_size = batch_data.size(0) 99 batch_class_raw = mnist_labels[batch_start:batch_end] 100 batch_class = torch.zeros(bs_size, 10) # maybe it needs a FloatTensor 101 batch_class.scatter_(1, batch_class_raw.data.view(-1, 1), 1) 102 batch_class = Variable(batch_class) 103 epoch_loss += inference.step(ix, batch_data) 104 105 # optional visualization! 106 # vis.image(batch_data[0].view(28, 28).data.numpy()) 107 # vis.image(sample[0].view(28, 28).data.numpy()) 108 # vis.image(sample_mu[0].view(28, 28).data.numpy()) 109 print("epoch avg loss {}".format(epoch_loss / float(mnist_size))) 110 111 112 if __name__ == '__main__': 113 main() 114 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/examples/categorical_bmm.py b/examples/categorical_bmm.py --- a/examples/categorical_bmm.py +++ b/examples/categorical_bmm.py @@ -12,6 +12,7 @@ from pyro.distributions import Bernoulli, Categorical from pyro.infer.kl_qp import KL_QP + mnist = dset.MNIST( root='./data', train=True, @@ -44,7 +45,7 @@ # do MLE for class means mu = pyro.param("mean_of_class_" + str(cll[0]), mean_param) mu_param = softmax(mu) - pyro.observe("obs_" + str(i), Bernoulli(mu_param), datum) + pyro.observe("obs_" + str(i), Bernoulli(mu_param), datum.view(1, -1)) return cll
{"golden_diff": "diff --git a/examples/categorical_bmm.py b/examples/categorical_bmm.py\n--- a/examples/categorical_bmm.py\n+++ b/examples/categorical_bmm.py\n@@ -12,6 +12,7 @@\n from pyro.distributions import Bernoulli, Categorical\n from pyro.infer.kl_qp import KL_QP\n \n+\n mnist = dset.MNIST(\n root='./data',\n train=True,\n@@ -44,7 +45,7 @@\n # do MLE for class means\n mu = pyro.param(\"mean_of_class_\" + str(cll[0]), mean_param)\n mu_param = softmax(mu)\n- pyro.observe(\"obs_\" + str(i), Bernoulli(mu_param), datum)\n+ pyro.observe(\"obs_\" + str(i), Bernoulli(mu_param), datum.view(1, -1))\n return cll\n", "issue": "pytorch broadcasting\nin various places in the codebase we have tensor ops like `expand_as()`. given the extended broadcasting functionality in the latest version of pytorch, some of these may be unnecessary and/or clunky. more generally, we should investigate and deal with any pytorch warnings that popped once once we switched pytorch versions. for example: \r\n\r\n_UserWarning: other is not broadcastable to self, but they have the same number of elements. Falling back to deprecated pointwise behavior._\n", "before_files": [{"content": "import argparse\n\nimport numpy as np\nimport torch\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\nimport visdom\nfrom torch.autograd import Variable\nfrom torch.nn import Softmax\n\nimport pyro\nfrom pyro.distributions import Bernoulli, Categorical\nfrom pyro.infer.kl_qp import KL_QP\n\nmnist = dset.MNIST(\n root='./data',\n train=True,\n transform=None,\n target_transform=None,\n download=True)\nprint('dataset loaded')\n\nsoftmax = Softmax()\n\ntrain_loader = torch.utils.data.DataLoader(\n dset.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=128, shuffle=True)\ntest_loader = torch.utils.data.DataLoader(\n dset.MNIST('../data', train=False, transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=128, shuffle=True)\n\n\ndef local_model(i, datum):\n beta = Variable(torch.ones(1, 10)) * 0.1\n cll = pyro.sample(\"class_of_datum_\" + str(i), Categorical(beta))\n mean_param = Variable(torch.zeros(1, 784), requires_grad=True)\n # do MLE for class means\n mu = pyro.param(\"mean_of_class_\" + str(cll[0]), mean_param)\n mu_param = softmax(mu)\n pyro.observe(\"obs_\" + str(i), Bernoulli(mu_param), datum)\n return cll\n\n\ndef local_guide(i, datum):\n alpha = torch.ones(1, 10) * 0.1\n beta_q = Variable(alpha, requires_grad=True)\n beta_param = pyro.param(\"class_posterior_\", beta_q)\n guide_params = softmax(beta_param)\n cll = pyro.sample(\"class_of_datum_\" + str(i), Categorical(guide_params))\n return cll\n\n\ndef inspect_posterior_samples(i):\n cll = local_guide(i, None)\n mean_param = Variable(torch.zeros(1, 784), requires_grad=True)\n # do MLE for class means\n mu = pyro.param(\"mean_of_class_\" + str(cll[0]), mean_param)\n dat = pyro.sample(\"obs_\" + str(i), Bernoulli(mu))\n return dat\n\n\noptim_fct = pyro.optim(torch.optim.Adam, {'lr': .0001})\n\ninference = KL_QP(local_model, local_guide, optim_fct)\n\nvis = visdom.Visdom()\n\nnr_epochs = 50\n# apply it to minibatches of data by hand:\n\nmnist_data = Variable(train_loader.dataset.train_data.float() / 255.)\nmnist_labels = Variable(train_loader.dataset.train_labels)\nmnist_size = mnist_data.size(0)\nbatch_size = 1 # 64\n\nall_batches = np.arange(0, mnist_size, 
batch_size)\n\nif all_batches[-1] != mnist_size:\n all_batches = list(all_batches) + [mnist_size]\n\n\ndef main():\n parser = argparse.ArgumentParser(description=\"parse args\")\n parser.add_argument('-n', '--num-epochs', nargs='?', default=1000, type=int)\n args = parser.parse_args()\n for i in range(args.num_epochs):\n epoch_loss = 0.\n for ix, batch_start in enumerate(all_batches[:-1]):\n batch_end = all_batches[ix + 1]\n batch_data = mnist_data[batch_start:batch_end]\n bs_size = batch_data.size(0)\n batch_class_raw = mnist_labels[batch_start:batch_end]\n batch_class = torch.zeros(bs_size, 10) # maybe it needs a FloatTensor\n batch_class.scatter_(1, batch_class_raw.data.view(-1, 1), 1)\n batch_class = Variable(batch_class)\n epoch_loss += inference.step(ix, batch_data)\n\n # optional visualization!\n # vis.image(batch_data[0].view(28, 28).data.numpy())\n # vis.image(sample[0].view(28, 28).data.numpy())\n # vis.image(sample_mu[0].view(28, 28).data.numpy())\n print(\"epoch avg loss {}\".format(epoch_loss / float(mnist_size)))\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/categorical_bmm.py"}], "after_files": [{"content": "import argparse\n\nimport numpy as np\nimport torch\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\nimport visdom\nfrom torch.autograd import Variable\nfrom torch.nn import Softmax\n\nimport pyro\nfrom pyro.distributions import Bernoulli, Categorical\nfrom pyro.infer.kl_qp import KL_QP\n\n\nmnist = dset.MNIST(\n root='./data',\n train=True,\n transform=None,\n target_transform=None,\n download=True)\nprint('dataset loaded')\n\nsoftmax = Softmax()\n\ntrain_loader = torch.utils.data.DataLoader(\n dset.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=128, shuffle=True)\ntest_loader = torch.utils.data.DataLoader(\n dset.MNIST('../data', train=False, transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=128, shuffle=True)\n\n\ndef local_model(i, datum):\n beta = Variable(torch.ones(1, 10)) * 0.1\n cll = pyro.sample(\"class_of_datum_\" + str(i), Categorical(beta))\n mean_param = Variable(torch.zeros(1, 784), requires_grad=True)\n # do MLE for class means\n mu = pyro.param(\"mean_of_class_\" + str(cll[0]), mean_param)\n mu_param = softmax(mu)\n pyro.observe(\"obs_\" + str(i), Bernoulli(mu_param), datum.view(1, -1))\n return cll\n\n\ndef local_guide(i, datum):\n alpha = torch.ones(1, 10) * 0.1\n beta_q = Variable(alpha, requires_grad=True)\n beta_param = pyro.param(\"class_posterior_\", beta_q)\n guide_params = softmax(beta_param)\n cll = pyro.sample(\"class_of_datum_\" + str(i), Categorical(guide_params))\n return cll\n\n\ndef inspect_posterior_samples(i):\n cll = local_guide(i, None)\n mean_param = Variable(torch.zeros(1, 784), requires_grad=True)\n # do MLE for class means\n mu = pyro.param(\"mean_of_class_\" + str(cll[0]), mean_param)\n dat = pyro.sample(\"obs_\" + str(i), Bernoulli(mu))\n return dat\n\n\noptim_fct = pyro.optim(torch.optim.Adam, {'lr': .0001})\n\ninference = KL_QP(local_model, local_guide, optim_fct)\n\nvis = visdom.Visdom()\n\nnr_epochs = 50\n# apply it to minibatches of data by hand:\n\nmnist_data = Variable(train_loader.dataset.train_data.float() / 255.)\nmnist_labels = Variable(train_loader.dataset.train_labels)\nmnist_size = mnist_data.size(0)\nbatch_size = 1 # 64\n\nall_batches = np.arange(0, mnist_size, 
batch_size)\n\nif all_batches[-1] != mnist_size:\n all_batches = list(all_batches) + [mnist_size]\n\n\ndef main():\n parser = argparse.ArgumentParser(description=\"parse args\")\n parser.add_argument('-n', '--num-epochs', nargs='?', default=1000, type=int)\n args = parser.parse_args()\n for i in range(args.num_epochs):\n epoch_loss = 0.\n for ix, batch_start in enumerate(all_batches[:-1]):\n batch_end = all_batches[ix + 1]\n batch_data = mnist_data[batch_start:batch_end]\n bs_size = batch_data.size(0)\n batch_class_raw = mnist_labels[batch_start:batch_end]\n batch_class = torch.zeros(bs_size, 10) # maybe it needs a FloatTensor\n batch_class.scatter_(1, batch_class_raw.data.view(-1, 1), 1)\n batch_class = Variable(batch_class)\n epoch_loss += inference.step(ix, batch_data)\n\n # optional visualization!\n # vis.image(batch_data[0].view(28, 28).data.numpy())\n # vis.image(sample[0].view(28, 28).data.numpy())\n # vis.image(sample_mu[0].view(28, 28).data.numpy())\n print(\"epoch avg loss {}\".format(epoch_loss / float(mnist_size)))\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/categorical_bmm.py"}]}
1,593
192
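The golden diff in the record above fixes the broadcasting warning by reshaping the observed image with `datum.view(1, -1)` so it matches the Bernoulli parameter tensor exactly, rather than relying on the deprecated "same number of elements" fallback. A minimal PyTorch sketch of that shape fix, with placeholder tensor values:

```python
import torch

mu_param = torch.rand(1, 784)   # per-pixel Bernoulli means, batch of one
datum = torch.rand(28, 28)      # one MNIST image as a 2-D tensor

# Same element count, different shapes: exactly the situation the UserWarning
# in the issue complains about. An explicit reshape removes the ambiguity.
datum_flat = datum.view(1, -1)  # -> shape (1, 784), matching mu_param
assert datum_flat.shape == mu_param.shape
```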
gh_patches_debug_22576
rasdani/github-patches
git_diff
google__mobly-799
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Mobly Release 1.11.1 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 # Copyright 2016 Google Inc. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import platform 16 import setuptools 17 from setuptools.command import test 18 import sys 19 20 install_requires = [ 21 'portpicker', 'pyserial', 'pyyaml', 'timeout_decorator', 'typing_extensions' 22 ] 23 24 if platform.system() == 'Windows': 25 install_requires.append('pywin32') 26 27 28 class PyTest(test.test): 29 """Class used to execute unit tests using PyTest. This allows us to execute 30 unit tests without having to install the package. 31 """ 32 33 def finalize_options(self): 34 test.test.finalize_options(self) 35 self.test_args = ['-x', "tests/mobly"] 36 self.test_suite = True 37 38 def run_tests(self): 39 import pytest 40 errno = pytest.main(self.test_args) 41 sys.exit(errno) 42 43 44 def main(): 45 setuptools.setup( 46 name='mobly', 47 version='1.11', 48 maintainer='Ang Li', 49 maintainer_email='[email protected]', 50 description='Automation framework for special end-to-end test cases', 51 license='Apache2.0', 52 url='https://github.com/google/mobly', 53 download_url='https://github.com/google/mobly/tarball/1.11', 54 packages=setuptools.find_packages(exclude=['tests']), 55 include_package_data=False, 56 scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'], 57 tests_require=[ 58 'mock', 59 'pytest', 60 'pytz', 61 ], 62 install_requires=install_requires, 63 cmdclass={'test': PyTest}, 64 ) 65 66 67 if __name__ == '__main__': 68 main() 69 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -18,7 +18,7 @@ import sys install_requires = [ - 'portpicker', 'pyserial', 'pyyaml', 'timeout_decorator', 'typing_extensions' + 'portpicker', 'pyserial', 'pyyaml', 'timeout_decorator', 'typing_extensions>=4.1.1' ] if platform.system() == 'Windows': @@ -44,13 +44,13 @@ def main(): setuptools.setup( name='mobly', - version='1.11', + version='1.11.1', maintainer='Ang Li', maintainer_email='[email protected]', description='Automation framework for special end-to-end test cases', license='Apache2.0', url='https://github.com/google/mobly', - download_url='https://github.com/google/mobly/tarball/1.11', + download_url='https://github.com/google/mobly/tarball/1.11.1', packages=setuptools.find_packages(exclude=['tests']), include_package_data=False, scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,7 +18,7 @@\n import sys\n \n install_requires = [\n- 'portpicker', 'pyserial', 'pyyaml', 'timeout_decorator', 'typing_extensions'\n+ 'portpicker', 'pyserial', 'pyyaml', 'timeout_decorator', 'typing_extensions>=4.1.1'\n ]\n \n if platform.system() == 'Windows':\n@@ -44,13 +44,13 @@\n def main():\n setuptools.setup(\n name='mobly',\n- version='1.11',\n+ version='1.11.1',\n maintainer='Ang Li',\n maintainer_email='[email protected]',\n description='Automation framework for special end-to-end test cases',\n license='Apache2.0',\n url='https://github.com/google/mobly',\n- download_url='https://github.com/google/mobly/tarball/1.11',\n+ download_url='https://github.com/google/mobly/tarball/1.11.1',\n packages=setuptools.find_packages(exclude=['tests']),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n", "issue": "Mobly Release 1.11.1\n\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport platform\nimport setuptools\nfrom setuptools.command import test\nimport sys\n\ninstall_requires = [\n 'portpicker', 'pyserial', 'pyyaml', 'timeout_decorator', 'typing_extensions'\n]\n\nif platform.system() == 'Windows':\n install_requires.append('pywin32')\n\n\nclass PyTest(test.test):\n \"\"\"Class used to execute unit tests using PyTest. 
This allows us to execute\n unit tests without having to install the package.\n \"\"\"\n\n def finalize_options(self):\n test.test.finalize_options(self)\n self.test_args = ['-x', \"tests/mobly\"]\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ndef main():\n setuptools.setup(\n name='mobly',\n version='1.11',\n maintainer='Ang Li',\n maintainer_email='[email protected]',\n description='Automation framework for special end-to-end test cases',\n license='Apache2.0',\n url='https://github.com/google/mobly',\n download_url='https://github.com/google/mobly/tarball/1.11',\n packages=setuptools.find_packages(exclude=['tests']),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n tests_require=[\n 'mock',\n 'pytest',\n 'pytz',\n ],\n install_requires=install_requires,\n cmdclass={'test': PyTest},\n )\n\n\nif __name__ == '__main__':\n main()\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport platform\nimport setuptools\nfrom setuptools.command import test\nimport sys\n\ninstall_requires = [\n 'portpicker', 'pyserial', 'pyyaml', 'timeout_decorator', 'typing_extensions>=4.1.1'\n]\n\nif platform.system() == 'Windows':\n install_requires.append('pywin32')\n\n\nclass PyTest(test.test):\n \"\"\"Class used to execute unit tests using PyTest. This allows us to execute\n unit tests without having to install the package.\n \"\"\"\n\n def finalize_options(self):\n test.test.finalize_options(self)\n self.test_args = ['-x', \"tests/mobly\"]\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ndef main():\n setuptools.setup(\n name='mobly',\n version='1.11.1',\n maintainer='Ang Li',\n maintainer_email='[email protected]',\n description='Automation framework for special end-to-end test cases',\n license='Apache2.0',\n url='https://github.com/google/mobly',\n download_url='https://github.com/google/mobly/tarball/1.11.1',\n packages=setuptools.find_packages(exclude=['tests']),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n tests_require=[\n 'mock',\n 'pytest',\n 'pytz',\n ],\n install_requires=install_requires,\n cmdclass={'test': PyTest},\n )\n\n\nif __name__ == '__main__':\n main()\n", "path": "setup.py"}]}
870
281
gh_patches_debug_22271
rasdani/github-patches
git_diff
pydantic__pydantic-299
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Сreating a child model causes a RecursionError exception <!-- Questions, Feature Requests, and Bug Reports are all welcome --> <!-- delete as applicable: --> # Bug * OS: **Ubuntu 14.04** * Python version `import sys; print(sys.version)`: **3.6.7** * Pydantic version `import pydantic; print(pydantic.VERSION)`: **0.14** I'm expecting, that I can use a classic inheritance for dataclass models: ```py import pydantic.dataclasses @pydantic.dataclasses.dataclass class A: a: str = None @pydantic.dataclasses.dataclass class B(A): b: str = None B(a='a', b='b') ``` But as a result I'm receiving this: ``` Traceback (most recent call last): File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-9-10a7116ca691>", line 12, in <module> B(a='a', b='b') File "<string>", line 4, in __init__ File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/dataclasses.py", line 13, in post_init self.__post_init_original__() File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/dataclasses.py", line 13, in post_init self.__post_init_original__() File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/dataclasses.py", line 13, in post_init self.__post_init_original__() [Previous line repeated 952 more times] File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/dataclasses.py", line 9, in post_init d = validate_model(self.__pydantic_model__, self.__dict__) File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/main.py", line 484, in validate_model v_, errors_ = field.validate(value, values, loc=field.alias, cls=model.__class__) File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/fields.py", line 303, in validate v, errors = self._validate_singleton(v, values, loc, cls) File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/fields.py", line 406, in _validate_singleton return self._apply_validators(v, values, loc, cls, self.validators) File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/fields.py", line 412, in _apply_validators v = validator(v) File "/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/validators.py", line 23, in str_validator if isinstance(v, (str, NoneType)): RecursionError: maximum recursion depth exceeded in __instancecheck__ ``` This line below causes this problem: https://github.com/samuelcolvin/pydantic/blob/master/pydantic/dataclasses.py#L13 UPD: defining `__post_init__` in child model fixes the problem, but this is workaround. ```py import pydantic.dataclasses @pydantic.dataclasses.dataclass class A: a: str = None @pydantic.dataclasses.dataclass class B(A): b: str = None def __post_init__(): pass B(a='a', b='b') ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `pydantic/dataclasses.py` Content: ``` 1 import dataclasses 2 3 from pydantic import ValidationError 4 5 from .main import create_model, validate_model 6 7 8 def post_init(self): 9 d = validate_model(self.__pydantic_model__, self.__dict__) 10 object.__setattr__(self, '__dict__', d) 11 object.__setattr__(self, '__initialised__', True) 12 if self.__post_init_original__: 13 self.__post_init_original__() 14 15 16 def setattr_validate_assignment(self, name, value): 17 if self.__initialised__: 18 d = dict(self.__dict__) 19 d.pop(name) 20 value, error_ = self.__pydantic_model__.__fields__[name].validate(value, d, loc=name) 21 if error_: 22 raise ValidationError([error_]) 23 24 object.__setattr__(self, name, value) 25 26 27 def _process_class(_cls, init, repr, eq, order, unsafe_hash, frozen, validate_assignment): 28 post_init_original = getattr(_cls, '__post_init__', None) 29 _cls.__post_init__ = post_init 30 cls = dataclasses._process_class(_cls, init, repr, eq, order, unsafe_hash, frozen) 31 32 fields = {name: (field.type, field.default) for name, field in cls.__dataclass_fields__.items()} 33 cls.__post_init_original__ = post_init_original 34 cls.__pydantic_model__ = create_model(cls.__name__, **fields) 35 cls.__initialised__ = False 36 37 if validate_assignment and not frozen: 38 cls.__setattr__ = setattr_validate_assignment 39 return cls 40 41 42 def dataclass( 43 _cls=None, *, init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False, validate_assignment=False 44 ): 45 """ 46 Like the python standard lib dataclasses but with type validation. 47 48 Arguments are the same as for standard dataclasses, except for validate_assignment which has the same meaning 49 as Config.validate_assignment. 50 """ 51 52 def wrap(cls): 53 return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen, validate_assignment) 54 55 if _cls is None: 56 return wrap 57 58 return wrap(_cls) 59 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pydantic/dataclasses.py b/pydantic/dataclasses.py --- a/pydantic/dataclasses.py +++ b/pydantic/dataclasses.py @@ -5,7 +5,7 @@ from .main import create_model, validate_model -def post_init(self): +def _pydantic_post_init(self): d = validate_model(self.__pydantic_model__, self.__dict__) object.__setattr__(self, '__dict__', d) object.__setattr__(self, '__initialised__', True) @@ -26,7 +26,9 @@ def _process_class(_cls, init, repr, eq, order, unsafe_hash, frozen, validate_assignment): post_init_original = getattr(_cls, '__post_init__', None) - _cls.__post_init__ = post_init + if post_init_original and post_init_original.__name__ == '_pydantic_post_init': + post_init_original = None + _cls.__post_init__ = _pydantic_post_init cls = dataclasses._process_class(_cls, init, repr, eq, order, unsafe_hash, frozen) fields = {name: (field.type, field.default) for name, field in cls.__dataclass_fields__.items()}
{"golden_diff": "diff --git a/pydantic/dataclasses.py b/pydantic/dataclasses.py\n--- a/pydantic/dataclasses.py\n+++ b/pydantic/dataclasses.py\n@@ -5,7 +5,7 @@\n from .main import create_model, validate_model\n \n \n-def post_init(self):\n+def _pydantic_post_init(self):\n d = validate_model(self.__pydantic_model__, self.__dict__)\n object.__setattr__(self, '__dict__', d)\n object.__setattr__(self, '__initialised__', True)\n@@ -26,7 +26,9 @@\n \n def _process_class(_cls, init, repr, eq, order, unsafe_hash, frozen, validate_assignment):\n post_init_original = getattr(_cls, '__post_init__', None)\n- _cls.__post_init__ = post_init\n+ if post_init_original and post_init_original.__name__ == '_pydantic_post_init':\n+ post_init_original = None\n+ _cls.__post_init__ = _pydantic_post_init\n cls = dataclasses._process_class(_cls, init, repr, eq, order, unsafe_hash, frozen)\n \n fields = {name: (field.type, field.default) for name, field in cls.__dataclass_fields__.items()}\n", "issue": "\u0421reating a child model causes a RecursionError exception\n<!-- Questions, Feature Requests, and Bug Reports are all welcome -->\r\n<!-- delete as applicable: -->\r\n# Bug\r\n\r\n* OS: **Ubuntu 14.04**\r\n* Python version `import sys; print(sys.version)`: **3.6.7**\r\n* Pydantic version `import pydantic; print(pydantic.VERSION)`: **0.14**\r\n\r\nI'm expecting, that I can use a classic inheritance for dataclass models:\r\n```py\r\nimport pydantic.dataclasses\r\n\r\[email protected]\r\nclass A:\r\n a: str = None\r\n\r\[email protected]\r\nclass B(A):\r\n b: str = None\r\n\r\nB(a='a', b='b')\r\n```\r\n\r\nBut as a result I'm receiving this:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/IPython/core/interactiveshell.py\", line 2881, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-9-10a7116ca691>\", line 12, in <module>\r\n B(a='a', b='b')\r\n File \"<string>\", line 4, in __init__\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/dataclasses.py\", line 13, in post_init\r\n self.__post_init_original__()\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/dataclasses.py\", line 13, in post_init\r\n self.__post_init_original__()\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/dataclasses.py\", line 13, in post_init\r\n self.__post_init_original__()\r\n [Previous line repeated 952 more times]\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/dataclasses.py\", line 9, in post_init\r\n d = validate_model(self.__pydantic_model__, self.__dict__)\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/main.py\", line 484, in validate_model\r\n v_, errors_ = field.validate(value, values, loc=field.alias, cls=model.__class__)\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/fields.py\", line 303, in validate\r\n v, errors = self._validate_singleton(v, values, loc, cls)\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/fields.py\", line 406, in _validate_singleton\r\n return self._apply_validators(v, values, loc, cls, self.validators)\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/fields.py\", line 412, in _apply_validators\r\n v = validator(v)\r\n File \"/usr/local/pyenv/versions/3.6.7/lib/python3.6/site-packages/pydantic/validators.py\", line 23, in str_validator\r\n if 
isinstance(v, (str, NoneType)):\r\nRecursionError: maximum recursion depth exceeded in __instancecheck__\r\n```\r\nThis line below causes this problem:\r\nhttps://github.com/samuelcolvin/pydantic/blob/master/pydantic/dataclasses.py#L13\r\n\r\nUPD: defining `__post_init__` in child model fixes the problem, but this is workaround.\r\n```py\r\nimport pydantic.dataclasses\r\n\r\[email protected]\r\nclass A:\r\n a: str = None\r\n\r\[email protected]\r\nclass B(A):\r\n b: str = None\r\n\r\n def __post_init__():\r\n pass\r\n\r\nB(a='a', b='b')\r\n```\n", "before_files": [{"content": "import dataclasses\n\nfrom pydantic import ValidationError\n\nfrom .main import create_model, validate_model\n\n\ndef post_init(self):\n d = validate_model(self.__pydantic_model__, self.__dict__)\n object.__setattr__(self, '__dict__', d)\n object.__setattr__(self, '__initialised__', True)\n if self.__post_init_original__:\n self.__post_init_original__()\n\n\ndef setattr_validate_assignment(self, name, value):\n if self.__initialised__:\n d = dict(self.__dict__)\n d.pop(name)\n value, error_ = self.__pydantic_model__.__fields__[name].validate(value, d, loc=name)\n if error_:\n raise ValidationError([error_])\n\n object.__setattr__(self, name, value)\n\n\ndef _process_class(_cls, init, repr, eq, order, unsafe_hash, frozen, validate_assignment):\n post_init_original = getattr(_cls, '__post_init__', None)\n _cls.__post_init__ = post_init\n cls = dataclasses._process_class(_cls, init, repr, eq, order, unsafe_hash, frozen)\n\n fields = {name: (field.type, field.default) for name, field in cls.__dataclass_fields__.items()}\n cls.__post_init_original__ = post_init_original\n cls.__pydantic_model__ = create_model(cls.__name__, **fields)\n cls.__initialised__ = False\n\n if validate_assignment and not frozen:\n cls.__setattr__ = setattr_validate_assignment\n return cls\n\n\ndef dataclass(\n _cls=None, *, init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False, validate_assignment=False\n):\n \"\"\"\n Like the python standard lib dataclasses but with type validation.\n\n Arguments are the same as for standard dataclasses, except for validate_assignment which has the same meaning\n as Config.validate_assignment.\n \"\"\"\n\n def wrap(cls):\n return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen, validate_assignment)\n\n if _cls is None:\n return wrap\n\n return wrap(_cls)\n", "path": "pydantic/dataclasses.py"}], "after_files": [{"content": "import dataclasses\n\nfrom pydantic import ValidationError\n\nfrom .main import create_model, validate_model\n\n\ndef _pydantic_post_init(self):\n d = validate_model(self.__pydantic_model__, self.__dict__)\n object.__setattr__(self, '__dict__', d)\n object.__setattr__(self, '__initialised__', True)\n if self.__post_init_original__:\n self.__post_init_original__()\n\n\ndef setattr_validate_assignment(self, name, value):\n if self.__initialised__:\n d = dict(self.__dict__)\n d.pop(name)\n value, error_ = self.__pydantic_model__.__fields__[name].validate(value, d, loc=name)\n if error_:\n raise ValidationError([error_])\n\n object.__setattr__(self, name, value)\n\n\ndef _process_class(_cls, init, repr, eq, order, unsafe_hash, frozen, validate_assignment):\n post_init_original = getattr(_cls, '__post_init__', None)\n if post_init_original and post_init_original.__name__ == '_pydantic_post_init':\n post_init_original = None\n _cls.__post_init__ = _pydantic_post_init\n cls = dataclasses._process_class(_cls, init, repr, eq, order, unsafe_hash, frozen)\n\n fields = 
{name: (field.type, field.default) for name, field in cls.__dataclass_fields__.items()}\n cls.__post_init_original__ = post_init_original\n cls.__pydantic_model__ = create_model(cls.__name__, **fields)\n cls.__initialised__ = False\n\n if validate_assignment and not frozen:\n cls.__setattr__ = setattr_validate_assignment\n return cls\n\n\ndef dataclass(\n _cls=None, *, init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False, validate_assignment=False\n):\n \"\"\"\n Like the python standard lib dataclasses but with type validation.\n\n Arguments are the same as for standard dataclasses, except for validate_assignment which has the same meaning\n as Config.validate_assignment.\n \"\"\"\n\n def wrap(cls):\n return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen, validate_assignment)\n\n if _cls is None:\n return wrap\n\n return wrap(_cls)\n", "path": "pydantic/dataclasses.py"}]}
1,739
272
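The recursion in the record above happens because the decorator stores a parent class's injected `__post_init__` as if it were user code, so the hook ends up calling itself; the fix renames the injected hook and skips it by name. A self-contained restatement of that guard, outside pydantic — the helper name and toy classes are hypothetical:

```python
def _pydantic_post_init(self):
    pass  # validation body elided; only the function's name matters here

def capture_user_post_init(cls):
    original = getattr(cls, "__post_init__", None)
    if original is not None and original.__name__ == "_pydantic_post_init":
        original = None  # it is the injected hook, not a user-defined one
    return original

class A:
    __post_init__ = _pydantic_post_init  # what a decorated parent contributes

class B(A):
    pass

# Without the name check, B would treat the parent's injected hook as its
# "original" __post_init__ and recurse; with it, nothing is chained.
assert capture_user_post_init(B) is None
```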
gh_patches_debug_18778
rasdani/github-patches
git_diff
vacanza__python-holidays-1782
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- The calculation of the lunar start date throws a TypeError exception for Thailand and Cambodia version: 0.47 stack trace: "python39\lib\site-packages\holidays\calendars\thai.py", line 233, in _get_start_date return _ThaiLunisolar.START_DATE + td(days=delta_days) TypeError: unsupported type for timedelta days component: numpy.int32 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `holidays/helpers.py` Content: ``` 1 # holidays 2 # -------- 3 # A fast, efficient Python library for generating country, province and state 4 # specific sets of holidays on the fly. It aims to make determining whether a 5 # specific date is a holiday as fast and flexible as possible. 6 # 7 # Authors: Vacanza Team and individual contributors (see AUTHORS file) 8 # dr-prodigy <[email protected]> (c) 2017-2023 9 # ryanss <[email protected]> (c) 2014-2017 10 # Website: https://github.com/vacanza/python-holidays 11 # License: MIT (see LICENSE file) 12 13 14 def _normalize_arguments(cls, value): 15 """Normalize arguments. 16 17 :param cls: 18 A type of arguments to normalize. 19 20 :param value: 21 Either a single item or an iterable of `cls` type. 22 23 :return: 24 A set created from `value` argument. 25 26 """ 27 if isinstance(value, cls): 28 return {value} 29 30 return set(value) if value is not None else set() 31 32 33 def _normalize_tuple(data): 34 """Normalize tuple. 35 36 :param data: 37 Either a tuple or a tuple of tuples. 38 39 :return: 40 An unchanged object for tuple of tuples, e.g., ((JAN, 10), (DEC, 31)). 41 An object put into a tuple otherwise, e.g., ((JAN, 10),). 42 """ 43 return data if not data or isinstance(data[0], tuple) else (data,) 44 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/holidays/helpers.py b/holidays/helpers.py --- a/holidays/helpers.py +++ b/holidays/helpers.py @@ -24,13 +24,19 @@ A set created from `value` argument. """ + if value is None: + return set() + if isinstance(value, cls): return {value} - return set(value) if value is not None else set() + try: + return {v if isinstance(v, cls) else cls(v) for v in value} + except TypeError: # non-iterable + return {value if isinstance(value, cls) else cls(value)} -def _normalize_tuple(data): +def _normalize_tuple(value): """Normalize tuple. :param data: @@ -40,4 +46,4 @@ An unchanged object for tuple of tuples, e.g., ((JAN, 10), (DEC, 31)). An object put into a tuple otherwise, e.g., ((JAN, 10),). """ - return data if not data or isinstance(data[0], tuple) else (data,) + return value if not value or isinstance(value[0], tuple) else (value,)
{"golden_diff": "diff --git a/holidays/helpers.py b/holidays/helpers.py\n--- a/holidays/helpers.py\n+++ b/holidays/helpers.py\n@@ -24,13 +24,19 @@\n A set created from `value` argument.\n \n \"\"\"\n+ if value is None:\n+ return set()\n+\n if isinstance(value, cls):\n return {value}\n \n- return set(value) if value is not None else set()\n+ try:\n+ return {v if isinstance(v, cls) else cls(v) for v in value}\n+ except TypeError: # non-iterable\n+ return {value if isinstance(value, cls) else cls(value)}\n \n \n-def _normalize_tuple(data):\n+def _normalize_tuple(value):\n \"\"\"Normalize tuple.\n \n :param data:\n@@ -40,4 +46,4 @@\n An unchanged object for tuple of tuples, e.g., ((JAN, 10), (DEC, 31)).\n An object put into a tuple otherwise, e.g., ((JAN, 10),).\n \"\"\"\n- return data if not data or isinstance(data[0], tuple) else (data,)\n+ return value if not value or isinstance(value[0], tuple) else (value,)\n", "issue": "The calculation of the lunar start date throws a TypeError exception for Thailand and Cambodia\nversion: 0.47\r\nstack trace:\r\n\"python39\\lib\\site-packages\\holidays\\calendars\\thai.py\", line 233, in _get_start_date\r\n return _ThaiLunisolar.START_DATE + td(days=delta_days)\r\nTypeError: unsupported type for timedelta days component: numpy.int32\n", "before_files": [{"content": "# holidays\n# --------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Authors: Vacanza Team and individual contributors (see AUTHORS file)\n# dr-prodigy <[email protected]> (c) 2017-2023\n# ryanss <[email protected]> (c) 2014-2017\n# Website: https://github.com/vacanza/python-holidays\n# License: MIT (see LICENSE file)\n\n\ndef _normalize_arguments(cls, value):\n \"\"\"Normalize arguments.\n\n :param cls:\n A type of arguments to normalize.\n\n :param value:\n Either a single item or an iterable of `cls` type.\n\n :return:\n A set created from `value` argument.\n\n \"\"\"\n if isinstance(value, cls):\n return {value}\n\n return set(value) if value is not None else set()\n\n\ndef _normalize_tuple(data):\n \"\"\"Normalize tuple.\n\n :param data:\n Either a tuple or a tuple of tuples.\n\n :return:\n An unchanged object for tuple of tuples, e.g., ((JAN, 10), (DEC, 31)).\n An object put into a tuple otherwise, e.g., ((JAN, 10),).\n \"\"\"\n return data if not data or isinstance(data[0], tuple) else (data,)\n", "path": "holidays/helpers.py"}], "after_files": [{"content": "# holidays\n# --------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. 
It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Authors: Vacanza Team and individual contributors (see AUTHORS file)\n# dr-prodigy <[email protected]> (c) 2017-2023\n# ryanss <[email protected]> (c) 2014-2017\n# Website: https://github.com/vacanza/python-holidays\n# License: MIT (see LICENSE file)\n\n\ndef _normalize_arguments(cls, value):\n \"\"\"Normalize arguments.\n\n :param cls:\n A type of arguments to normalize.\n\n :param value:\n Either a single item or an iterable of `cls` type.\n\n :return:\n A set created from `value` argument.\n\n \"\"\"\n if value is None:\n return set()\n\n if isinstance(value, cls):\n return {value}\n\n try:\n return {v if isinstance(v, cls) else cls(v) for v in value}\n except TypeError: # non-iterable\n return {value if isinstance(value, cls) else cls(value)}\n\n\ndef _normalize_tuple(value):\n \"\"\"Normalize tuple.\n\n :param data:\n Either a tuple or a tuple of tuples.\n\n :return:\n An unchanged object for tuple of tuples, e.g., ((JAN, 10), (DEC, 31)).\n An object put into a tuple otherwise, e.g., ((JAN, 10),).\n \"\"\"\n return value if not value or isinstance(value[0], tuple) else (value,)\n", "path": "holidays/helpers.py"}]}
776
271
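The `TypeError` in the record above comes from handing a numpy integer to `datetime.timedelta`, and the diff addresses it by coercing normalized arguments with `cls(v)` so plain Python values flow downstream. A small sketch of the failure mode and the coercion, independent of the holidays package (the day count is a placeholder):

```python
from datetime import timedelta

import numpy as np

delta_days = np.int32(233)  # e.g. an offset computed with numpy arithmetic

# timedelta wants plain int/float components; a numpy scalar triggers the
# "unsupported type for timedelta days component" error quoted in the issue.
try:
    timedelta(days=delta_days)
except TypeError:
    pass

# Coercing first — the same idea as the cls(v) coercion added to
# _normalize_arguments in the golden diff — avoids the failure.
assert timedelta(days=int(delta_days)).days == 233
```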
gh_patches_debug_7912
rasdani/github-patches
git_diff
mars-project__mars-1502
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [BUG]Cannot pass zero-dimensional tensor as both arguments into `mt.arctan2` ## Describe the bug When I use the `mt.arctan2`, if i pass zero-dimensional tensors as both arguments, an error will be raised. If any argument is not zero-dimensional, the codes will work. ## To Reproduce ``` vx = mt.dot((1,0,0),(0,1,0)) vy = mt.dot((1,0,0),(0,0,1)) t = mt.arctan2(vx, vy) --------------------------------------------------------------------------- IndexError Traceback (most recent call last) ~/anaconda3/lib/python3.7/site-packages/mars/core.py in __len__(self) 533 try: --> 534 return self.shape[0] 535 except IndexError: IndexError: tuple index out of range During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) <ipython-input-23-09c63447ea86> in <module> ----> 1 mt.arctan2(vx, vy) ~/anaconda3/lib/python3.7/site-packages/mars/tensor/utils.py in h(*tensors, **kw) 256 kw['dtype'] = dtype 257 --> 258 ret = func(*tensors, **kw) 259 if ret is NotImplemented: 260 reverse_func = getattr(inspect.getmodule(func), 'r{0}'.format(func.__name__), None) \ ~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/arctan2.py in arctan2(x1, x2, out, where, **kwargs) 125 """ 126 op = TensorArctan2(**kwargs) --> 127 return op(x1, x2, out=out, where=where) ~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/core.py in __call__(self, x1, x2, out, where) 268 269 def __call__(self, x1, x2, out=None, where=None): --> 270 return self._call(x1, x2, out=out, where=where) 271 272 def rcall(self, x1, x2, out=None, where=None): ~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/core.py in _call(self, x1, x2, out, where) 251 252 inputs = filter_inputs([x1, x2, out, where]) --> 253 t = self.new_tensor(inputs, shape, order=order) 254 255 if out is None: ~/anaconda3/lib/python3.7/site-packages/mars/tensor/operands.py in new_tensor(self, inputs, shape, dtype, order, **kw) 77 raise TypeError('cannot new tensor with more than 1 outputs') 78 ---> 79 return self.new_tensors(inputs, shape=shape, dtype=dtype, order=order, **kw)[0] 80 81 @classmethod ~/anaconda3/lib/python3.7/site-packages/mars/tensor/operands.py in new_tensors(self, inputs, shape, dtype, order, chunks, nsplits, output_limit, kws, **kw) 71 output_limit=None, kws=None, **kw): 72 return self.new_tileables(inputs, shape=shape, chunks=chunks, nsplits=nsplits, ---> 73 output_limit=output_limit, kws=kws, dtype=dtype, order=order, **kw) 74 75 def new_tensor(self, inputs, shape, dtype=None, order=None, **kw): ~/anaconda3/lib/python3.7/site-packages/mars/operands.py in new_tileables(self, inputs, kws, **kw) 352 """ 353 --> 354 tileables = self._new_tileables(inputs, kws=kws, **kw) 355 if is_eager_mode(): 356 ExecutableTuple(tileables).execute(fetch=False) ~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/core.py in _new_tileables(self, inputs, kws, **kw) 70 71 def _new_tileables(self, inputs, kws=None, **kw): ---> 72 self._set_sparse(inputs) 73 return super()._new_tileables( 74 inputs, kws=kws, **kw) ~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/core.py in _set_sparse(self, inputs) 188 x1 = self._lhs if np.isscalar(self._lhs) else next(inputs_iter) 189 x2 = self._rhs if np.isscalar(self._rhs) else next(inputs_iter) --> 190 setattr(self, '_sparse', self._is_sparse(x1, x2)) 191 192 def _set_inputs(self, inputs): 
~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/arctan2.py in _is_sparse(cls, x1, x2) 33 if hasattr(x1, 'issparse') and x1.issparse() and np.isscalar(x2): 34 return True ---> 35 elif x1 == 0: 36 return True 37 return False ~/anaconda3/lib/python3.7/site-packages/mars/tensor/core.py in __len__(self) 279 280 def __len__(self): --> 281 return len(self._data) 282 283 @property ~/anaconda3/lib/python3.7/site-packages/mars/core.py in __len__(self) 536 if build_mode().is_build_mode: 537 return 0 --> 538 raise TypeError('len() of unsized object') 539 540 @property TypeError: len() of unsized object ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `mars/tensor/arithmetic/arctan2.py` Content: ``` 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 # Copyright 1999-2020 Alibaba Group Holding Ltd. 4 # 5 # Licensed under the Apache License, Version 2.0 (the "License"); 6 # you may not use this file except in compliance with the License. 7 # You may obtain a copy of the License at 8 # 9 # http://www.apache.org/licenses/LICENSE-2.0 10 # 11 # Unless required by applicable law or agreed to in writing, software 12 # distributed under the License is distributed on an "AS IS" BASIS, 13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 # See the License for the specific language governing permissions and 15 # limitations under the License. 16 17 import numpy as np 18 19 from ... import opcodes as OperandDef 20 from ..utils import infer_dtype 21 from .core import TensorBinOp 22 from .utils import arithmetic_operand 23 24 25 @arithmetic_operand 26 class TensorArctan2(TensorBinOp): 27 _op_type_ = OperandDef.ARCTAN2 28 _func_name = 'arctan2' 29 30 @classmethod 31 def _is_sparse(cls, x1, x2): 32 # x2 is sparse or not does not matter 33 if hasattr(x1, 'issparse') and x1.issparse() and np.isscalar(x2): 34 return True 35 elif x1 == 0: 36 return True 37 return False 38 39 40 @infer_dtype(np.arctan2) 41 def arctan2(x1, x2, out=None, where=None, **kwargs): 42 """ 43 Element-wise arc tangent of ``x1/x2`` choosing the quadrant correctly. 44 45 The quadrant (i.e., branch) is chosen so that ``arctan2(x1, x2)`` is 46 the signed angle in radians between the ray ending at the origin and 47 passing through the point (1,0), and the ray ending at the origin and 48 passing through the point (`x2`, `x1`). (Note the role reversal: the 49 "`y`-coordinate" is the first function parameter, the "`x`-coordinate" 50 is the second.) By IEEE convention, this function is defined for 51 `x2` = +/-0 and for either or both of `x1` and `x2` = +/-inf (see 52 Notes for specific values). 53 54 This function is not defined for complex-valued arguments; for the 55 so-called argument of complex values, use `angle`. 56 57 Parameters 58 ---------- 59 x1 : array_like, real-valued 60 `y`-coordinates. 61 x2 : array_like, real-valued 62 `x`-coordinates. `x2` must be broadcastable to match the shape of 63 `x1` or vice versa. 64 out : Tensor, None, or tuple of Tensor and None, optional 65 A location into which the result is stored. If provided, it must have 66 a shape that the inputs broadcast to. If not provided or `None`, 67 a freshly-allocated tensor is returned. A tuple (possible only as a 68 keyword argument) must have length equal to the number of outputs. 
69 where : array_like, optional 70 Values of True indicate to calculate the ufunc at that position, values 71 of False indicate to leave the value in the output alone. 72 **kwargs 73 74 Returns 75 ------- 76 angle : Tensor 77 Array of angles in radians, in the range ``[-pi, pi]``. 78 79 See Also 80 -------- 81 arctan, tan, angle 82 83 Notes 84 ----- 85 *arctan2* is identical to the `atan2` function of the underlying 86 C library. The following special values are defined in the C 87 standard: [1]_ 88 89 ====== ====== ================ 90 `x1` `x2` `arctan2(x1,x2)` 91 ====== ====== ================ 92 +/- 0 +0 +/- 0 93 +/- 0 -0 +/- pi 94 > 0 +/-inf +0 / +pi 95 < 0 +/-inf -0 / -pi 96 +/-inf +inf +/- (pi/4) 97 +/-inf -inf +/- (3*pi/4) 98 ====== ====== ================ 99 100 Note that +0 and -0 are distinct floating point numbers, as are +inf 101 and -inf. 102 103 References 104 ---------- 105 .. [1] ISO/IEC standard 9899:1999, "Programming language C." 106 107 Examples 108 -------- 109 Consider four points in different quadrants: 110 >>> import mars.tensor as mt 111 112 >>> x = mt.array([-1, +1, +1, -1]) 113 >>> y = mt.array([-1, -1, +1, +1]) 114 >>> (mt.arctan2(y, x) * 180 / mt.pi).execute() 115 array([-135., -45., 45., 135.]) 116 117 Note the order of the parameters. `arctan2` is defined also when `x2` = 0 118 and at several other special points, obtaining values in 119 the range ``[-pi, pi]``: 120 121 >>> mt.arctan2([1., -1.], [0., 0.]).execute() 122 array([ 1.57079633, -1.57079633]) 123 >>> mt.arctan2([0., 0., mt.inf], [+0., -0., mt.inf]).execute() 124 array([ 0. , 3.14159265, 0.78539816]) 125 """ 126 op = TensorArctan2(**kwargs) 127 return op(x1, x2, out=out, where=where) 128 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/mars/tensor/arithmetic/arctan2.py b/mars/tensor/arithmetic/arctan2.py --- a/mars/tensor/arithmetic/arctan2.py +++ b/mars/tensor/arithmetic/arctan2.py @@ -29,11 +29,12 @@ @classmethod def _is_sparse(cls, x1, x2): - # x2 is sparse or not does not matter - if hasattr(x1, 'issparse') and x1.issparse() and np.isscalar(x2): - return True - elif x1 == 0: + if hasattr(x1, 'issparse') and x1.issparse(): + # if x1 is sparse, will be sparse always return True + elif np.isscalar(x1) and x1 == 0: + # x1 == 0, return sparse if x2 is + return x2.issparse() if hasattr(x2, 'issparse') else False return False
{"golden_diff": "diff --git a/mars/tensor/arithmetic/arctan2.py b/mars/tensor/arithmetic/arctan2.py\n--- a/mars/tensor/arithmetic/arctan2.py\n+++ b/mars/tensor/arithmetic/arctan2.py\n@@ -29,11 +29,12 @@\n \n @classmethod\n def _is_sparse(cls, x1, x2):\n- # x2 is sparse or not does not matter\n- if hasattr(x1, 'issparse') and x1.issparse() and np.isscalar(x2):\n- return True\n- elif x1 == 0:\n+ if hasattr(x1, 'issparse') and x1.issparse():\n+ # if x1 is sparse, will be sparse always\n return True\n+ elif np.isscalar(x1) and x1 == 0:\n+ # x1 == 0, return sparse if x2 is\n+ return x2.issparse() if hasattr(x2, 'issparse') else False\n return False\n", "issue": "[BUG]Cannot pass zero-dimensional tensor as both arguments into `mt.arctan2`\n## Describe the bug\r\nWhen I use the `mt.arctan2`, if i pass zero-dimensional tensors as both arguments, an error will be raised. If any argument is not zero-dimensional, the codes will work.\r\n\r\n## To Reproduce\r\n```\r\nvx = mt.dot((1,0,0),(0,1,0))\r\nvy = mt.dot((1,0,0),(0,0,1))\r\nt = mt.arctan2(vx, vy)\r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n~/anaconda3/lib/python3.7/site-packages/mars/core.py in __len__(self)\r\n 533 try:\r\n--> 534 return self.shape[0]\r\n 535 except IndexError:\r\n\r\nIndexError: tuple index out of range\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-23-09c63447ea86> in <module>\r\n----> 1 mt.arctan2(vx, vy)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/utils.py in h(*tensors, **kw)\r\n 256 kw['dtype'] = dtype\r\n 257 \r\n--> 258 ret = func(*tensors, **kw)\r\n 259 if ret is NotImplemented:\r\n 260 reverse_func = getattr(inspect.getmodule(func), 'r{0}'.format(func.__name__), None) \\\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/arctan2.py in arctan2(x1, x2, out, where, **kwargs)\r\n 125 \"\"\"\r\n 126 op = TensorArctan2(**kwargs)\r\n--> 127 return op(x1, x2, out=out, where=where)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/core.py in __call__(self, x1, x2, out, where)\r\n 268 \r\n 269 def __call__(self, x1, x2, out=None, where=None):\r\n--> 270 return self._call(x1, x2, out=out, where=where)\r\n 271 \r\n 272 def rcall(self, x1, x2, out=None, where=None):\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/core.py in _call(self, x1, x2, out, where)\r\n 251 \r\n 252 inputs = filter_inputs([x1, x2, out, where])\r\n--> 253 t = self.new_tensor(inputs, shape, order=order)\r\n 254 \r\n 255 if out is None:\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/operands.py in new_tensor(self, inputs, shape, dtype, order, **kw)\r\n 77 raise TypeError('cannot new tensor with more than 1 outputs')\r\n 78 \r\n---> 79 return self.new_tensors(inputs, shape=shape, dtype=dtype, order=order, **kw)[0]\r\n 80 \r\n 81 @classmethod\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/operands.py in new_tensors(self, inputs, shape, dtype, order, chunks, nsplits, output_limit, kws, **kw)\r\n 71 output_limit=None, kws=None, **kw):\r\n 72 return self.new_tileables(inputs, shape=shape, chunks=chunks, nsplits=nsplits,\r\n---> 73 output_limit=output_limit, kws=kws, dtype=dtype, order=order, **kw)\r\n 74 \r\n 75 def new_tensor(self, inputs, shape, dtype=None, order=None, **kw):\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/operands.py in new_tileables(self, inputs, kws, **kw)\r\n 
352 \"\"\"\r\n 353 \r\n--> 354 tileables = self._new_tileables(inputs, kws=kws, **kw)\r\n 355 if is_eager_mode():\r\n 356 ExecutableTuple(tileables).execute(fetch=False)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/core.py in _new_tileables(self, inputs, kws, **kw)\r\n 70 \r\n 71 def _new_tileables(self, inputs, kws=None, **kw):\r\n---> 72 self._set_sparse(inputs)\r\n 73 return super()._new_tileables(\r\n 74 inputs, kws=kws, **kw)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/core.py in _set_sparse(self, inputs)\r\n 188 x1 = self._lhs if np.isscalar(self._lhs) else next(inputs_iter)\r\n 189 x2 = self._rhs if np.isscalar(self._rhs) else next(inputs_iter)\r\n--> 190 setattr(self, '_sparse', self._is_sparse(x1, x2))\r\n 191 \r\n 192 def _set_inputs(self, inputs):\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/arithmetic/arctan2.py in _is_sparse(cls, x1, x2)\r\n 33 if hasattr(x1, 'issparse') and x1.issparse() and np.isscalar(x2):\r\n 34 return True\r\n---> 35 elif x1 == 0:\r\n 36 return True\r\n 37 return False\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/tensor/core.py in __len__(self)\r\n 279 \r\n 280 def __len__(self):\r\n--> 281 return len(self._data)\r\n 282 \r\n 283 @property\r\n\r\n~/anaconda3/lib/python3.7/site-packages/mars/core.py in __len__(self)\r\n 536 if build_mode().is_build_mode:\r\n 537 return 0\r\n--> 538 raise TypeError('len() of unsized object')\r\n 539 \r\n 540 @property\r\n\r\nTypeError: len() of unsized object\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2020 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\n\nfrom ... import opcodes as OperandDef\nfrom ..utils import infer_dtype\nfrom .core import TensorBinOp\nfrom .utils import arithmetic_operand\n\n\n@arithmetic_operand\nclass TensorArctan2(TensorBinOp):\n _op_type_ = OperandDef.ARCTAN2\n _func_name = 'arctan2'\n\n @classmethod\n def _is_sparse(cls, x1, x2):\n # x2 is sparse or not does not matter\n if hasattr(x1, 'issparse') and x1.issparse() and np.isscalar(x2):\n return True\n elif x1 == 0:\n return True\n return False\n\n\n@infer_dtype(np.arctan2)\ndef arctan2(x1, x2, out=None, where=None, **kwargs):\n \"\"\"\n Element-wise arc tangent of ``x1/x2`` choosing the quadrant correctly.\n\n The quadrant (i.e., branch) is chosen so that ``arctan2(x1, x2)`` is\n the signed angle in radians between the ray ending at the origin and\n passing through the point (1,0), and the ray ending at the origin and\n passing through the point (`x2`, `x1`). (Note the role reversal: the\n \"`y`-coordinate\" is the first function parameter, the \"`x`-coordinate\"\n is the second.) 
By IEEE convention, this function is defined for\n `x2` = +/-0 and for either or both of `x1` and `x2` = +/-inf (see\n Notes for specific values).\n\n This function is not defined for complex-valued arguments; for the\n so-called argument of complex values, use `angle`.\n\n Parameters\n ----------\n x1 : array_like, real-valued\n `y`-coordinates.\n x2 : array_like, real-valued\n `x`-coordinates. `x2` must be broadcastable to match the shape of\n `x1` or vice versa.\n out : Tensor, None, or tuple of Tensor and None, optional\n A location into which the result is stored. If provided, it must have\n a shape that the inputs broadcast to. If not provided or `None`,\n a freshly-allocated tensor is returned. A tuple (possible only as a\n keyword argument) must have length equal to the number of outputs.\n where : array_like, optional\n Values of True indicate to calculate the ufunc at that position, values\n of False indicate to leave the value in the output alone.\n **kwargs\n\n Returns\n -------\n angle : Tensor\n Array of angles in radians, in the range ``[-pi, pi]``.\n\n See Also\n --------\n arctan, tan, angle\n\n Notes\n -----\n *arctan2* is identical to the `atan2` function of the underlying\n C library. The following special values are defined in the C\n standard: [1]_\n\n ====== ====== ================\n `x1` `x2` `arctan2(x1,x2)`\n ====== ====== ================\n +/- 0 +0 +/- 0\n +/- 0 -0 +/- pi\n > 0 +/-inf +0 / +pi\n < 0 +/-inf -0 / -pi\n +/-inf +inf +/- (pi/4)\n +/-inf -inf +/- (3*pi/4)\n ====== ====== ================\n\n Note that +0 and -0 are distinct floating point numbers, as are +inf\n and -inf.\n\n References\n ----------\n .. [1] ISO/IEC standard 9899:1999, \"Programming language C.\"\n\n Examples\n --------\n Consider four points in different quadrants:\n >>> import mars.tensor as mt\n\n >>> x = mt.array([-1, +1, +1, -1])\n >>> y = mt.array([-1, -1, +1, +1])\n >>> (mt.arctan2(y, x) * 180 / mt.pi).execute()\n array([-135., -45., 45., 135.])\n\n Note the order of the parameters. `arctan2` is defined also when `x2` = 0\n and at several other special points, obtaining values in\n the range ``[-pi, pi]``:\n\n >>> mt.arctan2([1., -1.], [0., 0.]).execute()\n array([ 1.57079633, -1.57079633])\n >>> mt.arctan2([0., 0., mt.inf], [+0., -0., mt.inf]).execute()\n array([ 0. , 3.14159265, 0.78539816])\n \"\"\"\n op = TensorArctan2(**kwargs)\n return op(x1, x2, out=out, where=where)\n", "path": "mars/tensor/arithmetic/arctan2.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2020 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\n\nfrom ... 
import opcodes as OperandDef\nfrom ..utils import infer_dtype\nfrom .core import TensorBinOp\nfrom .utils import arithmetic_operand\n\n\n@arithmetic_operand\nclass TensorArctan2(TensorBinOp):\n _op_type_ = OperandDef.ARCTAN2\n _func_name = 'arctan2'\n\n @classmethod\n def _is_sparse(cls, x1, x2):\n if hasattr(x1, 'issparse') and x1.issparse():\n # if x1 is sparse, will be sparse always\n return True\n elif np.isscalar(x1) and x1 == 0:\n # x1 == 0, return sparse if x2 is\n return x2.issparse() if hasattr(x2, 'issparse') else False\n return False\n\n\n@infer_dtype(np.arctan2)\ndef arctan2(x1, x2, out=None, where=None, **kwargs):\n \"\"\"\n Element-wise arc tangent of ``x1/x2`` choosing the quadrant correctly.\n\n The quadrant (i.e., branch) is chosen so that ``arctan2(x1, x2)`` is\n the signed angle in radians between the ray ending at the origin and\n passing through the point (1,0), and the ray ending at the origin and\n passing through the point (`x2`, `x1`). (Note the role reversal: the\n \"`y`-coordinate\" is the first function parameter, the \"`x`-coordinate\"\n is the second.) By IEEE convention, this function is defined for\n `x2` = +/-0 and for either or both of `x1` and `x2` = +/-inf (see\n Notes for specific values).\n\n This function is not defined for complex-valued arguments; for the\n so-called argument of complex values, use `angle`.\n\n Parameters\n ----------\n x1 : array_like, real-valued\n `y`-coordinates.\n x2 : array_like, real-valued\n `x`-coordinates. `x2` must be broadcastable to match the shape of\n `x1` or vice versa.\n out : Tensor, None, or tuple of Tensor and None, optional\n A location into which the result is stored. If provided, it must have\n a shape that the inputs broadcast to. If not provided or `None`,\n a freshly-allocated tensor is returned. A tuple (possible only as a\n keyword argument) must have length equal to the number of outputs.\n where : array_like, optional\n Values of True indicate to calculate the ufunc at that position, values\n of False indicate to leave the value in the output alone.\n **kwargs\n\n Returns\n -------\n angle : Tensor\n Array of angles in radians, in the range ``[-pi, pi]``.\n\n See Also\n --------\n arctan, tan, angle\n\n Notes\n -----\n *arctan2* is identical to the `atan2` function of the underlying\n C library. The following special values are defined in the C\n standard: [1]_\n\n ====== ====== ================\n `x1` `x2` `arctan2(x1,x2)`\n ====== ====== ================\n +/- 0 +0 +/- 0\n +/- 0 -0 +/- pi\n > 0 +/-inf +0 / +pi\n < 0 +/-inf -0 / -pi\n +/-inf +inf +/- (pi/4)\n +/-inf -inf +/- (3*pi/4)\n ====== ====== ================\n\n Note that +0 and -0 are distinct floating point numbers, as are +inf\n and -inf.\n\n References\n ----------\n .. [1] ISO/IEC standard 9899:1999, \"Programming language C.\"\n\n Examples\n --------\n Consider four points in different quadrants:\n >>> import mars.tensor as mt\n\n >>> x = mt.array([-1, +1, +1, -1])\n >>> y = mt.array([-1, -1, +1, +1])\n >>> (mt.arctan2(y, x) * 180 / mt.pi).execute()\n array([-135., -45., 45., 135.])\n\n Note the order of the parameters. `arctan2` is defined also when `x2` = 0\n and at several other special points, obtaining values in\n the range ``[-pi, pi]``:\n\n >>> mt.arctan2([1., -1.], [0., 0.]).execute()\n array([ 1.57079633, -1.57079633])\n >>> mt.arctan2([0., 0., mt.inf], [+0., -0., mt.inf]).execute()\n array([ 0. 
, 3.14159265, 0.78539816])\n \"\"\"\n op = TensorArctan2(**kwargs)\n return op(x1, x2, out=out, where=where)\n", "path": "mars/tensor/arithmetic/arctan2.py"}]}
3,290
235
gh_patches_debug_19646
rasdani/github-patches
git_diff
netbox-community__netbox-16351
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Staged Changes: InventoryItem Fails Sync ### NetBox version v3.5.1 ### Python version 3.10 ### Steps to Reproduce Currently when working with staged changes there is a edge-case where the staged changes failed on the merge. I believe this is because of the serialiser removing key parts of the data required to recreate the device when `merge()` is called. https://github.com/netbox-community/netbox/blob/f5a1f83f9fa9d98c945d21eb0f7ccb8cd37fbf59/netbox/netbox/staging.py#L119 And when `serialize_object` is called it removes all MPTT model fields: https://github.com/netbox-community/netbox/blob/f5a1f83f9fa9d98c945d21eb0f7ccb8cd37fbf59/netbox/utilities/utils.py#L157 This causes the merge to fail because the MPTT fields are now null. In the staged changes table. Error message: ``` >>> Branch.objects.all()[0].merge() TEST <RestrictedQuerySet [<StagedChange: Create dcim.manufacturer (182)>, <StagedChange: Create dcim.devicetype (177)>, <StagedChange: Create dcim.devicerole (120)>, <StagedChange: Update dcim.site (124)>, <StagedChange: Create dcim.device (9608)>, <StagedChange: Create dcim.device (9609)>, <StagedChange: Create dcim.inventoryitem (848)>, <StagedChange: Create dcim.inventoryitem (849)>]> Create dcim.manufacturer (182) Create dcim.devicetype (177) Create dcim.devicerole (120) Update dcim.site (124) Create dcim.device (9608) Create dcim.device (9609) Create dcim.inventoryitem (848) Traceback (most recent call last): File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute return self.cursor.execute(sql, params) psycopg2.errors.NotNullViolation: null value in column "lft" of relation "dcim_inventoryitem" violates not-null constraint DETAIL: Failing row contains (2023-08-09 12:54:07.062+00, 2023-08-09 12:54:07.062+00, {}, 848, Inventory Item 1, , , , , , null, f, null, null, null, null, 9608, null, null, null, null, null). 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<console>", line 1, in <module> File "/opt/netbox/netbox/extras/models/staging.py", line 52, in merge change.apply() File "/opt/netbox/netbox/extras/models/staging.py", line 107, in apply instance.save() File "/opt/netbox/venv/lib/python3.10/site-packages/django/core/serializers/base.py", line 288, in save models.Model.save_base(self.object, using=using, raw=True, **kwargs) File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/base.py", line 863, in save_base updated = self._save_table( File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/base.py", line 1006, in _save_table results = self._do_insert( File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/base.py", line 1047, in _do_insert return manager._insert( File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/query.py", line 1791, in _insert return query.get_compiler(using=using).execute_sql(returning_fields) File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1660, in execute_sql cursor.execute(sql, params) File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 102, in execute return super().execute(sql, params) File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 67, in execute return self._execute_with_wrappers( File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers return executor(sql, params, many, context) File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 84, in _execute with self.db.wrap_database_errors: File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/utils.py", line 91, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute return self.cursor.execute(sql, params) django.db.utils.IntegrityError: null value in column "lft" of relation "dcim_inventoryitem" violates not-null constraint DETAIL: Failing row contains (2023-08-09 12:54:07.062+00, 2023-08-09 12:54:07.062+00, {}, 848, Inventory Item 1, , , , , , null, f, null, null, null, null, 9608, null, null, null, null, null). 
``` Script to recreate: ```python from netbox.staging import checkout from extras.models import Branch from dcim.models import Device, VirtualChassis, DeviceType, Manufacturer, Site, DeviceRole, InventoryItem Branch.objects.all().delete() branch = Branch.objects.create(name='Branch 1') Device.objects.filter(name="test device").delete() Device.objects.filter(name="test device1").delete() VirtualChassis.objects.filter(name='VC1').delete() DeviceType.objects.filter(slug='device-type-1').delete() Manufacturer.objects.filter(slug='manufacturer-1').delete() DeviceRole.objects.filter(slug='device-role-1').delete() with checkout(branch): manufacturer = Manufacturer.objects.create( name='Manufacturer', slug='manufacturer-1') device_type = DeviceType.objects.create( manufacturer=manufacturer, model='Device Type 1', slug='device-type-1' ) device_role = DeviceRole.objects.create( name='Device Role', slug='device-role-1' ) site, _ = Site.objects.update_or_create( defaults={'name': "test"}, slug="test") device, _ = Device.objects.update_or_create(defaults={ "name": "test device", "device_role": device_role, "device_type": device_type, "site": site}, serial="123") device1, _ = Device.objects.update_or_create(defaults={ "name": "test device1", "device_role": device_role, "device_type": device_type, "site": site}, serial="543") inv1 = InventoryItem.objects.create(device=device, name='Inventory Item 1'), inv2 = InventoryItem.objects.create(device=device1, name='Inventory Item 2'), branch.merge() print("DONE") ``` ### Expected Behavior If there is a serializer or argument to pass to the serializer that does not remove key fields to recreate objects from the staged changes table. ### Observed Behavior The serialized object has key fields removed which are required to save the object. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `netbox/extras/models/staging.py` Content: ``` 1 import logging 2 3 from django.contrib.auth import get_user_model 4 from django.contrib.contenttypes.fields import GenericForeignKey 5 from django.db import models, transaction 6 from django.utils.translation import gettext_lazy as _ 7 8 from extras.choices import ChangeActionChoices 9 from netbox.models import ChangeLoggedModel 10 from netbox.models.features import * 11 from utilities.serialization import deserialize_object 12 13 __all__ = ( 14 'Branch', 15 'StagedChange', 16 ) 17 18 logger = logging.getLogger('netbox.staging') 19 20 21 class Branch(ChangeLoggedModel): 22 """ 23 A collection of related StagedChanges. 24 """ 25 name = models.CharField( 26 verbose_name=_('name'), 27 max_length=100, 28 unique=True 29 ) 30 description = models.CharField( 31 verbose_name=_('description'), 32 max_length=200, 33 blank=True 34 ) 35 user = models.ForeignKey( 36 to=get_user_model(), 37 on_delete=models.SET_NULL, 38 blank=True, 39 null=True 40 ) 41 42 class Meta: 43 ordering = ('name',) 44 verbose_name = _('branch') 45 verbose_name_plural = _('branches') 46 47 def __str__(self): 48 return f'{self.name} ({self.pk})' 49 50 def merge(self): 51 logger.info(f'Merging changes in branch {self}') 52 with transaction.atomic(): 53 for change in self.staged_changes.all(): 54 change.apply() 55 self.staged_changes.all().delete() 56 57 58 class StagedChange(CustomValidationMixin, EventRulesMixin, models.Model): 59 """ 60 The prepared creation, modification, or deletion of an object to be applied to the active database at a 61 future point. 
62 """ 63 branch = models.ForeignKey( 64 to=Branch, 65 on_delete=models.CASCADE, 66 related_name='staged_changes' 67 ) 68 action = models.CharField( 69 verbose_name=_('action'), 70 max_length=20, 71 choices=ChangeActionChoices 72 ) 73 object_type = models.ForeignKey( 74 to='contenttypes.ContentType', 75 on_delete=models.CASCADE, 76 related_name='+' 77 ) 78 object_id = models.PositiveBigIntegerField( 79 blank=True, 80 null=True 81 ) 82 object = GenericForeignKey( 83 ct_field='object_type', 84 fk_field='object_id' 85 ) 86 data = models.JSONField( 87 verbose_name=_('data'), 88 blank=True, 89 null=True 90 ) 91 92 class Meta: 93 ordering = ('pk',) 94 indexes = ( 95 models.Index(fields=('object_type', 'object_id')), 96 ) 97 verbose_name = _('staged change') 98 verbose_name_plural = _('staged changes') 99 100 def __str__(self): 101 action = self.get_action_display() 102 app_label, model_name = self.object_type.natural_key() 103 return f"{action} {app_label}.{model_name} ({self.object_id})" 104 105 @property 106 def model(self): 107 return self.object_type.model_class() 108 109 def apply(self): 110 """ 111 Apply the staged create/update/delete action to the database. 112 """ 113 if self.action == ChangeActionChoices.ACTION_CREATE: 114 instance = deserialize_object(self.model, self.data, pk=self.object_id) 115 logger.info(f'Creating {self.model._meta.verbose_name} {instance}') 116 instance.save() 117 118 if self.action == ChangeActionChoices.ACTION_UPDATE: 119 instance = deserialize_object(self.model, self.data, pk=self.object_id) 120 logger.info(f'Updating {self.model._meta.verbose_name} {instance}') 121 instance.save() 122 123 if self.action == ChangeActionChoices.ACTION_DELETE: 124 instance = self.model.objects.get(pk=self.object_id) 125 logger.info(f'Deleting {self.model._meta.verbose_name} {instance}') 126 instance.delete() 127 apply.alters_data = True 128 129 def get_action_color(self): 130 return ChangeActionChoices.colors.get(self.action) 131 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/netbox/extras/models/staging.py b/netbox/extras/models/staging.py --- a/netbox/extras/models/staging.py +++ b/netbox/extras/models/staging.py @@ -4,6 +4,7 @@ from django.contrib.contenttypes.fields import GenericForeignKey from django.db import models, transaction from django.utils.translation import gettext_lazy as _ +from mptt.models import MPTTModel from extras.choices import ChangeActionChoices from netbox.models import ChangeLoggedModel @@ -124,6 +125,11 @@ instance = self.model.objects.get(pk=self.object_id) logger.info(f'Deleting {self.model._meta.verbose_name} {instance}') instance.delete() + + # Rebuild the MPTT tree where applicable + if issubclass(self.model, MPTTModel): + self.model.objects.rebuild() + apply.alters_data = True def get_action_color(self):
{"golden_diff": "diff --git a/netbox/extras/models/staging.py b/netbox/extras/models/staging.py\n--- a/netbox/extras/models/staging.py\n+++ b/netbox/extras/models/staging.py\n@@ -4,6 +4,7 @@\n from django.contrib.contenttypes.fields import GenericForeignKey\n from django.db import models, transaction\n from django.utils.translation import gettext_lazy as _\n+from mptt.models import MPTTModel\n \n from extras.choices import ChangeActionChoices\n from netbox.models import ChangeLoggedModel\n@@ -124,6 +125,11 @@\n instance = self.model.objects.get(pk=self.object_id)\n logger.info(f'Deleting {self.model._meta.verbose_name} {instance}')\n instance.delete()\n+\n+ # Rebuild the MPTT tree where applicable\n+ if issubclass(self.model, MPTTModel):\n+ self.model.objects.rebuild()\n+\n apply.alters_data = True\n \n def get_action_color(self):\n", "issue": "Staged Changes: InventoryItem Fails Sync\n### NetBox version\n\nv3.5.1\n\n### Python version\n\n3.10\n\n### Steps to Reproduce\n\nCurrently when working with staged changes there is a edge-case where the staged changes failed on the merge. \r\n\r\nI believe this is because of the serialiser removing key parts of the data required to recreate the device when `merge()` is called. \r\n\r\nhttps://github.com/netbox-community/netbox/blob/f5a1f83f9fa9d98c945d21eb0f7ccb8cd37fbf59/netbox/netbox/staging.py#L119\r\n\r\nAnd when `serialize_object` is called it removes all MPTT model fields: https://github.com/netbox-community/netbox/blob/f5a1f83f9fa9d98c945d21eb0f7ccb8cd37fbf59/netbox/utilities/utils.py#L157\r\nThis causes the merge to fail because the MPTT fields are now null. In the staged changes table. \r\n\r\nError message:\r\n\r\n```\r\n>>> Branch.objects.all()[0].merge()\r\nTEST\r\n<RestrictedQuerySet [<StagedChange: Create dcim.manufacturer (182)>, <StagedChange: Create dcim.devicetype (177)>, <StagedChange: Create dcim.devicerole (120)>, <StagedChange: Update dcim.site (124)>, <StagedChange: Create dcim.device (9608)>, <StagedChange: Create dcim.device (9609)>, <StagedChange: Create dcim.inventoryitem (848)>, <StagedChange: Create dcim.inventoryitem (849)>]>\r\nCreate dcim.manufacturer (182)\r\nCreate dcim.devicetype (177)\r\nCreate dcim.devicerole (120)\r\nUpdate dcim.site (124)\r\nCreate dcim.device (9608)\r\nCreate dcim.device (9609)\r\nCreate dcim.inventoryitem (848)\r\nTraceback (most recent call last):\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\npsycopg2.errors.NotNullViolation: null value in column \"lft\" of relation \"dcim_inventoryitem\" violates not-null constraint\r\nDETAIL: Failing row contains (2023-08-09 12:54:07.062+00, 2023-08-09 12:54:07.062+00, {}, 848, Inventory Item 1, , , , , , null, f, null, null, null, null, 9608, null, null, null, null, null).\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/opt/netbox/netbox/extras/models/staging.py\", line 52, in merge\r\n change.apply()\r\n File \"/opt/netbox/netbox/extras/models/staging.py\", line 107, in apply\r\n instance.save()\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/core/serializers/base.py\", line 288, in save\r\n models.Model.save_base(self.object, using=using, raw=True, **kwargs)\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/base.py\", line 863, in save_base\r\n updated = self._save_table(\r\n 
File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/base.py\", line 1006, in _save_table\r\n results = self._do_insert(\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/base.py\", line 1047, in _do_insert\r\n return manager._insert(\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/manager.py\", line 85, in manager_method\r\n return getattr(self.get_queryset(), name)(*args, **kwargs)\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/query.py\", line 1791, in _insert\r\n return query.get_compiler(using=using).execute_sql(returning_fields)\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/sql/compiler.py\", line 1660, in execute_sql\r\n cursor.execute(sql, params)\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py\", line 102, in execute\r\n return super().execute(sql, params)\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py\", line 67, in execute\r\n return self._execute_with_wrappers(\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py\", line 80, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py\", line 84, in _execute\r\n with self.db.wrap_database_errors:\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/utils.py\", line 91, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from exc_value\r\n File \"/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\ndjango.db.utils.IntegrityError: null value in column \"lft\" of relation \"dcim_inventoryitem\" violates not-null constraint\r\nDETAIL: Failing row contains (2023-08-09 12:54:07.062+00, 2023-08-09 12:54:07.062+00, {}, 848, Inventory Item 1, , , , , , null, f, null, null, null, null, 9608, null, null, null, null, null).\r\n\r\n```\r\n\r\nScript to recreate:\r\n\r\n```python\r\nfrom netbox.staging import checkout\r\nfrom extras.models import Branch\r\nfrom dcim.models import Device, VirtualChassis, DeviceType, Manufacturer, Site, DeviceRole, InventoryItem\r\n\r\nBranch.objects.all().delete()\r\nbranch = Branch.objects.create(name='Branch 1')\r\n\r\nDevice.objects.filter(name=\"test device\").delete()\r\nDevice.objects.filter(name=\"test device1\").delete()\r\nVirtualChassis.objects.filter(name='VC1').delete()\r\nDeviceType.objects.filter(slug='device-type-1').delete()\r\nManufacturer.objects.filter(slug='manufacturer-1').delete()\r\nDeviceRole.objects.filter(slug='device-role-1').delete()\r\n\r\n\r\nwith checkout(branch):\r\n\r\n manufacturer = Manufacturer.objects.create(\r\n name='Manufacturer', slug='manufacturer-1')\r\n device_type = DeviceType.objects.create(\r\n manufacturer=manufacturer, model='Device Type 1', slug='device-type-1'\r\n )\r\n device_role = DeviceRole.objects.create(\r\n name='Device Role', slug='device-role-1'\r\n )\r\n site, _ = Site.objects.update_or_create(\r\n defaults={'name': \"test\"}, slug=\"test\")\r\n device, _ = Device.objects.update_or_create(defaults={\r\n \"name\": \"test device\", \"device_role\": device_role, \"device_type\": device_type, \"site\": site}, serial=\"123\")\r\n device1, _ = Device.objects.update_or_create(defaults={\r\n \"name\": \"test device1\", \"device_role\": device_role, \"device_type\": device_type, \"site\": site}, serial=\"543\")\r\n inv1 = 
InventoryItem.objects.create(device=device, name='Inventory Item 1'),\r\n inv2 = InventoryItem.objects.create(device=device1, name='Inventory Item 2'),\r\n\r\n\r\n\r\nbranch.merge()\r\n\r\nprint(\"DONE\")\r\n\r\n```\r\n\n\n### Expected Behavior\n\nIf there is a serializer or argument to pass to the serializer that does not remove key fields to recreate objects from the staged changes table. \n\n### Observed Behavior\n\nThe serialized object has key fields removed which are required to save the object. \n", "before_files": [{"content": "import logging\n\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.db import models, transaction\nfrom django.utils.translation import gettext_lazy as _\n\nfrom extras.choices import ChangeActionChoices\nfrom netbox.models import ChangeLoggedModel\nfrom netbox.models.features import *\nfrom utilities.serialization import deserialize_object\n\n__all__ = (\n 'Branch',\n 'StagedChange',\n)\n\nlogger = logging.getLogger('netbox.staging')\n\n\nclass Branch(ChangeLoggedModel):\n \"\"\"\n A collection of related StagedChanges.\n \"\"\"\n name = models.CharField(\n verbose_name=_('name'),\n max_length=100,\n unique=True\n )\n description = models.CharField(\n verbose_name=_('description'),\n max_length=200,\n blank=True\n )\n user = models.ForeignKey(\n to=get_user_model(),\n on_delete=models.SET_NULL,\n blank=True,\n null=True\n )\n\n class Meta:\n ordering = ('name',)\n verbose_name = _('branch')\n verbose_name_plural = _('branches')\n\n def __str__(self):\n return f'{self.name} ({self.pk})'\n\n def merge(self):\n logger.info(f'Merging changes in branch {self}')\n with transaction.atomic():\n for change in self.staged_changes.all():\n change.apply()\n self.staged_changes.all().delete()\n\n\nclass StagedChange(CustomValidationMixin, EventRulesMixin, models.Model):\n \"\"\"\n The prepared creation, modification, or deletion of an object to be applied to the active database at a\n future point.\n \"\"\"\n branch = models.ForeignKey(\n to=Branch,\n on_delete=models.CASCADE,\n related_name='staged_changes'\n )\n action = models.CharField(\n verbose_name=_('action'),\n max_length=20,\n choices=ChangeActionChoices\n )\n object_type = models.ForeignKey(\n to='contenttypes.ContentType',\n on_delete=models.CASCADE,\n related_name='+'\n )\n object_id = models.PositiveBigIntegerField(\n blank=True,\n null=True\n )\n object = GenericForeignKey(\n ct_field='object_type',\n fk_field='object_id'\n )\n data = models.JSONField(\n verbose_name=_('data'),\n blank=True,\n null=True\n )\n\n class Meta:\n ordering = ('pk',)\n indexes = (\n models.Index(fields=('object_type', 'object_id')),\n )\n verbose_name = _('staged change')\n verbose_name_plural = _('staged changes')\n\n def __str__(self):\n action = self.get_action_display()\n app_label, model_name = self.object_type.natural_key()\n return f\"{action} {app_label}.{model_name} ({self.object_id})\"\n\n @property\n def model(self):\n return self.object_type.model_class()\n\n def apply(self):\n \"\"\"\n Apply the staged create/update/delete action to the database.\n \"\"\"\n if self.action == ChangeActionChoices.ACTION_CREATE:\n instance = deserialize_object(self.model, self.data, pk=self.object_id)\n logger.info(f'Creating {self.model._meta.verbose_name} {instance}')\n instance.save()\n\n if self.action == ChangeActionChoices.ACTION_UPDATE:\n instance = deserialize_object(self.model, self.data, pk=self.object_id)\n logger.info(f'Updating 
{self.model._meta.verbose_name} {instance}')\n instance.save()\n\n if self.action == ChangeActionChoices.ACTION_DELETE:\n instance = self.model.objects.get(pk=self.object_id)\n logger.info(f'Deleting {self.model._meta.verbose_name} {instance}')\n instance.delete()\n apply.alters_data = True\n\n def get_action_color(self):\n return ChangeActionChoices.colors.get(self.action)\n", "path": "netbox/extras/models/staging.py"}], "after_files": [{"content": "import logging\n\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.db import models, transaction\nfrom django.utils.translation import gettext_lazy as _\nfrom mptt.models import MPTTModel\n\nfrom extras.choices import ChangeActionChoices\nfrom netbox.models import ChangeLoggedModel\nfrom netbox.models.features import *\nfrom utilities.serialization import deserialize_object\n\n__all__ = (\n 'Branch',\n 'StagedChange',\n)\n\nlogger = logging.getLogger('netbox.staging')\n\n\nclass Branch(ChangeLoggedModel):\n \"\"\"\n A collection of related StagedChanges.\n \"\"\"\n name = models.CharField(\n verbose_name=_('name'),\n max_length=100,\n unique=True\n )\n description = models.CharField(\n verbose_name=_('description'),\n max_length=200,\n blank=True\n )\n user = models.ForeignKey(\n to=get_user_model(),\n on_delete=models.SET_NULL,\n blank=True,\n null=True\n )\n\n class Meta:\n ordering = ('name',)\n verbose_name = _('branch')\n verbose_name_plural = _('branches')\n\n def __str__(self):\n return f'{self.name} ({self.pk})'\n\n def merge(self):\n logger.info(f'Merging changes in branch {self}')\n with transaction.atomic():\n for change in self.staged_changes.all():\n change.apply()\n self.staged_changes.all().delete()\n\n\nclass StagedChange(CustomValidationMixin, EventRulesMixin, models.Model):\n \"\"\"\n The prepared creation, modification, or deletion of an object to be applied to the active database at a\n future point.\n \"\"\"\n branch = models.ForeignKey(\n to=Branch,\n on_delete=models.CASCADE,\n related_name='staged_changes'\n )\n action = models.CharField(\n verbose_name=_('action'),\n max_length=20,\n choices=ChangeActionChoices\n )\n object_type = models.ForeignKey(\n to='contenttypes.ContentType',\n on_delete=models.CASCADE,\n related_name='+'\n )\n object_id = models.PositiveBigIntegerField(\n blank=True,\n null=True\n )\n object = GenericForeignKey(\n ct_field='object_type',\n fk_field='object_id'\n )\n data = models.JSONField(\n verbose_name=_('data'),\n blank=True,\n null=True\n )\n\n class Meta:\n ordering = ('pk',)\n indexes = (\n models.Index(fields=('object_type', 'object_id')),\n )\n verbose_name = _('staged change')\n verbose_name_plural = _('staged changes')\n\n def __str__(self):\n action = self.get_action_display()\n app_label, model_name = self.object_type.natural_key()\n return f\"{action} {app_label}.{model_name} ({self.object_id})\"\n\n @property\n def model(self):\n return self.object_type.model_class()\n\n def apply(self):\n \"\"\"\n Apply the staged create/update/delete action to the database.\n \"\"\"\n if self.action == ChangeActionChoices.ACTION_CREATE:\n instance = deserialize_object(self.model, self.data, pk=self.object_id)\n logger.info(f'Creating {self.model._meta.verbose_name} {instance}')\n instance.save()\n\n if self.action == ChangeActionChoices.ACTION_UPDATE:\n instance = deserialize_object(self.model, self.data, pk=self.object_id)\n logger.info(f'Updating {self.model._meta.verbose_name} {instance}')\n instance.save()\n\n if 
self.action == ChangeActionChoices.ACTION_DELETE:\n instance = self.model.objects.get(pk=self.object_id)\n logger.info(f'Deleting {self.model._meta.verbose_name} {instance}')\n instance.delete()\n\n # Rebuild the MPTT tree where applicable\n if issubclass(self.model, MPTTModel):\n self.model.objects.rebuild()\n\n apply.alters_data = True\n\n def get_action_color(self):\n return ChangeActionChoices.colors.get(self.action)\n", "path": "netbox/extras/models/staging.py"}]}
3,229
210
gh_patches_debug_4285
rasdani/github-patches
git_diff
conda__conda-build-1397
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- --output reports wrong folder for noarch packages In linux-64 packages build with the noarch flag end up in the appropriate directory, but the --output function does not return the correct path. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `conda_build/render.py` Content: ``` 1 # (c) Continuum Analytics, Inc. / http://continuum.io 2 # All Rights Reserved 3 # 4 # conda is distributed under the terms of the BSD 3-clause license. 5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause. 6 7 from __future__ import absolute_import, division, print_function 8 9 from locale import getpreferredencoding 10 import os 11 from os.path import isdir, isfile, abspath 12 import subprocess 13 import sys 14 import tarfile 15 import tempfile 16 17 import yaml 18 19 from .conda_interface import PY3 20 21 from conda_build import exceptions, utils 22 from conda_build.metadata import MetaData 23 import conda_build.source as source 24 from conda_build.completers import all_versions, conda_version 25 from conda_build.utils import rm_rf 26 27 28 def set_language_env_vars(args, parser, config, execute=None): 29 """Given args passed into conda command, set language env vars""" 30 for lang in all_versions: 31 versions = getattr(args, lang) 32 if not versions: 33 continue 34 if versions == ['all']: 35 if all_versions[lang]: 36 versions = all_versions[lang] 37 else: 38 parser.error("'all' is not supported for --%s" % lang) 39 if len(versions) > 1: 40 for ver in versions[:]: 41 setattr(args, lang, [str(ver)]) 42 if execute: 43 execute(args, parser, config) 44 # This is necessary to make all combinations build. 45 setattr(args, lang, versions) 46 return 47 else: 48 version = versions[0] 49 if lang in ('python', 'numpy'): 50 version = int(version.replace('.', '')) 51 setattr(config, conda_version[lang], version) 52 if not len(str(version)) in (2, 3) and lang in ['python', 'numpy']: 53 if all_versions[lang]: 54 raise RuntimeError("%s must be major.minor, like %s, not %s" % 55 (conda_version[lang], all_versions[lang][-1] / 10, version)) 56 else: 57 raise RuntimeError("%s must be major.minor, not %s" % 58 (conda_version[lang], version)) 59 60 # Using --python, --numpy etc. is equivalent to using CONDA_PY, CONDA_NPY, etc. 61 # Auto-set those env variables 62 for var in conda_version.values(): 63 if hasattr(config, var) and getattr(config, var): 64 # Set the env variable. 65 os.environ[var] = str(getattr(config, var)) 66 67 68 def bldpkg_path(m, config): 69 ''' 70 Returns path to built package's tarball given its ``Metadata``. 71 ''' 72 return os.path.join(config.bldpkgs_dir, '%s.tar.bz2' % m.dist()) 73 74 75 def parse_or_try_download(metadata, no_download_source, config, 76 force_download=False): 77 78 need_reparse_in_env = False 79 if (force_download or (not no_download_source and (metadata.uses_vcs_in_meta or 80 metadata.uses_setup_py_in_meta))): 81 82 # this try/catch is for when the tool to download source is actually in 83 # meta.yaml, and not previously installed in builder env. 
84 try: 85 if not config.dirty: 86 source.provide(metadata.path, metadata.get_section('source'), config=config) 87 need_source_download = False 88 try: 89 metadata.parse_again(config=config, permit_undefined_jinja=False) 90 except (ImportError, exceptions.UnableToParseMissingSetuptoolsDependencies): 91 need_reparse_in_env = True 92 except subprocess.CalledProcessError as error: 93 print("Warning: failed to download source. If building, will try " 94 "again after downloading recipe dependencies.") 95 print("Error was: ") 96 print(error) 97 need_source_download = True 98 99 elif not metadata.get_section('source'): 100 need_source_download = False 101 if not os.path.isdir(config.work_dir): 102 os.makedirs(config.work_dir) 103 else: 104 # we have not downloaded source in the render phase. Download it in 105 # the build phase 106 need_source_download = not no_download_source 107 if not need_reparse_in_env: 108 try: 109 metadata.parse_until_resolved(config=config) 110 except exceptions.UnableToParseMissingSetuptoolsDependencies: 111 need_reparse_in_env = True 112 return metadata, need_source_download, need_reparse_in_env 113 114 115 def reparse(metadata, config): 116 """Some things need to be parsed again after the build environment has been created 117 and activated.""" 118 sys.path.insert(0, config.build_prefix) 119 sys.path.insert(0, utils.get_site_packages(config.build_prefix)) 120 metadata.parse_again(config=config, permit_undefined_jinja=False) 121 122 123 def render_recipe(recipe_path, config, no_download_source=False): 124 arg = recipe_path 125 # Don't use byte literals for paths in Python 2 126 if not PY3: 127 arg = arg.decode(getpreferredencoding() or 'utf-8') 128 if isfile(arg): 129 if arg.endswith(('.tar', '.tar.gz', '.tgz', '.tar.bz2')): 130 recipe_dir = tempfile.mkdtemp() 131 t = tarfile.open(arg, 'r:*') 132 t.extractall(path=recipe_dir) 133 t.close() 134 need_cleanup = True 135 else: 136 print("Ignoring non-recipe: %s" % arg) 137 return 138 else: 139 recipe_dir = abspath(arg) 140 need_cleanup = False 141 142 if not isdir(recipe_dir): 143 sys.exit("Error: no such directory: %s" % recipe_dir) 144 145 if config.set_build_id: 146 # updates a unique build id if not already computed 147 config.compute_build_id(os.path.basename(recipe_dir)) 148 try: 149 m = MetaData(recipe_dir, config=config) 150 except exceptions.YamlParsingError as e: 151 sys.stderr.write(e.error_msg()) 152 sys.exit(1) 153 154 config.noarch = m.get_value('build/noarch') 155 m, need_download, need_reparse_in_env = parse_or_try_download(m, 156 no_download_source=no_download_source, 157 config=config) 158 159 if need_cleanup: 160 rm_rf(recipe_dir) 161 162 return m, need_download, need_reparse_in_env 163 164 165 # Next bit of stuff is to support YAML output in the order we expect. 
166 # http://stackoverflow.com/a/17310199/1170370 167 class _MetaYaml(dict): 168 fields = ["package", "source", "build", "requirements", "test", "about", "extra"] 169 170 def to_omap(self): 171 return [(field, self[field]) for field in _MetaYaml.fields if field in self] 172 173 174 def _represent_omap(dumper, data): 175 return dumper.represent_mapping(u'tag:yaml.org,2002:map', data.to_omap()) 176 177 178 def _unicode_representer(dumper, uni): 179 node = yaml.ScalarNode(tag=u'tag:yaml.org,2002:str', value=uni) 180 return node 181 182 183 class _IndentDumper(yaml.Dumper): 184 def increase_indent(self, flow=False, indentless=False): 185 return super(_IndentDumper, self).increase_indent(flow, False) 186 187 yaml.add_representer(_MetaYaml, _represent_omap) 188 if PY3: 189 yaml.add_representer(str, _unicode_representer) 190 unicode = None # silence pyflakes about unicode not existing in py3 191 else: 192 yaml.add_representer(unicode, _unicode_representer) 193 194 195 def output_yaml(metadata, filename=None): 196 output = yaml.dump(_MetaYaml(metadata.meta), Dumper=_IndentDumper, 197 default_flow_style=False, indent=4) 198 if filename: 199 with open(filename, "w") as f: 200 f.write(output) 201 return "Wrote yaml to %s" % filename 202 else: 203 return output 204 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/conda_build/render.py b/conda_build/render.py --- a/conda_build/render.py +++ b/conda_build/render.py @@ -69,7 +69,8 @@ ''' Returns path to built package's tarball given its ``Metadata``. ''' - return os.path.join(config.bldpkgs_dir, '%s.tar.bz2' % m.dist()) + output_dir = m.info_index()['subdir'] + return os.path.join(os.path.dirname(config.bldpkgs_dir), output_dir, '%s.tar.bz2' % m.dist()) def parse_or_try_download(metadata, no_download_source, config,
{"golden_diff": "diff --git a/conda_build/render.py b/conda_build/render.py\n--- a/conda_build/render.py\n+++ b/conda_build/render.py\n@@ -69,7 +69,8 @@\n '''\n Returns path to built package's tarball given its ``Metadata``.\n '''\n- return os.path.join(config.bldpkgs_dir, '%s.tar.bz2' % m.dist())\n+ output_dir = m.info_index()['subdir']\n+ return os.path.join(os.path.dirname(config.bldpkgs_dir), output_dir, '%s.tar.bz2' % m.dist())\n \n \n def parse_or_try_download(metadata, no_download_source, config,\n", "issue": "--output reports wrong folder for noarch packages\nIn linux-64 packages build with the noarch flag end up in the appropriate directory, but the --output function does not return the correct path. \n\n", "before_files": [{"content": "# (c) Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom locale import getpreferredencoding\nimport os\nfrom os.path import isdir, isfile, abspath\nimport subprocess\nimport sys\nimport tarfile\nimport tempfile\n\nimport yaml\n\nfrom .conda_interface import PY3\n\nfrom conda_build import exceptions, utils\nfrom conda_build.metadata import MetaData\nimport conda_build.source as source\nfrom conda_build.completers import all_versions, conda_version\nfrom conda_build.utils import rm_rf\n\n\ndef set_language_env_vars(args, parser, config, execute=None):\n \"\"\"Given args passed into conda command, set language env vars\"\"\"\n for lang in all_versions:\n versions = getattr(args, lang)\n if not versions:\n continue\n if versions == ['all']:\n if all_versions[lang]:\n versions = all_versions[lang]\n else:\n parser.error(\"'all' is not supported for --%s\" % lang)\n if len(versions) > 1:\n for ver in versions[:]:\n setattr(args, lang, [str(ver)])\n if execute:\n execute(args, parser, config)\n # This is necessary to make all combinations build.\n setattr(args, lang, versions)\n return\n else:\n version = versions[0]\n if lang in ('python', 'numpy'):\n version = int(version.replace('.', ''))\n setattr(config, conda_version[lang], version)\n if not len(str(version)) in (2, 3) and lang in ['python', 'numpy']:\n if all_versions[lang]:\n raise RuntimeError(\"%s must be major.minor, like %s, not %s\" %\n (conda_version[lang], all_versions[lang][-1] / 10, version))\n else:\n raise RuntimeError(\"%s must be major.minor, not %s\" %\n (conda_version[lang], version))\n\n # Using --python, --numpy etc. 
is equivalent to using CONDA_PY, CONDA_NPY, etc.\n # Auto-set those env variables\n for var in conda_version.values():\n if hasattr(config, var) and getattr(config, var):\n # Set the env variable.\n os.environ[var] = str(getattr(config, var))\n\n\ndef bldpkg_path(m, config):\n '''\n Returns path to built package's tarball given its ``Metadata``.\n '''\n return os.path.join(config.bldpkgs_dir, '%s.tar.bz2' % m.dist())\n\n\ndef parse_or_try_download(metadata, no_download_source, config,\n force_download=False):\n\n need_reparse_in_env = False\n if (force_download or (not no_download_source and (metadata.uses_vcs_in_meta or\n metadata.uses_setup_py_in_meta))):\n\n # this try/catch is for when the tool to download source is actually in\n # meta.yaml, and not previously installed in builder env.\n try:\n if not config.dirty:\n source.provide(metadata.path, metadata.get_section('source'), config=config)\n need_source_download = False\n try:\n metadata.parse_again(config=config, permit_undefined_jinja=False)\n except (ImportError, exceptions.UnableToParseMissingSetuptoolsDependencies):\n need_reparse_in_env = True\n except subprocess.CalledProcessError as error:\n print(\"Warning: failed to download source. If building, will try \"\n \"again after downloading recipe dependencies.\")\n print(\"Error was: \")\n print(error)\n need_source_download = True\n\n elif not metadata.get_section('source'):\n need_source_download = False\n if not os.path.isdir(config.work_dir):\n os.makedirs(config.work_dir)\n else:\n # we have not downloaded source in the render phase. Download it in\n # the build phase\n need_source_download = not no_download_source\n if not need_reparse_in_env:\n try:\n metadata.parse_until_resolved(config=config)\n except exceptions.UnableToParseMissingSetuptoolsDependencies:\n need_reparse_in_env = True\n return metadata, need_source_download, need_reparse_in_env\n\n\ndef reparse(metadata, config):\n \"\"\"Some things need to be parsed again after the build environment has been created\n and activated.\"\"\"\n sys.path.insert(0, config.build_prefix)\n sys.path.insert(0, utils.get_site_packages(config.build_prefix))\n metadata.parse_again(config=config, permit_undefined_jinja=False)\n\n\ndef render_recipe(recipe_path, config, no_download_source=False):\n arg = recipe_path\n # Don't use byte literals for paths in Python 2\n if not PY3:\n arg = arg.decode(getpreferredencoding() or 'utf-8')\n if isfile(arg):\n if arg.endswith(('.tar', '.tar.gz', '.tgz', '.tar.bz2')):\n recipe_dir = tempfile.mkdtemp()\n t = tarfile.open(arg, 'r:*')\n t.extractall(path=recipe_dir)\n t.close()\n need_cleanup = True\n else:\n print(\"Ignoring non-recipe: %s\" % arg)\n return\n else:\n recipe_dir = abspath(arg)\n need_cleanup = False\n\n if not isdir(recipe_dir):\n sys.exit(\"Error: no such directory: %s\" % recipe_dir)\n\n if config.set_build_id:\n # updates a unique build id if not already computed\n config.compute_build_id(os.path.basename(recipe_dir))\n try:\n m = MetaData(recipe_dir, config=config)\n except exceptions.YamlParsingError as e:\n sys.stderr.write(e.error_msg())\n sys.exit(1)\n\n config.noarch = m.get_value('build/noarch')\n m, need_download, need_reparse_in_env = parse_or_try_download(m,\n no_download_source=no_download_source,\n config=config)\n\n if need_cleanup:\n rm_rf(recipe_dir)\n\n return m, need_download, need_reparse_in_env\n\n\n# Next bit of stuff is to support YAML output in the order we expect.\n# http://stackoverflow.com/a/17310199/1170370\nclass _MetaYaml(dict):\n fields = 
[\"package\", \"source\", \"build\", \"requirements\", \"test\", \"about\", \"extra\"]\n\n def to_omap(self):\n return [(field, self[field]) for field in _MetaYaml.fields if field in self]\n\n\ndef _represent_omap(dumper, data):\n return dumper.represent_mapping(u'tag:yaml.org,2002:map', data.to_omap())\n\n\ndef _unicode_representer(dumper, uni):\n node = yaml.ScalarNode(tag=u'tag:yaml.org,2002:str', value=uni)\n return node\n\n\nclass _IndentDumper(yaml.Dumper):\n def increase_indent(self, flow=False, indentless=False):\n return super(_IndentDumper, self).increase_indent(flow, False)\n\nyaml.add_representer(_MetaYaml, _represent_omap)\nif PY3:\n yaml.add_representer(str, _unicode_representer)\n unicode = None # silence pyflakes about unicode not existing in py3\nelse:\n yaml.add_representer(unicode, _unicode_representer)\n\n\ndef output_yaml(metadata, filename=None):\n output = yaml.dump(_MetaYaml(metadata.meta), Dumper=_IndentDumper,\n default_flow_style=False, indent=4)\n if filename:\n with open(filename, \"w\") as f:\n f.write(output)\n return \"Wrote yaml to %s\" % filename\n else:\n return output\n", "path": "conda_build/render.py"}], "after_files": [{"content": "# (c) Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom locale import getpreferredencoding\nimport os\nfrom os.path import isdir, isfile, abspath\nimport subprocess\nimport sys\nimport tarfile\nimport tempfile\n\nimport yaml\n\nfrom .conda_interface import PY3\n\nfrom conda_build import exceptions, utils\nfrom conda_build.metadata import MetaData\nimport conda_build.source as source\nfrom conda_build.completers import all_versions, conda_version\nfrom conda_build.utils import rm_rf\n\n\ndef set_language_env_vars(args, parser, config, execute=None):\n \"\"\"Given args passed into conda command, set language env vars\"\"\"\n for lang in all_versions:\n versions = getattr(args, lang)\n if not versions:\n continue\n if versions == ['all']:\n if all_versions[lang]:\n versions = all_versions[lang]\n else:\n parser.error(\"'all' is not supported for --%s\" % lang)\n if len(versions) > 1:\n for ver in versions[:]:\n setattr(args, lang, [str(ver)])\n if execute:\n execute(args, parser, config)\n # This is necessary to make all combinations build.\n setattr(args, lang, versions)\n return\n else:\n version = versions[0]\n if lang in ('python', 'numpy'):\n version = int(version.replace('.', ''))\n setattr(config, conda_version[lang], version)\n if not len(str(version)) in (2, 3) and lang in ['python', 'numpy']:\n if all_versions[lang]:\n raise RuntimeError(\"%s must be major.minor, like %s, not %s\" %\n (conda_version[lang], all_versions[lang][-1] / 10, version))\n else:\n raise RuntimeError(\"%s must be major.minor, not %s\" %\n (conda_version[lang], version))\n\n # Using --python, --numpy etc. 
is equivalent to using CONDA_PY, CONDA_NPY, etc.\n # Auto-set those env variables\n for var in conda_version.values():\n if hasattr(config, var) and getattr(config, var):\n # Set the env variable.\n os.environ[var] = str(getattr(config, var))\n\n\ndef bldpkg_path(m, config):\n '''\n Returns path to built package's tarball given its ``Metadata``.\n '''\n output_dir = m.info_index()['subdir']\n return os.path.join(os.path.dirname(config.bldpkgs_dir), output_dir, '%s.tar.bz2' % m.dist())\n\n\ndef parse_or_try_download(metadata, no_download_source, config,\n force_download=False):\n\n need_reparse_in_env = False\n if (force_download or (not no_download_source and (metadata.uses_vcs_in_meta or\n metadata.uses_setup_py_in_meta))):\n\n # this try/catch is for when the tool to download source is actually in\n # meta.yaml, and not previously installed in builder env.\n try:\n if not config.dirty:\n source.provide(metadata.path, metadata.get_section('source'), config=config)\n need_source_download = False\n try:\n metadata.parse_again(config=config, permit_undefined_jinja=False)\n except (ImportError, exceptions.UnableToParseMissingSetuptoolsDependencies):\n need_reparse_in_env = True\n except subprocess.CalledProcessError as error:\n print(\"Warning: failed to download source. If building, will try \"\n \"again after downloading recipe dependencies.\")\n print(\"Error was: \")\n print(error)\n need_source_download = True\n\n elif not metadata.get_section('source'):\n need_source_download = False\n if not os.path.isdir(config.work_dir):\n os.makedirs(config.work_dir)\n else:\n # we have not downloaded source in the render phase. Download it in\n # the build phase\n need_source_download = not no_download_source\n if not need_reparse_in_env:\n try:\n metadata.parse_until_resolved(config=config)\n except exceptions.UnableToParseMissingSetuptoolsDependencies:\n need_reparse_in_env = True\n return metadata, need_source_download, need_reparse_in_env\n\n\ndef reparse(metadata, config):\n \"\"\"Some things need to be parsed again after the build environment has been created\n and activated.\"\"\"\n sys.path.insert(0, config.build_prefix)\n sys.path.insert(0, utils.get_site_packages(config.build_prefix))\n metadata.parse_again(config=config, permit_undefined_jinja=False)\n\n\ndef render_recipe(recipe_path, config, no_download_source=False):\n arg = recipe_path\n # Don't use byte literals for paths in Python 2\n if not PY3:\n arg = arg.decode(getpreferredencoding() or 'utf-8')\n if isfile(arg):\n if arg.endswith(('.tar', '.tar.gz', '.tgz', '.tar.bz2')):\n recipe_dir = tempfile.mkdtemp()\n t = tarfile.open(arg, 'r:*')\n t.extractall(path=recipe_dir)\n t.close()\n need_cleanup = True\n else:\n print(\"Ignoring non-recipe: %s\" % arg)\n return\n else:\n recipe_dir = abspath(arg)\n need_cleanup = False\n\n if not isdir(recipe_dir):\n sys.exit(\"Error: no such directory: %s\" % recipe_dir)\n\n if config.set_build_id:\n # updates a unique build id if not already computed\n config.compute_build_id(os.path.basename(recipe_dir))\n try:\n m = MetaData(recipe_dir, config=config)\n except exceptions.YamlParsingError as e:\n sys.stderr.write(e.error_msg())\n sys.exit(1)\n\n config.noarch = m.get_value('build/noarch')\n m, need_download, need_reparse_in_env = parse_or_try_download(m,\n no_download_source=no_download_source,\n config=config)\n\n if need_cleanup:\n rm_rf(recipe_dir)\n\n return m, need_download, need_reparse_in_env\n\n\n# Next bit of stuff is to support YAML output in the order we expect.\n# 
http://stackoverflow.com/a/17310199/1170370\nclass _MetaYaml(dict):\n fields = [\"package\", \"source\", \"build\", \"requirements\", \"test\", \"about\", \"extra\"]\n\n def to_omap(self):\n return [(field, self[field]) for field in _MetaYaml.fields if field in self]\n\n\ndef _represent_omap(dumper, data):\n return dumper.represent_mapping(u'tag:yaml.org,2002:map', data.to_omap())\n\n\ndef _unicode_representer(dumper, uni):\n node = yaml.ScalarNode(tag=u'tag:yaml.org,2002:str', value=uni)\n return node\n\n\nclass _IndentDumper(yaml.Dumper):\n def increase_indent(self, flow=False, indentless=False):\n return super(_IndentDumper, self).increase_indent(flow, False)\n\nyaml.add_representer(_MetaYaml, _represent_omap)\nif PY3:\n yaml.add_representer(str, _unicode_representer)\n unicode = None # silence pyflakes about unicode not existing in py3\nelse:\n yaml.add_representer(unicode, _unicode_representer)\n\n\ndef output_yaml(metadata, filename=None):\n output = yaml.dump(_MetaYaml(metadata.meta), Dumper=_IndentDumper,\n default_flow_style=False, indent=4)\n if filename:\n with open(filename, \"w\") as f:\n f.write(output)\n return \"Wrote yaml to %s\" % filename\n else:\n return output\n", "path": "conda_build/render.py"}]}
2,515
141
gh_patches_debug_35455
rasdani/github-patches
git_diff
DataDog__dd-agent-773
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Support for varnish instances by name We have several varnish instances on one server. Each has a name and can be referenced with `-n name` via the command line tools. ``` -n Specifies the name of the varnishd instance to get logs from. If -n is not specified, the host name is used. ``` (from https://www.varnish-cache.org/docs/3.0/reference/varnishstat.html) It would be nice if the name could be specified in the yaml and the check would use it automatically. I've tried adding `-n name` to the bin location but that fails. I'll probably try to add this later when I have a chance, unless someone else can do it quickly. <!--- @huboard:{"order":770.0} --> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `checks.d/varnish.py` Content: ``` 1 import xml.parsers.expat # python 2.4 compatible 2 import re 3 import subprocess 4 5 from checks import AgentCheck 6 7 class Varnish(AgentCheck): 8 # XML parsing bits, a.k.a. Kafka in Code 9 def _reset(self): 10 self._current_element = "" 11 self._current_metric = "varnish" 12 self._current_value = 0 13 self._current_str = "" 14 self._current_type = "" 15 16 def _start_element(self, name, attrs): 17 self._current_element = name 18 19 def _end_element(self, name): 20 if name == "stat": 21 m_name = self.normalize(self._current_metric) 22 if self._current_type in ("a", "c"): 23 self.rate(m_name, long(self._current_value)) 24 elif self._current_type in ("i", "g"): 25 self.gauge(m_name, long(self._current_value)) 26 else: 27 # Unsupported data type, ignore 28 self._reset() 29 return # don't save 30 31 # reset for next stat element 32 self._reset() 33 elif name in ("type", "ident", "name"): 34 self._current_metric += "." + self._current_str 35 36 def _char_data(self, data): 37 self.log.debug("Data %s [%s]" % (data, self._current_element)) 38 data = data.strip() 39 if len(data) > 0 and self._current_element != "": 40 if self._current_element == "value": 41 self._current_value = long(data) 42 elif self._current_element == "flag": 43 self._current_type = data 44 else: 45 self._current_str = data 46 47 def check(self, instance): 48 """Extract stats from varnishstat -x 49 50 The text option (-1) is not reliable enough when counters get large. 51 VBE.media_video_prd_services_01(10.93.67.16,,8080).happy18446744073709551615 52 53 2 types of data, "a" for counter ("c" in newer versions of varnish), "i" for gauge ("g") 54 https://github.com/varnish/Varnish-Cache/blob/master/include/tbl/vsc_fields.h 55 56 Bitmaps are not supported. 57 58 <varnishstat> 59 <stat> 60 <name>fetch_304</name> 61 <value>0</value> 62 <flag>a</flag> 63 <description>Fetch no body (304)</description> 64 </stat> 65 <stat> 66 <name>n_sess_mem</name> 67 <value>334</value> 68 <flag>i</flag> 69 <description>N struct sess_mem</description> 70 </stat> 71 <stat> 72 <type>LCK</type> 73 <ident>vcl</ident> 74 <name>creat</name> 75 <value>1</value> 76 <flag>a</flag> 77 <description>Created locks</description> 78 </stat> 79 </varnishstat> 80 """ 81 # Not configured? Not a problem. 
82 if instance.get("varnishstat", None) is None: 83 raise Exception("varnishstat is not configured") 84 tags = instance.get('tags', []) 85 86 # Get the varnish version from varnishstat 87 output, error = subprocess.Popen([instance.get("varnishstat"), "-V"], 88 stdout=subprocess.PIPE, 89 stderr=subprocess.PIPE).communicate() 90 91 # Assumptions regarding varnish's version 92 use_xml = True 93 arg = "-x" # varnishstat argument 94 version = 3 95 96 m1 = re.search(r"varnish-(\d+)", output, re.MULTILINE) 97 # v2 prints the version on stderr, v3 on stdout 98 m2 = re.search(r"varnish-(\d+)", error, re.MULTILINE) 99 100 if m1 is None and m2 is None: 101 self.log.warn("Cannot determine the version of varnishstat, assuming 3 or greater") 102 self.warning("Cannot determine the version of varnishstat, assuming 3 or greater") 103 else: 104 if m1 is not None: 105 version = int(m1.group(1)) 106 elif m2 is not None: 107 version = int(m2.group(1)) 108 109 self.log.debug("Varnish version: %d" % version) 110 111 # Location of varnishstat 112 if version <= 2: 113 use_xml = False 114 arg = "-1" 115 116 output, error = subprocess.Popen([instance.get("varnishstat"), arg], 117 stdout=subprocess.PIPE, 118 stderr=subprocess.PIPE).communicate() 119 if error and len(error) > 0: 120 self.log.error(error) 121 self._parse_varnishstat(output, use_xml, tags) 122 123 def _parse_varnishstat(self, output, use_xml, tags=None): 124 tags = tags or [] 125 if use_xml: 126 p = xml.parsers.expat.ParserCreate() 127 p.StartElementHandler = self._start_element 128 p.EndElementHandler = self._end_element 129 p.CharacterDataHandler = self._char_data 130 self._reset() 131 p.Parse(output, True) 132 else: 133 for line in output.split("\n"): 134 self.log.debug("Parsing varnish results: %s" % line) 135 fields = line.split() 136 if len(fields) < 3: 137 break 138 name, gauge_val, rate_val = fields[0], fields[1], fields[2] 139 metric_name = self.normalize(name, prefix="varnish") 140 141 # Now figure out which value to pick 142 if rate_val.lower() in ("nan", "."): 143 # col 2 matters 144 self.log.debug("Varnish (gauge) %s %d" % (metric_name, int(gauge_val))) 145 self.gauge(metric_name, int(gauge_val)) 146 else: 147 # col 3 has a rate (since restart) 148 self.log.debug("Varnish (rate) %s %d" % (metric_name, int(gauge_val))) 149 self.rate(metric_name, float(gauge_val)) 150 151 @staticmethod 152 def parse_agent_config(agentConfig): 153 if not agentConfig.get('varnishstat'): 154 return False 155 156 return { 157 'instances': [{'varnishstat': agentConfig.get('varnishstat')}] 158 } 159 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/checks.d/varnish.py b/checks.d/varnish.py --- a/checks.d/varnish.py +++ b/checks.d/varnish.py @@ -82,6 +82,7 @@ if instance.get("varnishstat", None) is None: raise Exception("varnishstat is not configured") tags = instance.get('tags', []) + name = instance.get('name') # Get the varnish version from varnishstat output, error = subprocess.Popen([instance.get("varnishstat"), "-V"], @@ -113,9 +114,19 @@ use_xml = False arg = "-1" - output, error = subprocess.Popen([instance.get("varnishstat"), arg], - stdout=subprocess.PIPE, - stderr=subprocess.PIPE).communicate() + cmd = [instance.get("varnishstat"), arg] + if name is not None: + cmd.extend(['-n', name]) + tags += [u'varnish_name:%s' % name] + else: + tags += [u'varnish_name:default'] + try: + proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, + stderr=subprocess.PIPE) + output, error = proc.communicate() + except Exception: + self.log.error(u"Failed to run %s" % repr(cmd)) + raise if error and len(error) > 0: self.log.error(error) self._parse_varnishstat(output, use_xml, tags) @@ -142,11 +153,11 @@ if rate_val.lower() in ("nan", "."): # col 2 matters self.log.debug("Varnish (gauge) %s %d" % (metric_name, int(gauge_val))) - self.gauge(metric_name, int(gauge_val)) + self.gauge(metric_name, int(gauge_val), tags=tags) else: # col 3 has a rate (since restart) self.log.debug("Varnish (rate) %s %d" % (metric_name, int(gauge_val))) - self.rate(metric_name, float(gauge_val)) + self.rate(metric_name, float(gauge_val), tags=tags) @staticmethod def parse_agent_config(agentConfig):
{"golden_diff": "diff --git a/checks.d/varnish.py b/checks.d/varnish.py\n--- a/checks.d/varnish.py\n+++ b/checks.d/varnish.py\n@@ -82,6 +82,7 @@\n if instance.get(\"varnishstat\", None) is None:\n raise Exception(\"varnishstat is not configured\")\n tags = instance.get('tags', [])\n+ name = instance.get('name')\n \n # Get the varnish version from varnishstat\n output, error = subprocess.Popen([instance.get(\"varnishstat\"), \"-V\"],\n@@ -113,9 +114,19 @@\n use_xml = False\n arg = \"-1\"\n \n- output, error = subprocess.Popen([instance.get(\"varnishstat\"), arg],\n- stdout=subprocess.PIPE,\n- stderr=subprocess.PIPE).communicate()\n+ cmd = [instance.get(\"varnishstat\"), arg]\n+ if name is not None:\n+ cmd.extend(['-n', name])\n+ tags += [u'varnish_name:%s' % name]\n+ else:\n+ tags += [u'varnish_name:default']\n+ try:\n+ proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,\n+ stderr=subprocess.PIPE)\n+ output, error = proc.communicate()\n+ except Exception:\n+ self.log.error(u\"Failed to run %s\" % repr(cmd))\n+ raise\n if error and len(error) > 0:\n self.log.error(error)\n self._parse_varnishstat(output, use_xml, tags)\n@@ -142,11 +153,11 @@\n if rate_val.lower() in (\"nan\", \".\"):\n # col 2 matters\n self.log.debug(\"Varnish (gauge) %s %d\" % (metric_name, int(gauge_val)))\n- self.gauge(metric_name, int(gauge_val))\n+ self.gauge(metric_name, int(gauge_val), tags=tags)\n else:\n # col 3 has a rate (since restart)\n self.log.debug(\"Varnish (rate) %s %d\" % (metric_name, int(gauge_val)))\n- self.rate(metric_name, float(gauge_val))\n+ self.rate(metric_name, float(gauge_val), tags=tags)\n \n @staticmethod\n def parse_agent_config(agentConfig):\n", "issue": "Support for varnish instances by name\nWe have several varnish instances on one server. Each has a name and can be referenced with `-n name` via the command line tools.\n\n```\n-n Specifies the name of the varnishd instance to get logs from. If -n is not specified, the host name is used.\n```\n\n(from https://www.varnish-cache.org/docs/3.0/reference/varnishstat.html)\n\nIt would be nice if the name could be specified in the yaml and the check would use it automatically. I've tried adding `-n name` to the bin location but that fails.\n\nI'll probably try to add this later when I have a chance, unless someone else can do it quickly.\n\n<!---\n@huboard:{\"order\":770.0}\n-->\n\n", "before_files": [{"content": "import xml.parsers.expat # python 2.4 compatible\nimport re\nimport subprocess\n\nfrom checks import AgentCheck\n\nclass Varnish(AgentCheck):\n # XML parsing bits, a.k.a. 
Kafka in Code\n def _reset(self):\n self._current_element = \"\"\n self._current_metric = \"varnish\"\n self._current_value = 0\n self._current_str = \"\"\n self._current_type = \"\"\n\n def _start_element(self, name, attrs):\n self._current_element = name\n\n def _end_element(self, name):\n if name == \"stat\":\n m_name = self.normalize(self._current_metric)\n if self._current_type in (\"a\", \"c\"):\n self.rate(m_name, long(self._current_value))\n elif self._current_type in (\"i\", \"g\"):\n self.gauge(m_name, long(self._current_value))\n else:\n # Unsupported data type, ignore\n self._reset()\n return # don't save\n\n # reset for next stat element\n self._reset()\n elif name in (\"type\", \"ident\", \"name\"):\n self._current_metric += \".\" + self._current_str\n\n def _char_data(self, data):\n self.log.debug(\"Data %s [%s]\" % (data, self._current_element))\n data = data.strip()\n if len(data) > 0 and self._current_element != \"\":\n if self._current_element == \"value\":\n self._current_value = long(data)\n elif self._current_element == \"flag\":\n self._current_type = data\n else:\n self._current_str = data\n\n def check(self, instance):\n \"\"\"Extract stats from varnishstat -x\n\n The text option (-1) is not reliable enough when counters get large.\n VBE.media_video_prd_services_01(10.93.67.16,,8080).happy18446744073709551615\n\n 2 types of data, \"a\" for counter (\"c\" in newer versions of varnish), \"i\" for gauge (\"g\")\n https://github.com/varnish/Varnish-Cache/blob/master/include/tbl/vsc_fields.h\n\n Bitmaps are not supported.\n\n <varnishstat>\n <stat>\n <name>fetch_304</name>\n <value>0</value>\n <flag>a</flag>\n <description>Fetch no body (304)</description>\n </stat>\n <stat>\n <name>n_sess_mem</name>\n <value>334</value>\n <flag>i</flag>\n <description>N struct sess_mem</description>\n </stat>\n <stat>\n <type>LCK</type>\n <ident>vcl</ident>\n <name>creat</name>\n <value>1</value>\n <flag>a</flag>\n <description>Created locks</description>\n </stat>\n </varnishstat>\n \"\"\"\n # Not configured? 
Not a problem.\n if instance.get(\"varnishstat\", None) is None:\n raise Exception(\"varnishstat is not configured\")\n tags = instance.get('tags', [])\n\n # Get the varnish version from varnishstat\n output, error = subprocess.Popen([instance.get(\"varnishstat\"), \"-V\"],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE).communicate()\n\n # Assumptions regarding varnish's version\n use_xml = True\n arg = \"-x\" # varnishstat argument\n version = 3\n\n m1 = re.search(r\"varnish-(\\d+)\", output, re.MULTILINE)\n # v2 prints the version on stderr, v3 on stdout\n m2 = re.search(r\"varnish-(\\d+)\", error, re.MULTILINE)\n\n if m1 is None and m2 is None:\n self.log.warn(\"Cannot determine the version of varnishstat, assuming 3 or greater\")\n self.warning(\"Cannot determine the version of varnishstat, assuming 3 or greater\")\n else:\n if m1 is not None:\n version = int(m1.group(1))\n elif m2 is not None:\n version = int(m2.group(1))\n\n self.log.debug(\"Varnish version: %d\" % version)\n\n # Location of varnishstat\n if version <= 2:\n use_xml = False\n arg = \"-1\"\n\n output, error = subprocess.Popen([instance.get(\"varnishstat\"), arg],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE).communicate()\n if error and len(error) > 0:\n self.log.error(error)\n self._parse_varnishstat(output, use_xml, tags)\n\n def _parse_varnishstat(self, output, use_xml, tags=None):\n tags = tags or []\n if use_xml:\n p = xml.parsers.expat.ParserCreate()\n p.StartElementHandler = self._start_element\n p.EndElementHandler = self._end_element\n p.CharacterDataHandler = self._char_data\n self._reset()\n p.Parse(output, True)\n else:\n for line in output.split(\"\\n\"):\n self.log.debug(\"Parsing varnish results: %s\" % line)\n fields = line.split()\n if len(fields) < 3:\n break\n name, gauge_val, rate_val = fields[0], fields[1], fields[2]\n metric_name = self.normalize(name, prefix=\"varnish\")\n\n # Now figure out which value to pick\n if rate_val.lower() in (\"nan\", \".\"):\n # col 2 matters\n self.log.debug(\"Varnish (gauge) %s %d\" % (metric_name, int(gauge_val)))\n self.gauge(metric_name, int(gauge_val))\n else:\n # col 3 has a rate (since restart)\n self.log.debug(\"Varnish (rate) %s %d\" % (metric_name, int(gauge_val)))\n self.rate(metric_name, float(gauge_val))\n\n @staticmethod\n def parse_agent_config(agentConfig):\n if not agentConfig.get('varnishstat'):\n return False\n\n return {\n 'instances': [{'varnishstat': agentConfig.get('varnishstat')}]\n }\n", "path": "checks.d/varnish.py"}], "after_files": [{"content": "import xml.parsers.expat # python 2.4 compatible\nimport re\nimport subprocess\n\nfrom checks import AgentCheck\n\nclass Varnish(AgentCheck):\n # XML parsing bits, a.k.a. 
Kafka in Code\n def _reset(self):\n self._current_element = \"\"\n self._current_metric = \"varnish\"\n self._current_value = 0\n self._current_str = \"\"\n self._current_type = \"\"\n\n def _start_element(self, name, attrs):\n self._current_element = name\n\n def _end_element(self, name):\n if name == \"stat\":\n m_name = self.normalize(self._current_metric)\n if self._current_type in (\"a\", \"c\"):\n self.rate(m_name, long(self._current_value))\n elif self._current_type in (\"i\", \"g\"):\n self.gauge(m_name, long(self._current_value))\n else:\n # Unsupported data type, ignore\n self._reset()\n return # don't save\n\n # reset for next stat element\n self._reset()\n elif name in (\"type\", \"ident\", \"name\"):\n self._current_metric += \".\" + self._current_str\n\n def _char_data(self, data):\n self.log.debug(\"Data %s [%s]\" % (data, self._current_element))\n data = data.strip()\n if len(data) > 0 and self._current_element != \"\":\n if self._current_element == \"value\":\n self._current_value = long(data)\n elif self._current_element == \"flag\":\n self._current_type = data\n else:\n self._current_str = data\n\n def check(self, instance):\n \"\"\"Extract stats from varnishstat -x\n\n The text option (-1) is not reliable enough when counters get large.\n VBE.media_video_prd_services_01(10.93.67.16,,8080).happy18446744073709551615\n\n 2 types of data, \"a\" for counter (\"c\" in newer versions of varnish), \"i\" for gauge (\"g\")\n https://github.com/varnish/Varnish-Cache/blob/master/include/tbl/vsc_fields.h\n\n Bitmaps are not supported.\n\n <varnishstat>\n <stat>\n <name>fetch_304</name>\n <value>0</value>\n <flag>a</flag>\n <description>Fetch no body (304)</description>\n </stat>\n <stat>\n <name>n_sess_mem</name>\n <value>334</value>\n <flag>i</flag>\n <description>N struct sess_mem</description>\n </stat>\n <stat>\n <type>LCK</type>\n <ident>vcl</ident>\n <name>creat</name>\n <value>1</value>\n <flag>a</flag>\n <description>Created locks</description>\n </stat>\n </varnishstat>\n \"\"\"\n # Not configured? 
Not a problem.\n if instance.get(\"varnishstat\", None) is None:\n raise Exception(\"varnishstat is not configured\")\n tags = instance.get('tags', [])\n name = instance.get('name')\n\n # Get the varnish version from varnishstat\n output, error = subprocess.Popen([instance.get(\"varnishstat\"), \"-V\"],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE).communicate()\n\n # Assumptions regarding varnish's version\n use_xml = True\n arg = \"-x\" # varnishstat argument\n version = 3\n\n m1 = re.search(r\"varnish-(\\d+)\", output, re.MULTILINE)\n # v2 prints the version on stderr, v3 on stdout\n m2 = re.search(r\"varnish-(\\d+)\", error, re.MULTILINE)\n\n if m1 is None and m2 is None:\n self.log.warn(\"Cannot determine the version of varnishstat, assuming 3 or greater\")\n self.warning(\"Cannot determine the version of varnishstat, assuming 3 or greater\")\n else:\n if m1 is not None:\n version = int(m1.group(1))\n elif m2 is not None:\n version = int(m2.group(1))\n\n self.log.debug(\"Varnish version: %d\" % version)\n\n # Location of varnishstat\n if version <= 2:\n use_xml = False\n arg = \"-1\"\n\n cmd = [instance.get(\"varnishstat\"), arg]\n if name is not None:\n cmd.extend(['-n', name])\n tags += [u'varnish_name:%s' % name]\n else:\n tags += [u'varnish_name:default']\n try:\n proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n output, error = proc.communicate()\n except Exception:\n self.log.error(u\"Failed to run %s\" % repr(cmd))\n raise\n if error and len(error) > 0:\n self.log.error(error)\n self._parse_varnishstat(output, use_xml, tags)\n\n def _parse_varnishstat(self, output, use_xml, tags=None):\n tags = tags or []\n if use_xml:\n p = xml.parsers.expat.ParserCreate()\n p.StartElementHandler = self._start_element\n p.EndElementHandler = self._end_element\n p.CharacterDataHandler = self._char_data\n self._reset()\n p.Parse(output, True)\n else:\n for line in output.split(\"\\n\"):\n self.log.debug(\"Parsing varnish results: %s\" % line)\n fields = line.split()\n if len(fields) < 3:\n break\n name, gauge_val, rate_val = fields[0], fields[1], fields[2]\n metric_name = self.normalize(name, prefix=\"varnish\")\n\n # Now figure out which value to pick\n if rate_val.lower() in (\"nan\", \".\"):\n # col 2 matters\n self.log.debug(\"Varnish (gauge) %s %d\" % (metric_name, int(gauge_val)))\n self.gauge(metric_name, int(gauge_val), tags=tags)\n else:\n # col 3 has a rate (since restart)\n self.log.debug(\"Varnish (rate) %s %d\" % (metric_name, int(gauge_val)))\n self.rate(metric_name, float(gauge_val), tags=tags)\n\n @staticmethod\n def parse_agent_config(agentConfig):\n if not agentConfig.get('varnishstat'):\n return False\n\n return {\n 'instances': [{'varnishstat': agentConfig.get('varnishstat')}]\n }\n", "path": "checks.d/varnish.py"}]}
2,177
511
gh_patches_debug_13296
rasdani/github-patches
git_diff
qtile__qtile-1687
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Ampersands need to be escaped in WindowName widget # Issue description Ampersands in window names need to be changed in WindowName widget to "&amp;" before being passed to Pango/Cairo # Qtile version 0.15.1 # Stack traces Exception: parse_markup() failed for b'Seth Lakeman - King & Country' 2020-04-27 19:12:00,744 ERROR libqtile hook.py:fire():L373 Error in hook focus_change Traceback (most recent call last): File "/usr/lib/python3.8/site-packages/libqtile/hook.py", line 371, in fire i(*args, **kwargs) File "/usr/lib/python3.8/site-packages/libqtile/widget/windowname.py", line 67, in update self.text = "%s%s" % (state, w.name if w and w.name else " ") File "/usr/lib/python3.8/site-packages/libqtile/widget/base.py", line 323, in text self.layout.text = self.formatted_text File "/usr/lib/python3.8/site-packages/libqtile/drawer.py", line 70, in text attrlist, value, accel_char = pangocffi.parse_markup(value) File "/usr/lib/python3.8/site-packages/libqtile/pangocffi.py", line 173, in parse_markup raise Exception("parse_markup() failed for %s" % value) Exception: parse_markup() failed for b'Seth Lakeman - King & Country' # Configuration N/A --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `libqtile/widget/windowname.py` Content: ``` 1 # Copyright (c) 2008, 2010 Aldo Cortesi 2 # Copyright (c) 2010 matt 3 # Copyright (c) 2011 Mounier Florian 4 # Copyright (c) 2012 Tim Neumann 5 # Copyright (c) 2013 Craig Barnes 6 # Copyright (c) 2014 Sean Vig 7 # Copyright (c) 2014 Tycho Andersen 8 # 9 # Permission is hereby granted, free of charge, to any person obtaining a copy 10 # of this software and associated documentation files (the "Software"), to deal 11 # in the Software without restriction, including without limitation the rights 12 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 13 # copies of the Software, and to permit persons to whom the Software is 14 # furnished to do so, subject to the following conditions: 15 # 16 # The above copyright notice and this permission notice shall be included in 17 # all copies or substantial portions of the Software. 18 # 19 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 20 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 21 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 22 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 23 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 24 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 25 # SOFTWARE. 
26 27 from libqtile import bar, hook 28 from libqtile.widget import base 29 30 31 class WindowName(base._TextBox): 32 """Displays the name of the window that currently has focus""" 33 orientations = base.ORIENTATION_HORIZONTAL 34 defaults = [ 35 ('show_state', True, 'show window status before window name'), 36 ('for_current_screen', False, 'instead of this bars screen use currently active screen') 37 ] 38 39 def __init__(self, width=bar.STRETCH, **config): 40 base._TextBox.__init__(self, width=width, **config) 41 self.add_defaults(WindowName.defaults) 42 43 def _configure(self, qtile, bar): 44 base._TextBox._configure(self, qtile, bar) 45 hook.subscribe.client_name_updated(self.update) 46 hook.subscribe.focus_change(self.update) 47 hook.subscribe.float_change(self.update) 48 49 @hook.subscribe.current_screen_change 50 def on_screen_changed(): 51 if self.for_current_screen: 52 self.update() 53 54 def update(self, *args): 55 if self.for_current_screen: 56 w = self.qtile.current_screen.group.current_window 57 else: 58 w = self.bar.screen.group.current_window 59 state = '' 60 if self.show_state and w is not None: 61 if w.maximized: 62 state = '[] ' 63 elif w.minimized: 64 state = '_ ' 65 elif w.floating: 66 state = 'V ' 67 self.text = "%s%s" % (state, w.name if w and w.name else " ") 68 self.bar.draw() 69 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/libqtile/widget/windowname.py b/libqtile/widget/windowname.py --- a/libqtile/widget/windowname.py +++ b/libqtile/widget/windowname.py @@ -24,7 +24,7 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. -from libqtile import bar, hook +from libqtile import bar, hook, pangocffi from libqtile.widget import base @@ -64,5 +64,6 @@ state = '_ ' elif w.floating: state = 'V ' - self.text = "%s%s" % (state, w.name if w and w.name else " ") + unescaped = "%s%s" % (state, w.name if w and w.name else " ") + self.text = pangocffi.markup_escape_text(unescaped) self.bar.draw()
{"golden_diff": "diff --git a/libqtile/widget/windowname.py b/libqtile/widget/windowname.py\n--- a/libqtile/widget/windowname.py\n+++ b/libqtile/widget/windowname.py\n@@ -24,7 +24,7 @@\n # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n # SOFTWARE.\n \n-from libqtile import bar, hook\n+from libqtile import bar, hook, pangocffi\n from libqtile.widget import base\n \n \n@@ -64,5 +64,6 @@\n state = '_ '\n elif w.floating:\n state = 'V '\n- self.text = \"%s%s\" % (state, w.name if w and w.name else \" \")\n+ unescaped = \"%s%s\" % (state, w.name if w and w.name else \" \")\n+ self.text = pangocffi.markup_escape_text(unescaped)\n self.bar.draw()\n", "issue": "Ampersands need to be escaped in WindowName widget\n# Issue description\r\nAmpersands in window names need to be changed in WindowName widget to \"&amp;\" before being passed to Pango/Cairo\r\n\r\n# Qtile version\r\n0.15.1\r\n\r\n# Stack traces\r\nException: parse_markup() failed for b'Seth Lakeman - King & Country'\r\n2020-04-27 19:12:00,744 ERROR libqtile hook.py:fire():L373 Error in hook focus_change\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.8/site-packages/libqtile/hook.py\", line 371, in fire\r\n i(*args, **kwargs)\r\n File \"/usr/lib/python3.8/site-packages/libqtile/widget/windowname.py\", line 67, in update\r\n self.text = \"%s%s\" % (state, w.name if w and w.name else \" \")\r\n File \"/usr/lib/python3.8/site-packages/libqtile/widget/base.py\", line 323, in text\r\n self.layout.text = self.formatted_text\r\n File \"/usr/lib/python3.8/site-packages/libqtile/drawer.py\", line 70, in text\r\n attrlist, value, accel_char = pangocffi.parse_markup(value)\r\n File \"/usr/lib/python3.8/site-packages/libqtile/pangocffi.py\", line 173, in parse_markup\r\n raise Exception(\"parse_markup() failed for %s\" % value)\r\nException: parse_markup() failed for b'Seth Lakeman - King & Country'\r\n\r\n# Configuration\r\nN/A\n", "before_files": [{"content": "# Copyright (c) 2008, 2010 Aldo Cortesi\n# Copyright (c) 2010 matt\n# Copyright (c) 2011 Mounier Florian\n# Copyright (c) 2012 Tim Neumann\n# Copyright (c) 2013 Craig Barnes\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Tycho Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom libqtile import bar, hook\nfrom libqtile.widget import base\n\n\nclass WindowName(base._TextBox):\n \"\"\"Displays the name of the window that currently has focus\"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n ('show_state', True, 'show window status before window name'),\n ('for_current_screen', False, 'instead of this bars screen use currently active screen')\n ]\n\n def __init__(self, width=bar.STRETCH, **config):\n base._TextBox.__init__(self, width=width, **config)\n self.add_defaults(WindowName.defaults)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n hook.subscribe.client_name_updated(self.update)\n hook.subscribe.focus_change(self.update)\n hook.subscribe.float_change(self.update)\n\n @hook.subscribe.current_screen_change\n def on_screen_changed():\n if self.for_current_screen:\n self.update()\n\n def update(self, *args):\n if self.for_current_screen:\n w = self.qtile.current_screen.group.current_window\n else:\n w = self.bar.screen.group.current_window\n state = ''\n if self.show_state and w is not None:\n if w.maximized:\n state = '[] '\n elif w.minimized:\n state = '_ '\n elif w.floating:\n state = 'V '\n self.text = \"%s%s\" % (state, w.name if w and w.name else \" \")\n self.bar.draw()\n", "path": "libqtile/widget/windowname.py"}], "after_files": [{"content": "# Copyright (c) 2008, 2010 Aldo Cortesi\n# Copyright (c) 2010 matt\n# Copyright (c) 2011 Mounier Florian\n# Copyright (c) 2012 Tim Neumann\n# Copyright (c) 2013 Craig Barnes\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Tycho Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom libqtile import bar, hook, pangocffi\nfrom libqtile.widget import base\n\n\nclass WindowName(base._TextBox):\n \"\"\"Displays the name of the window that currently has focus\"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n ('show_state', True, 'show window status before window name'),\n ('for_current_screen', False, 'instead of this bars screen use currently active screen')\n ]\n\n def __init__(self, width=bar.STRETCH, **config):\n base._TextBox.__init__(self, width=width, **config)\n self.add_defaults(WindowName.defaults)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n hook.subscribe.client_name_updated(self.update)\n hook.subscribe.focus_change(self.update)\n hook.subscribe.float_change(self.update)\n\n @hook.subscribe.current_screen_change\n def on_screen_changed():\n if self.for_current_screen:\n self.update()\n\n def update(self, *args):\n if self.for_current_screen:\n w = self.qtile.current_screen.group.current_window\n else:\n w = self.bar.screen.group.current_window\n state = ''\n if self.show_state and w is not None:\n if w.maximized:\n state = '[] '\n elif w.minimized:\n state = '_ '\n elif w.floating:\n state = 'V '\n unescaped = \"%s%s\" % (state, w.name if w and w.name else \" \")\n self.text = pangocffi.markup_escape_text(unescaped)\n self.bar.draw()\n", "path": "libqtile/widget/windowname.py"}]}
1,395
201
gh_patches_debug_10207
rasdani/github-patches
git_diff
certbot__certbot-2248
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- VersionConflict: ConfigArgParse 0.9.3 Hello, I tried renewing our certificates today, and I got an error from letsencrypt-auto. The error message was: VersionConflict: (ConfigArgParse 0.9.3 (/home/bogdanb/.local/share/letsencrypt/lib/python2.7/site-packages), Requirement.parse('ConfigArgParse>=0.10.0')) Please see the two attached files for the logs: [execution.log.txt](https://github.com/letsencrypt/letsencrypt/files/97381/execution.log.txt) [letencrypt.log.txt](https://github.com/letsencrypt/letsencrypt/files/97383/letencrypt.log.txt) This is after I had deleted the ~/.local folder where letsencrypt stores its downloads. (I had the same exception before.) This is on an Ubuntu 15.04 machine: $ uname -a Linux qotilabs-dime 2.6.32-042stab112.15 #1 SMP Tue Oct 20 17:22:56 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 import codecs 2 import os 3 import re 4 import sys 5 6 from setuptools import setup 7 from setuptools import find_packages 8 9 # Workaround for http://bugs.python.org/issue8876, see 10 # http://bugs.python.org/issue8876#msg208792 11 # This can be removed when using Python 2.7.9 or later: 12 # https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS 13 if os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant': 14 del os.link 15 16 17 def read_file(filename, encoding='utf8'): 18 """Read unicode from given file.""" 19 with codecs.open(filename, encoding=encoding) as fd: 20 return fd.read() 21 22 23 here = os.path.abspath(os.path.dirname(__file__)) 24 25 # read version number (and other metadata) from package init 26 init_fn = os.path.join(here, 'letsencrypt', '__init__.py') 27 meta = dict(re.findall(r"""__([a-z]+)__ = '([^']+)""", read_file(init_fn))) 28 29 readme = read_file(os.path.join(here, 'README.rst')) 30 changes = read_file(os.path.join(here, 'CHANGES.rst')) 31 version = meta['version'] 32 33 # Please update tox.ini when modifying dependency version requirements 34 install_requires = [ 35 'acme=={0}'.format(version), 36 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17 37 'configobj', 38 'cryptography>=0.7', # load_pem_x509_certificate 39 'parsedatetime', 40 'psutil>=2.1.0', # net_connections introduced in 2.1.0 41 'PyOpenSSL', 42 'pyrfc3339', 43 'python2-pythondialog>=3.2.2rc1', # Debian squeeze support, cf. #280 44 'pytz', 45 'setuptools', # pkg_resources 46 'six', 47 'zope.component', 48 'zope.interface', 49 ] 50 51 # env markers in extras_require cause problems with older pip: #517 52 # Keep in sync with conditional_requirements.py. 
53 if sys.version_info < (2, 7): 54 install_requires.extend([ 55 # only some distros recognize stdlib argparse as already satisfying 56 'argparse', 57 'mock<1.1.0', 58 ]) 59 else: 60 install_requires.append('mock') 61 62 dev_extras = [ 63 # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289 64 'astroid==1.3.5', 65 'pylint==1.4.2', # upstream #248 66 'twine', 67 'wheel', 68 ] 69 70 docs_extras = [ 71 'repoze.sphinx.autointerface', 72 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags 73 'sphinx_rtd_theme', 74 'sphinxcontrib-programoutput', 75 ] 76 77 testing_extras = [ 78 'coverage', 79 'nose', 80 'nosexcover', 81 'pep8', 82 'tox', 83 ] 84 85 setup( 86 name='letsencrypt', 87 version=version, 88 description="Let's Encrypt client", 89 long_description=readme, # later: + '\n\n' + changes 90 url='https://github.com/letsencrypt/letsencrypt', 91 author="Let's Encrypt Project", 92 author_email='[email protected]', 93 license='Apache License 2.0', 94 classifiers=[ 95 'Development Status :: 3 - Alpha', 96 'Environment :: Console', 97 'Environment :: Console :: Curses', 98 'Intended Audience :: System Administrators', 99 'License :: OSI Approved :: Apache Software License', 100 'Operating System :: POSIX :: Linux', 101 'Programming Language :: Python', 102 'Programming Language :: Python :: 2', 103 'Programming Language :: Python :: 2.6', 104 'Programming Language :: Python :: 2.7', 105 'Topic :: Internet :: WWW/HTTP', 106 'Topic :: Security', 107 'Topic :: System :: Installation/Setup', 108 'Topic :: System :: Networking', 109 'Topic :: System :: Systems Administration', 110 'Topic :: Utilities', 111 ], 112 113 packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']), 114 include_package_data=True, 115 116 install_requires=install_requires, 117 extras_require={ 118 'dev': dev_extras, 119 'docs': docs_extras, 120 'testing': testing_extras, 121 }, 122 123 # to test all packages run "python setup.py test -s 124 # {acme,letsencrypt_apache,letsencrypt_nginx}" 125 test_suite='letsencrypt', 126 127 entry_points={ 128 'console_scripts': [ 129 'letsencrypt = letsencrypt.cli:main', 130 'letsencrypt-renewer = letsencrypt.renewer:main', 131 ], 132 'letsencrypt.plugins': [ 133 'manual = letsencrypt.plugins.manual:Authenticator', 134 'null = letsencrypt.plugins.null:Installer', 135 'standalone = letsencrypt.plugins.standalone:Authenticator', 136 'webroot = letsencrypt.plugins.webroot:Authenticator', 137 ], 138 }, 139 ) 140 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -33,7 +33,10 @@ # Please update tox.ini when modifying dependency version requirements install_requires = [ 'acme=={0}'.format(version), - 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17 + # We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but + # saying so here causes a runtime error against our temporary fork of 0.9.3 + # in which we added 2.6 support (see #2243), so we relax the requirement. + 'ConfigArgParse>=0.9.3', 'configobj', 'cryptography>=0.7', # load_pem_x509_certificate 'parsedatetime',
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,7 +33,10 @@\n # Please update tox.ini when modifying dependency version requirements\n install_requires = [\n 'acme=={0}'.format(version),\n- 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17\n+ # We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but\n+ # saying so here causes a runtime error against our temporary fork of 0.9.3\n+ # in which we added 2.6 support (see #2243), so we relax the requirement.\n+ 'ConfigArgParse>=0.9.3',\n 'configobj',\n 'cryptography>=0.7', # load_pem_x509_certificate\n 'parsedatetime',\n", "issue": "VersionConflict: ConfigArgParse 0.9.3\nHello,\n\nI tried renewing our certificates today, and I got an error from letsencrypt-auto. The error message was:\n\nVersionConflict: (ConfigArgParse 0.9.3 (/home/bogdanb/.local/share/letsencrypt/lib/python2.7/site-packages), Requirement.parse('ConfigArgParse>=0.10.0'))\n\nPlease see the two attached files for the logs:\n[execution.log.txt](https://github.com/letsencrypt/letsencrypt/files/97381/execution.log.txt)\n[letencrypt.log.txt](https://github.com/letsencrypt/letsencrypt/files/97383/letencrypt.log.txt)\n\nThis is after I had deleted the ~/.local folder where letsencrypt stores its downloads. (I had the same exception before.) This is on an Ubuntu 15.04 machine:\n\n$ uname -a\nLinux qotilabs-dime 2.6.32-042stab112.15 #1 SMP Tue Oct 20 17:22:56 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux\n\n", "before_files": [{"content": "import codecs\nimport os\nimport re\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n# Workaround for http://bugs.python.org/issue8876, see\n# http://bugs.python.org/issue8876#msg208792\n# This can be removed when using Python 2.7.9 or later:\n# https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS\nif os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':\n del os.link\n\n\ndef read_file(filename, encoding='utf8'):\n \"\"\"Read unicode from given file.\"\"\"\n with codecs.open(filename, encoding=encoding) as fd:\n return fd.read()\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, 'letsencrypt', '__init__.py')\nmeta = dict(re.findall(r\"\"\"__([a-z]+)__ = '([^']+)\"\"\", read_file(init_fn)))\n\nreadme = read_file(os.path.join(here, 'README.rst'))\nchanges = read_file(os.path.join(here, 'CHANGES.rst'))\nversion = meta['version']\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n 'acme=={0}'.format(version),\n 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17\n 'configobj',\n 'cryptography>=0.7', # load_pem_x509_certificate\n 'parsedatetime',\n 'psutil>=2.1.0', # net_connections introduced in 2.1.0\n 'PyOpenSSL',\n 'pyrfc3339',\n 'python2-pythondialog>=3.2.2rc1', # Debian squeeze support, cf. 
#280\n 'pytz',\n 'setuptools', # pkg_resources\n 'six',\n 'zope.component',\n 'zope.interface',\n]\n\n# env markers in extras_require cause problems with older pip: #517\n# Keep in sync with conditional_requirements.py.\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'mock<1.1.0',\n ])\nelse:\n install_requires.append('mock')\n\ndev_extras = [\n # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289\n 'astroid==1.3.5',\n 'pylint==1.4.2', # upstream #248\n 'twine',\n 'wheel',\n]\n\ndocs_extras = [\n 'repoze.sphinx.autointerface',\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n]\n\ntesting_extras = [\n 'coverage',\n 'nose',\n 'nosexcover',\n 'pep8',\n 'tox',\n]\n\nsetup(\n name='letsencrypt',\n version=version,\n description=\"Let's Encrypt client\",\n long_description=readme, # later: + '\\n\\n' + changes\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Let's Encrypt Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),\n include_package_data=True,\n\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n 'testing': testing_extras,\n },\n\n # to test all packages run \"python setup.py test -s\n # {acme,letsencrypt_apache,letsencrypt_nginx}\"\n test_suite='letsencrypt',\n\n entry_points={\n 'console_scripts': [\n 'letsencrypt = letsencrypt.cli:main',\n 'letsencrypt-renewer = letsencrypt.renewer:main',\n ],\n 'letsencrypt.plugins': [\n 'manual = letsencrypt.plugins.manual:Authenticator',\n 'null = letsencrypt.plugins.null:Installer',\n 'standalone = letsencrypt.plugins.standalone:Authenticator',\n 'webroot = letsencrypt.plugins.webroot:Authenticator',\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "import codecs\nimport os\nimport re\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n# Workaround for http://bugs.python.org/issue8876, see\n# http://bugs.python.org/issue8876#msg208792\n# This can be removed when using Python 2.7.9 or later:\n# https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS\nif os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':\n del os.link\n\n\ndef read_file(filename, encoding='utf8'):\n \"\"\"Read unicode from given file.\"\"\"\n with codecs.open(filename, encoding=encoding) as fd:\n return fd.read()\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, 'letsencrypt', '__init__.py')\nmeta = dict(re.findall(r\"\"\"__([a-z]+)__ = '([^']+)\"\"\", read_file(init_fn)))\n\nreadme = read_file(os.path.join(here, 'README.rst'))\nchanges = 
read_file(os.path.join(here, 'CHANGES.rst'))\nversion = meta['version']\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n 'acme=={0}'.format(version),\n # We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but\n # saying so here causes a runtime error against our temporary fork of 0.9.3\n # in which we added 2.6 support (see #2243), so we relax the requirement.\n 'ConfigArgParse>=0.9.3',\n 'configobj',\n 'cryptography>=0.7', # load_pem_x509_certificate\n 'parsedatetime',\n 'psutil>=2.1.0', # net_connections introduced in 2.1.0\n 'PyOpenSSL',\n 'pyrfc3339',\n 'python2-pythondialog>=3.2.2rc1', # Debian squeeze support, cf. #280\n 'pytz',\n 'setuptools', # pkg_resources\n 'six',\n 'zope.component',\n 'zope.interface',\n]\n\n# env markers in extras_require cause problems with older pip: #517\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'mock<1.1.0',\n ])\nelse:\n install_requires.extend([\n 'mock',\n ])\n\ndev_extras = [\n # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289\n 'astroid==1.3.5',\n 'pylint==1.4.2', # upstream #248\n 'twine',\n 'wheel',\n]\n\ndocs_extras = [\n 'repoze.sphinx.autointerface',\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n]\n\ntesting_extras = [\n 'coverage',\n 'nose',\n 'nosexcover',\n 'pep8',\n 'tox',\n]\n\nsetup(\n name='letsencrypt',\n version=version,\n description=\"Let's Encrypt client\",\n long_description=readme, # later: + '\\n\\n' + changes\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Let's Encrypt Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),\n include_package_data=True,\n\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n 'testing': testing_extras,\n },\n\n tests_require=install_requires,\n # to test all packages run \"python setup.py test -s\n # {acme,letsencrypt_apache,letsencrypt_nginx}\"\n test_suite='letsencrypt',\n\n entry_points={\n 'console_scripts': [\n 'letsencrypt = letsencrypt.cli:main',\n 'letsencrypt-renewer = letsencrypt.renewer:main',\n ],\n 'letsencrypt.plugins': [\n 'manual = letsencrypt.plugins.manual:Authenticator',\n 'null = letsencrypt.plugins.null:Installer',\n 'standalone = letsencrypt.plugins.standalone:Authenticator',\n 'webroot = letsencrypt.plugins.webroot:Authenticator',\n ],\n },\n)\n", "path": "setup.py"}]}
1,980
203
gh_patches_debug_8833
rasdani/github-patches
git_diff
conan-io__conan-8912
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [bug] Conan client fails when server charset is specified. I'm experimenting with implementing a conan server. While testing said server I ran into problems because my server fully qualifies the "Content-Type" HTTP header to "application/json; charset=utf-8". The conan client though errors when talking to that server with this error: `Response from remote is not json, but 'application/json; charset=utf-8'.`. JSON content requires utf-8 encoding and charset, which makes the addition in the content-type of the charset optional by the server. But regardless of what the charset is, utf-8 or ascii, clients need to handle there being a charset being specified. Which the conan client doesn't appear to be doing. ### Environment Details (include every applicable attribute) * Operating System+version: 5.11.14-1-MANJARO #1 SMP PREEMPT Wed Apr 14 08:25:29 UTC 2021 x86_64 GNU/Linux * Compiler+version: NA * Conan version: 1.35.1 * Python version: 3.9.3 [bug] Conan client fails when server charset is specified. I'm experimenting with implementing a conan server. While testing said server I ran into problems because my server fully qualifies the "Content-Type" HTTP header to "application/json; charset=utf-8". The conan client though errors when talking to that server with this error: `Response from remote is not json, but 'application/json; charset=utf-8'.`. JSON content requires utf-8 encoding and charset, which makes the addition in the content-type of the charset optional by the server. But regardless of what the charset is, utf-8 or ascii, clients need to handle there being a charset being specified. Which the conan client doesn't appear to be doing. ### Environment Details (include every applicable attribute) * Operating System+version: 5.11.14-1-MANJARO #1 SMP PREEMPT Wed Apr 14 08:25:29 UTC 2021 x86_64 GNU/Linux * Compiler+version: NA * Conan version: 1.35.1 * Python version: 3.9.3 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `conans/client/rest/rest_client_common.py` Content: ``` 1 import json 2 3 from requests.auth import AuthBase, HTTPBasicAuth 4 5 from conans.client.rest import response_to_str 6 from conans.errors import (EXCEPTION_CODE_MAPPING, ConanException, 7 AuthenticationException, RecipeNotFoundException, 8 PackageNotFoundException) 9 from conans.model.ref import ConanFileReference 10 from conans.util.files import decode_text 11 from conans.util.log import logger 12 13 14 class JWTAuth(AuthBase): 15 """Attaches JWT Authentication to the given Request object.""" 16 17 def __init__(self, token): 18 self.token = token 19 20 def __call__(self, request): 21 if self.token: 22 request.headers['Authorization'] = "Bearer %s" % str(self.token) 23 return request 24 25 26 def get_exception_from_error(error_code): 27 tmp = {v: k for k, v in EXCEPTION_CODE_MAPPING.items() # All except NotFound 28 if k not in (RecipeNotFoundException, PackageNotFoundException)} 29 if error_code in tmp: 30 logger.debug("REST ERROR: %s" % str(tmp[error_code])) 31 return tmp[error_code] 32 else: 33 base_error = int(str(error_code)[0] + "00") 34 logger.debug("REST ERROR: %s" % str(base_error)) 35 try: 36 return tmp[base_error] 37 except KeyError: 38 return None 39 40 41 def handle_return_deserializer(deserializer=None): 42 """Decorator for rest api methods. 
43 Map exceptions and http return codes and deserialize if needed. 44 45 deserializer: Function for deserialize values""" 46 47 def handle_return(method): 48 def inner(*argc, **argv): 49 ret = method(*argc, **argv) 50 if ret.status_code != 200: 51 ret.charset = "utf-8" # To be able to access ret.text (ret.content are bytes) 52 text = ret.text if ret.status_code != 404 else "404 Not found" 53 raise get_exception_from_error(ret.status_code)(text) 54 return deserializer(ret.content) if deserializer else decode_text(ret.content) 55 56 return inner 57 58 return handle_return 59 60 61 class RestCommonMethods(object): 62 63 def __init__(self, remote_url, token, custom_headers, output, requester, config, verify_ssl, 64 artifacts_properties=None, matrix_params=False): 65 self.token = token 66 self.remote_url = remote_url 67 self.custom_headers = custom_headers 68 self._output = output 69 self.requester = requester 70 self._config = config 71 self.verify_ssl = verify_ssl 72 self._artifacts_properties = artifacts_properties 73 self._matrix_params = matrix_params 74 75 @property 76 def auth(self): 77 return JWTAuth(self.token) 78 79 @staticmethod 80 def _check_error_response(ret): 81 if ret.status_code == 401: 82 raise AuthenticationException("Wrong user or password") 83 # Cannot check content-type=text/html, conan server is doing it wrong 84 if not ret.ok or "html>" in str(ret.content): 85 raise ConanException("%s\n\nInvalid server response, check remote URL and " 86 "try again" % str(ret.content)) 87 88 def authenticate(self, user, password): 89 """Sends user + password to get: 90 - A plain response with a regular token (not supported refresh in the remote) and None 91 """ 92 auth = HTTPBasicAuth(user, password) 93 url = self.router.common_authenticate() 94 logger.debug("REST: Authenticate to get access_token: %s" % url) 95 ret = self.requester.get(url, auth=auth, headers=self.custom_headers, 96 verify=self.verify_ssl) 97 98 self._check_error_response(ret) 99 return decode_text(ret.content) 100 101 def authenticate_oauth(self, user, password): 102 """Sends user + password to get: 103 - A json with an access_token and a refresh token (if supported in the remote) 104 Artifactory >= 6.13.X 105 """ 106 url = self.router.oauth_authenticate() 107 auth = HTTPBasicAuth(user, password) 108 headers = {} 109 headers.update(self.custom_headers) 110 headers["Content-type"] = "application/x-www-form-urlencoded" 111 logger.debug("REST: Authenticating with OAUTH: %s" % url) 112 ret = self.requester.post(url, auth=auth, headers=headers, verify=self.verify_ssl) 113 self._check_error_response(ret) 114 115 data = ret.json() 116 access_token = data["access_token"] 117 refresh_token = data["refresh_token"] 118 logger.debug("REST: Obtained refresh and access tokens") 119 return access_token, refresh_token 120 121 def refresh_token(self, token, refresh_token): 122 """Sends access_token and the refresh_token to get a pair of 123 access_token and refresh token 124 125 Artifactory >= 6.13.X 126 """ 127 url = self.router.oauth_authenticate() 128 logger.debug("REST: Refreshing Token: %s" % url) 129 headers = {} 130 headers.update(self.custom_headers) 131 headers["Content-type"] = "application/x-www-form-urlencoded" 132 payload = {'access_token': token, 'refresh_token': refresh_token, 133 'grant_type': 'refresh_token'} 134 ret = self.requester.post(url, headers=headers, verify=self.verify_ssl, data=payload) 135 self._check_error_response(ret) 136 137 data = ret.json() 138 if "access_token" not in data: 139 logger.debug("REST: 
unexpected data from server: {}".format(data)) 140 raise ConanException("Error refreshing the token") 141 142 new_access_token = data["access_token"] 143 new_refresh_token = data["refresh_token"] 144 logger.debug("REST: Obtained new refresh and access tokens") 145 return new_access_token, new_refresh_token 146 147 @handle_return_deserializer() 148 def check_credentials(self): 149 """If token is not valid will raise AuthenticationException. 150 User will be asked for new user/pass""" 151 url = self.router.common_check_credentials() 152 logger.debug("REST: Check credentials: %s" % url) 153 ret = self.requester.get(url, auth=self.auth, headers=self.custom_headers, 154 verify=self.verify_ssl) 155 return ret 156 157 def server_capabilities(self, user=None, password=None): 158 """Get information about the server: status, version, type and capabilities""" 159 url = self.router.ping() 160 logger.debug("REST: ping: %s" % url) 161 if user and password: 162 # This can happen in "conan user" cmd. Instead of empty token, use HttpBasic 163 auth = HTTPBasicAuth(user, password) 164 else: 165 auth = self.auth 166 ret = self.requester.get(url, auth=auth, headers=self.custom_headers, verify=self.verify_ssl) 167 168 server_capabilities = ret.headers.get('X-Conan-Server-Capabilities', "") 169 if not server_capabilities and not ret.ok: 170 # Old Artifactory might return 401/403 without capabilities, we don't want 171 # to cache them #5687, so raise the exception and force authentication 172 raise get_exception_from_error(ret.status_code)(response_to_str(ret)) 173 174 return [cap.strip() for cap in server_capabilities.split(",") if cap] 175 176 def get_json(self, url, data=None, headers=None): 177 req_headers = self.custom_headers.copy() 178 req_headers.update(headers or {}) 179 if data: # POST request 180 req_headers.update({'Content-type': 'application/json', 181 'Accept': 'application/json'}) 182 logger.debug("REST: post: %s" % url) 183 response = self.requester.post(url, auth=self.auth, headers=req_headers, 184 verify=self.verify_ssl, 185 stream=True, 186 data=json.dumps(data)) 187 else: 188 logger.debug("REST: get: %s" % url) 189 response = self.requester.get(url, auth=self.auth, headers=req_headers, 190 verify=self.verify_ssl, 191 stream=True) 192 193 if response.status_code != 200: # Error message is text 194 response.charset = "utf-8" # To be able to access ret.text (ret.content are bytes) 195 raise get_exception_from_error(response.status_code)(response_to_str(response)) 196 197 content = decode_text(response.content) 198 content_type = response.headers.get("Content-Type") 199 if content_type != 'application/json': 200 raise ConanException("%s\n\nResponse from remote is not json, but '%s'" 201 % (content, content_type)) 202 203 try: # This can fail, if some proxy returns 200 and an html message 204 result = json.loads(content) 205 except Exception: 206 raise ConanException("Remote responded with broken json: %s" % content) 207 if not isinstance(result, dict): 208 raise ConanException("Unexpected server response %s" % result) 209 return result 210 211 def upload_recipe(self, ref, files_to_upload, deleted, retry, retry_wait): 212 if files_to_upload: 213 self._upload_recipe(ref, files_to_upload, retry, retry_wait) 214 if deleted: 215 self._remove_conanfile_files(ref, deleted) 216 217 def get_recipe_snapshot(self, ref): 218 # this method is used only for UPLOADING, then it requires the credentials 219 # Check of credentials is done in the uploader 220 url = self.router.recipe_snapshot(ref) 221 snap = 
self._get_snapshot(url) 222 return snap 223 224 def get_package_snapshot(self, pref): 225 # this method is also used to check the integrity of the package upstream 226 # while installing, so check_credentials is done in uploader. 227 url = self.router.package_snapshot(pref) 228 snap = self._get_snapshot(url) 229 return snap 230 231 def upload_package(self, pref, files_to_upload, deleted, retry, retry_wait): 232 if files_to_upload: 233 self._upload_package(pref, files_to_upload, retry, retry_wait) 234 if deleted: 235 raise Exception("This shouldn't be happening, deleted files " 236 "in local package present in remote: %s.\n Please, report it at " 237 "https://github.com/conan-io/conan/issues " % str(deleted)) 238 239 def search(self, pattern=None, ignorecase=True): 240 """ 241 the_files: dict with relative_path: content 242 """ 243 url = self.router.search(pattern, ignorecase) 244 response = self.get_json(url)["results"] 245 return [ConanFileReference.loads(reference) for reference in response] 246 247 def search_packages(self, ref, query): 248 """Client is filtering by the query""" 249 url = self.router.search_packages(ref, query) 250 package_infos = self.get_json(url) 251 return package_infos 252 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/conans/client/rest/rest_client_common.py b/conans/client/rest/rest_client_common.py --- a/conans/client/rest/rest_client_common.py +++ b/conans/client/rest/rest_client_common.py @@ -196,7 +196,7 @@ content = decode_text(response.content) content_type = response.headers.get("Content-Type") - if content_type != 'application/json': + if content_type != 'application/json' and content_type != 'application/json; charset=utf-8': raise ConanException("%s\n\nResponse from remote is not json, but '%s'" % (content, content_type))
{"golden_diff": "diff --git a/conans/client/rest/rest_client_common.py b/conans/client/rest/rest_client_common.py\n--- a/conans/client/rest/rest_client_common.py\n+++ b/conans/client/rest/rest_client_common.py\n@@ -196,7 +196,7 @@\n \n content = decode_text(response.content)\n content_type = response.headers.get(\"Content-Type\")\n- if content_type != 'application/json':\n+ if content_type != 'application/json' and content_type != 'application/json; charset=utf-8':\n raise ConanException(\"%s\\n\\nResponse from remote is not json, but '%s'\"\n % (content, content_type))\n", "issue": "[bug] Conan client fails when server charset is specified.\nI'm experimenting with implementing a conan server. While testing said server I ran into problems because my server fully qualifies the \"Content-Type\" HTTP header to \"application/json; charset=utf-8\". The conan client though errors when talking to that server with this error: `Response from remote is not json, but 'application/json; charset=utf-8'.`. JSON content requires utf-8 encoding and charset, which makes the addition in the content-type of the charset optional by the server. But regardless of what the charset is, utf-8 or ascii, clients need to handle there being a charset being specified. Which the conan client doesn't appear to be doing.\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: 5.11.14-1-MANJARO #1 SMP PREEMPT Wed Apr 14 08:25:29 UTC 2021 x86_64 GNU/Linux\r\n * Compiler+version: NA\r\n * Conan version: 1.35.1\r\n * Python version: 3.9.3\r\n\n[bug] Conan client fails when server charset is specified.\nI'm experimenting with implementing a conan server. While testing said server I ran into problems because my server fully qualifies the \"Content-Type\" HTTP header to \"application/json; charset=utf-8\". The conan client though errors when talking to that server with this error: `Response from remote is not json, but 'application/json; charset=utf-8'.`. JSON content requires utf-8 encoding and charset, which makes the addition in the content-type of the charset optional by the server. But regardless of what the charset is, utf-8 or ascii, clients need to handle there being a charset being specified. 
Which the conan client doesn't appear to be doing.\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: 5.11.14-1-MANJARO #1 SMP PREEMPT Wed Apr 14 08:25:29 UTC 2021 x86_64 GNU/Linux\r\n * Compiler+version: NA\r\n * Conan version: 1.35.1\r\n * Python version: 3.9.3\r\n\n", "before_files": [{"content": "import json\n\nfrom requests.auth import AuthBase, HTTPBasicAuth\n\nfrom conans.client.rest import response_to_str\nfrom conans.errors import (EXCEPTION_CODE_MAPPING, ConanException,\n AuthenticationException, RecipeNotFoundException,\n PackageNotFoundException)\nfrom conans.model.ref import ConanFileReference\nfrom conans.util.files import decode_text\nfrom conans.util.log import logger\n\n\nclass JWTAuth(AuthBase):\n \"\"\"Attaches JWT Authentication to the given Request object.\"\"\"\n\n def __init__(self, token):\n self.token = token\n\n def __call__(self, request):\n if self.token:\n request.headers['Authorization'] = \"Bearer %s\" % str(self.token)\n return request\n\n\ndef get_exception_from_error(error_code):\n tmp = {v: k for k, v in EXCEPTION_CODE_MAPPING.items() # All except NotFound\n if k not in (RecipeNotFoundException, PackageNotFoundException)}\n if error_code in tmp:\n logger.debug(\"REST ERROR: %s\" % str(tmp[error_code]))\n return tmp[error_code]\n else:\n base_error = int(str(error_code)[0] + \"00\")\n logger.debug(\"REST ERROR: %s\" % str(base_error))\n try:\n return tmp[base_error]\n except KeyError:\n return None\n\n\ndef handle_return_deserializer(deserializer=None):\n \"\"\"Decorator for rest api methods.\n Map exceptions and http return codes and deserialize if needed.\n\n deserializer: Function for deserialize values\"\"\"\n\n def handle_return(method):\n def inner(*argc, **argv):\n ret = method(*argc, **argv)\n if ret.status_code != 200:\n ret.charset = \"utf-8\" # To be able to access ret.text (ret.content are bytes)\n text = ret.text if ret.status_code != 404 else \"404 Not found\"\n raise get_exception_from_error(ret.status_code)(text)\n return deserializer(ret.content) if deserializer else decode_text(ret.content)\n\n return inner\n\n return handle_return\n\n\nclass RestCommonMethods(object):\n\n def __init__(self, remote_url, token, custom_headers, output, requester, config, verify_ssl,\n artifacts_properties=None, matrix_params=False):\n self.token = token\n self.remote_url = remote_url\n self.custom_headers = custom_headers\n self._output = output\n self.requester = requester\n self._config = config\n self.verify_ssl = verify_ssl\n self._artifacts_properties = artifacts_properties\n self._matrix_params = matrix_params\n\n @property\n def auth(self):\n return JWTAuth(self.token)\n\n @staticmethod\n def _check_error_response(ret):\n if ret.status_code == 401:\n raise AuthenticationException(\"Wrong user or password\")\n # Cannot check content-type=text/html, conan server is doing it wrong\n if not ret.ok or \"html>\" in str(ret.content):\n raise ConanException(\"%s\\n\\nInvalid server response, check remote URL and \"\n \"try again\" % str(ret.content))\n\n def authenticate(self, user, password):\n \"\"\"Sends user + password to get:\n - A plain response with a regular token (not supported refresh in the remote) and None\n \"\"\"\n auth = HTTPBasicAuth(user, password)\n url = self.router.common_authenticate()\n logger.debug(\"REST: Authenticate to get access_token: %s\" % url)\n ret = self.requester.get(url, auth=auth, headers=self.custom_headers,\n verify=self.verify_ssl)\n\n self._check_error_response(ret)\n 
return decode_text(ret.content)\n\n def authenticate_oauth(self, user, password):\n \"\"\"Sends user + password to get:\n - A json with an access_token and a refresh token (if supported in the remote)\n Artifactory >= 6.13.X\n \"\"\"\n url = self.router.oauth_authenticate()\n auth = HTTPBasicAuth(user, password)\n headers = {}\n headers.update(self.custom_headers)\n headers[\"Content-type\"] = \"application/x-www-form-urlencoded\"\n logger.debug(\"REST: Authenticating with OAUTH: %s\" % url)\n ret = self.requester.post(url, auth=auth, headers=headers, verify=self.verify_ssl)\n self._check_error_response(ret)\n\n data = ret.json()\n access_token = data[\"access_token\"]\n refresh_token = data[\"refresh_token\"]\n logger.debug(\"REST: Obtained refresh and access tokens\")\n return access_token, refresh_token\n\n def refresh_token(self, token, refresh_token):\n \"\"\"Sends access_token and the refresh_token to get a pair of\n access_token and refresh token\n\n Artifactory >= 6.13.X\n \"\"\"\n url = self.router.oauth_authenticate()\n logger.debug(\"REST: Refreshing Token: %s\" % url)\n headers = {}\n headers.update(self.custom_headers)\n headers[\"Content-type\"] = \"application/x-www-form-urlencoded\"\n payload = {'access_token': token, 'refresh_token': refresh_token,\n 'grant_type': 'refresh_token'}\n ret = self.requester.post(url, headers=headers, verify=self.verify_ssl, data=payload)\n self._check_error_response(ret)\n\n data = ret.json()\n if \"access_token\" not in data:\n logger.debug(\"REST: unexpected data from server: {}\".format(data))\n raise ConanException(\"Error refreshing the token\")\n\n new_access_token = data[\"access_token\"]\n new_refresh_token = data[\"refresh_token\"]\n logger.debug(\"REST: Obtained new refresh and access tokens\")\n return new_access_token, new_refresh_token\n\n @handle_return_deserializer()\n def check_credentials(self):\n \"\"\"If token is not valid will raise AuthenticationException.\n User will be asked for new user/pass\"\"\"\n url = self.router.common_check_credentials()\n logger.debug(\"REST: Check credentials: %s\" % url)\n ret = self.requester.get(url, auth=self.auth, headers=self.custom_headers,\n verify=self.verify_ssl)\n return ret\n\n def server_capabilities(self, user=None, password=None):\n \"\"\"Get information about the server: status, version, type and capabilities\"\"\"\n url = self.router.ping()\n logger.debug(\"REST: ping: %s\" % url)\n if user and password:\n # This can happen in \"conan user\" cmd. 
Instead of empty token, use HttpBasic\n auth = HTTPBasicAuth(user, password)\n else:\n auth = self.auth\n ret = self.requester.get(url, auth=auth, headers=self.custom_headers, verify=self.verify_ssl)\n\n server_capabilities = ret.headers.get('X-Conan-Server-Capabilities', \"\")\n if not server_capabilities and not ret.ok:\n # Old Artifactory might return 401/403 without capabilities, we don't want\n # to cache them #5687, so raise the exception and force authentication\n raise get_exception_from_error(ret.status_code)(response_to_str(ret))\n\n return [cap.strip() for cap in server_capabilities.split(\",\") if cap]\n\n def get_json(self, url, data=None, headers=None):\n req_headers = self.custom_headers.copy()\n req_headers.update(headers or {})\n if data: # POST request\n req_headers.update({'Content-type': 'application/json',\n 'Accept': 'application/json'})\n logger.debug(\"REST: post: %s\" % url)\n response = self.requester.post(url, auth=self.auth, headers=req_headers,\n verify=self.verify_ssl,\n stream=True,\n data=json.dumps(data))\n else:\n logger.debug(\"REST: get: %s\" % url)\n response = self.requester.get(url, auth=self.auth, headers=req_headers,\n verify=self.verify_ssl,\n stream=True)\n\n if response.status_code != 200: # Error message is text\n response.charset = \"utf-8\" # To be able to access ret.text (ret.content are bytes)\n raise get_exception_from_error(response.status_code)(response_to_str(response))\n\n content = decode_text(response.content)\n content_type = response.headers.get(\"Content-Type\")\n if content_type != 'application/json':\n raise ConanException(\"%s\\n\\nResponse from remote is not json, but '%s'\"\n % (content, content_type))\n\n try: # This can fail, if some proxy returns 200 and an html message\n result = json.loads(content)\n except Exception:\n raise ConanException(\"Remote responded with broken json: %s\" % content)\n if not isinstance(result, dict):\n raise ConanException(\"Unexpected server response %s\" % result)\n return result\n\n def upload_recipe(self, ref, files_to_upload, deleted, retry, retry_wait):\n if files_to_upload:\n self._upload_recipe(ref, files_to_upload, retry, retry_wait)\n if deleted:\n self._remove_conanfile_files(ref, deleted)\n\n def get_recipe_snapshot(self, ref):\n # this method is used only for UPLOADING, then it requires the credentials\n # Check of credentials is done in the uploader\n url = self.router.recipe_snapshot(ref)\n snap = self._get_snapshot(url)\n return snap\n\n def get_package_snapshot(self, pref):\n # this method is also used to check the integrity of the package upstream\n # while installing, so check_credentials is done in uploader.\n url = self.router.package_snapshot(pref)\n snap = self._get_snapshot(url)\n return snap\n\n def upload_package(self, pref, files_to_upload, deleted, retry, retry_wait):\n if files_to_upload:\n self._upload_package(pref, files_to_upload, retry, retry_wait)\n if deleted:\n raise Exception(\"This shouldn't be happening, deleted files \"\n \"in local package present in remote: %s.\\n Please, report it at \"\n \"https://github.com/conan-io/conan/issues \" % str(deleted))\n\n def search(self, pattern=None, ignorecase=True):\n \"\"\"\n the_files: dict with relative_path: content\n \"\"\"\n url = self.router.search(pattern, ignorecase)\n response = self.get_json(url)[\"results\"]\n return [ConanFileReference.loads(reference) for reference in response]\n\n def search_packages(self, ref, query):\n \"\"\"Client is filtering by the query\"\"\"\n url = 
self.router.search_packages(ref, query)\n package_infos = self.get_json(url)\n return package_infos\n", "path": "conans/client/rest/rest_client_common.py"}], "after_files": [{"content": "import json\n\nfrom requests.auth import AuthBase, HTTPBasicAuth\n\nfrom conans.client.rest import response_to_str\nfrom conans.errors import (EXCEPTION_CODE_MAPPING, ConanException,\n AuthenticationException, RecipeNotFoundException,\n PackageNotFoundException)\nfrom conans.model.ref import ConanFileReference\nfrom conans.util.files import decode_text\nfrom conans.util.log import logger\n\n\nclass JWTAuth(AuthBase):\n \"\"\"Attaches JWT Authentication to the given Request object.\"\"\"\n\n def __init__(self, token):\n self.token = token\n\n def __call__(self, request):\n if self.token:\n request.headers['Authorization'] = \"Bearer %s\" % str(self.token)\n return request\n\n\ndef get_exception_from_error(error_code):\n tmp = {v: k for k, v in EXCEPTION_CODE_MAPPING.items() # All except NotFound\n if k not in (RecipeNotFoundException, PackageNotFoundException)}\n if error_code in tmp:\n logger.debug(\"REST ERROR: %s\" % str(tmp[error_code]))\n return tmp[error_code]\n else:\n base_error = int(str(error_code)[0] + \"00\")\n logger.debug(\"REST ERROR: %s\" % str(base_error))\n try:\n return tmp[base_error]\n except KeyError:\n return None\n\n\ndef handle_return_deserializer(deserializer=None):\n \"\"\"Decorator for rest api methods.\n Map exceptions and http return codes and deserialize if needed.\n\n deserializer: Function for deserialize values\"\"\"\n\n def handle_return(method):\n def inner(*argc, **argv):\n ret = method(*argc, **argv)\n if ret.status_code != 200:\n ret.charset = \"utf-8\" # To be able to access ret.text (ret.content are bytes)\n text = ret.text if ret.status_code != 404 else \"404 Not found\"\n raise get_exception_from_error(ret.status_code)(text)\n return deserializer(ret.content) if deserializer else decode_text(ret.content)\n\n return inner\n\n return handle_return\n\n\nclass RestCommonMethods(object):\n\n def __init__(self, remote_url, token, custom_headers, output, requester, config, verify_ssl,\n artifacts_properties=None, matrix_params=False):\n self.token = token\n self.remote_url = remote_url\n self.custom_headers = custom_headers\n self._output = output\n self.requester = requester\n self._config = config\n self.verify_ssl = verify_ssl\n self._artifacts_properties = artifacts_properties\n self._matrix_params = matrix_params\n\n @property\n def auth(self):\n return JWTAuth(self.token)\n\n @staticmethod\n def _check_error_response(ret):\n if ret.status_code == 401:\n raise AuthenticationException(\"Wrong user or password\")\n # Cannot check content-type=text/html, conan server is doing it wrong\n if not ret.ok or \"html>\" in str(ret.content):\n raise ConanException(\"%s\\n\\nInvalid server response, check remote URL and \"\n \"try again\" % str(ret.content))\n\n def authenticate(self, user, password):\n \"\"\"Sends user + password to get:\n - A plain response with a regular token (not supported refresh in the remote) and None\n \"\"\"\n auth = HTTPBasicAuth(user, password)\n url = self.router.common_authenticate()\n logger.debug(\"REST: Authenticate to get access_token: %s\" % url)\n ret = self.requester.get(url, auth=auth, headers=self.custom_headers,\n verify=self.verify_ssl)\n\n self._check_error_response(ret)\n return decode_text(ret.content)\n\n def authenticate_oauth(self, user, password):\n \"\"\"Sends user + password to get:\n - A json with an access_token and a refresh 
token (if supported in the remote)\n Artifactory >= 6.13.X\n \"\"\"\n url = self.router.oauth_authenticate()\n auth = HTTPBasicAuth(user, password)\n headers = {}\n headers.update(self.custom_headers)\n headers[\"Content-type\"] = \"application/x-www-form-urlencoded\"\n logger.debug(\"REST: Authenticating with OAUTH: %s\" % url)\n ret = self.requester.post(url, auth=auth, headers=headers, verify=self.verify_ssl)\n self._check_error_response(ret)\n\n data = ret.json()\n access_token = data[\"access_token\"]\n refresh_token = data[\"refresh_token\"]\n logger.debug(\"REST: Obtained refresh and access tokens\")\n return access_token, refresh_token\n\n def refresh_token(self, token, refresh_token):\n \"\"\"Sends access_token and the refresh_token to get a pair of\n access_token and refresh token\n\n Artifactory >= 6.13.X\n \"\"\"\n url = self.router.oauth_authenticate()\n logger.debug(\"REST: Refreshing Token: %s\" % url)\n headers = {}\n headers.update(self.custom_headers)\n headers[\"Content-type\"] = \"application/x-www-form-urlencoded\"\n payload = {'access_token': token, 'refresh_token': refresh_token,\n 'grant_type': 'refresh_token'}\n ret = self.requester.post(url, headers=headers, verify=self.verify_ssl, data=payload)\n self._check_error_response(ret)\n\n data = ret.json()\n if \"access_token\" not in data:\n logger.debug(\"REST: unexpected data from server: {}\".format(data))\n raise ConanException(\"Error refreshing the token\")\n\n new_access_token = data[\"access_token\"]\n new_refresh_token = data[\"refresh_token\"]\n logger.debug(\"REST: Obtained new refresh and access tokens\")\n return new_access_token, new_refresh_token\n\n @handle_return_deserializer()\n def check_credentials(self):\n \"\"\"If token is not valid will raise AuthenticationException.\n User will be asked for new user/pass\"\"\"\n url = self.router.common_check_credentials()\n logger.debug(\"REST: Check credentials: %s\" % url)\n ret = self.requester.get(url, auth=self.auth, headers=self.custom_headers,\n verify=self.verify_ssl)\n return ret\n\n def server_capabilities(self, user=None, password=None):\n \"\"\"Get information about the server: status, version, type and capabilities\"\"\"\n url = self.router.ping()\n logger.debug(\"REST: ping: %s\" % url)\n if user and password:\n # This can happen in \"conan user\" cmd. 
Instead of empty token, use HttpBasic\n auth = HTTPBasicAuth(user, password)\n else:\n auth = self.auth\n ret = self.requester.get(url, auth=auth, headers=self.custom_headers, verify=self.verify_ssl)\n\n server_capabilities = ret.headers.get('X-Conan-Server-Capabilities', \"\")\n if not server_capabilities and not ret.ok:\n # Old Artifactory might return 401/403 without capabilities, we don't want\n # to cache them #5687, so raise the exception and force authentication\n raise get_exception_from_error(ret.status_code)(response_to_str(ret))\n\n return [cap.strip() for cap in server_capabilities.split(\",\") if cap]\n\n def get_json(self, url, data=None, headers=None):\n req_headers = self.custom_headers.copy()\n req_headers.update(headers or {})\n if data: # POST request\n req_headers.update({'Content-type': 'application/json',\n 'Accept': 'application/json'})\n logger.debug(\"REST: post: %s\" % url)\n response = self.requester.post(url, auth=self.auth, headers=req_headers,\n verify=self.verify_ssl,\n stream=True,\n data=json.dumps(data))\n else:\n logger.debug(\"REST: get: %s\" % url)\n response = self.requester.get(url, auth=self.auth, headers=req_headers,\n verify=self.verify_ssl,\n stream=True)\n\n if response.status_code != 200: # Error message is text\n response.charset = \"utf-8\" # To be able to access ret.text (ret.content are bytes)\n raise get_exception_from_error(response.status_code)(response_to_str(response))\n\n content = decode_text(response.content)\n content_type = response.headers.get(\"Content-Type\")\n if content_type != 'application/json' and content_type != 'application/json; charset=utf-8':\n raise ConanException(\"%s\\n\\nResponse from remote is not json, but '%s'\"\n % (content, content_type))\n\n try: # This can fail, if some proxy returns 200 and an html message\n result = json.loads(content)\n except Exception:\n raise ConanException(\"Remote responded with broken json: %s\" % content)\n if not isinstance(result, dict):\n raise ConanException(\"Unexpected server response %s\" % result)\n return result\n\n def upload_recipe(self, ref, files_to_upload, deleted, retry, retry_wait):\n if files_to_upload:\n self._upload_recipe(ref, files_to_upload, retry, retry_wait)\n if deleted:\n self._remove_conanfile_files(ref, deleted)\n\n def get_recipe_snapshot(self, ref):\n # this method is used only for UPLOADING, then it requires the credentials\n # Check of credentials is done in the uploader\n url = self.router.recipe_snapshot(ref)\n snap = self._get_snapshot(url)\n return snap\n\n def get_package_snapshot(self, pref):\n # this method is also used to check the integrity of the package upstream\n # while installing, so check_credentials is done in uploader.\n url = self.router.package_snapshot(pref)\n snap = self._get_snapshot(url)\n return snap\n\n def upload_package(self, pref, files_to_upload, deleted, retry, retry_wait):\n if files_to_upload:\n self._upload_package(pref, files_to_upload, retry, retry_wait)\n if deleted:\n raise Exception(\"This shouldn't be happening, deleted files \"\n \"in local package present in remote: %s.\\n Please, report it at \"\n \"https://github.com/conan-io/conan/issues \" % str(deleted))\n\n def search(self, pattern=None, ignorecase=True):\n \"\"\"\n the_files: dict with relative_path: content\n \"\"\"\n url = self.router.search(pattern, ignorecase)\n response = self.get_json(url)[\"results\"]\n return [ConanFileReference.loads(reference) for reference in response]\n\n def search_packages(self, ref, query):\n \"\"\"Client is filtering by 
the query\"\"\"\n url = self.router.search_packages(ref, query)\n package_infos = self.get_json(url)\n return package_infos\n", "path": "conans/client/rest/rest_client_common.py"}]}
3,659
138
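
A minimal standalone sketch of the idea behind the record above: compare only the media-type portion of the Content-Type header so an optional `charset` parameter is tolerated. This is an illustrative generalisation, not the exact change in the golden diff (which simply accepts the literal `application/json; charset=utf-8` string as a second case); the helper name and the test values are hypothetical.

```python
def is_json_content_type(content_type):
    """Return True when the media type is application/json, ignoring any
    optional parameters such as charset=utf-8."""
    if not content_type:
        return False
    # Keep only the media type, drop parameters after the first ';'
    media_type = content_type.split(";", 1)[0].strip().lower()
    return media_type == "application/json"


if __name__ == "__main__":
    assert is_json_content_type("application/json")
    assert is_json_content_type("application/json; charset=utf-8")
    assert not is_json_content_type("text/html; charset=utf-8")
    print("content-type checks passed")
```
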
gh_patches_debug_7527
rasdani/github-patches
git_diff
mindsdb__mindsdb-748
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Check if the python version is at least 3.6 on server start. On Server start check if Python version is < 3.8 if lower version, print message: ``` MindsDB server requires Python >= 3.8 to run Once you have Python 3.8 installed you can tun mindsdb as follows: 1. create and activate venv: python3.8 -m venv venv source venv/bin/activate 2. install MindsDB: pip3 install mindsdb 3. Run MindsDB python3.8 -m mindsdb More instructions in https://docs.mindsdb.com ``` and exit --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `mindsdb/__main__.py` Content: ``` 1 import atexit 2 import traceback 3 import sys 4 import os 5 import time 6 7 from pkg_resources import get_distribution 8 import torch.multiprocessing as mp 9 10 from mindsdb.utilities.config import Config 11 from mindsdb.interfaces.native.mindsdb import MindsdbNative 12 from mindsdb.interfaces.custom.custom_models import CustomModels 13 from mindsdb.api.http.start import start as start_http 14 from mindsdb.api.mysql.start import start as start_mysql 15 from mindsdb.api.mongo.start import start as start_mongo 16 from mindsdb.utilities.fs import get_or_create_dir_struct, update_versions_file 17 from mindsdb.utilities.ps import is_port_in_use 18 from mindsdb.interfaces.database.database import DatabaseWrapper 19 from mindsdb.utilities.functions import args_parse 20 21 22 def close_api_gracefully(p_arr): 23 for p in p_arr: 24 sys.stdout.flush() 25 p.terminate() 26 p.join() 27 sys.stdout.flush() 28 29 30 if __name__ == '__main__': 31 mp.freeze_support() 32 33 args = args_parse() 34 35 config_path = args.config 36 if config_path is None: 37 config_dir, _ = get_or_create_dir_struct() 38 config_path = os.path.join(config_dir, 'config.json') 39 40 config = Config(config_path) 41 42 from mindsdb.__about__ import __version__ as mindsdb_version 43 44 if args.version: 45 print(f'MindsDB {mindsdb_version}') 46 sys.exit(0) 47 48 try: 49 lightwood_version = get_distribution('lightwood').version 50 except Exception: 51 from lightwood.__about__ import __version__ as lightwood_version 52 53 try: 54 mindsdb_native_version = get_distribution('mindsdb_native').version 55 except Exception: 56 from mindsdb_native.__about__ import __version__ as mindsdb_native_version 57 58 if args.verbose: 59 config['log']['level']['console'] = 'INFO' 60 61 print(f'Configuration file:\n {config_path}') 62 print(f"Storage path:\n {config.paths['root']}") 63 64 print('Versions:') 65 print(f' - lightwood {lightwood_version}') 66 print(f' - MindsDB_native {mindsdb_native_version}') 67 print(f' - MindsDB {mindsdb_version}') 68 69 os.environ['MINDSDB_STORAGE_PATH'] = config.paths['predictors'] 70 if args.verbose is True: 71 os.environ['DEFAULT_LOG_LEVEL'] = 'INFO' 72 os.environ['LIGHTWOOD_LOG_LEVEL'] = 'INFO' 73 else: 74 os.environ['DEFAULT_LOG_LEVEL'] = 'ERROR' 75 os.environ['LIGHTWOOD_LOG_LEVEL'] = 'ERROR' 76 77 update_versions_file( 78 config, 79 { 80 'lightwood': lightwood_version, 81 'mindsdb_native': mindsdb_native_version, 82 'mindsdb': mindsdb_version, 83 'python': sys.version.replace('\n', '') 84 } 85 ) 86 87 if args.api is None: 88 api_arr = ['http', 'mysql'] 89 else: 90 api_arr = args.api.split(',') 91 92 api_arr = [{ 93 'name': api, 94 'port': config['api'][api]['port'], 95 'started': False 96 } for api in api_arr] 97 98 for api in api_arr: 99 api_name = api['name'] 100 
if api_name not in config['api']: 101 print(f"Trying run '{api_name}' API, but is no config for this api.") 102 print(f"Please, fill config['api']['{api_name}']") 103 sys.exit(0) 104 105 start_functions = { 106 'http': start_http, 107 'mysql': start_mysql, 108 'mongodb': start_mongo 109 } 110 111 mdb = MindsdbNative(config) 112 cst = CustomModels(config) 113 # @TODO Maybe just use `get_model_data` directly here ? Seems like a useless abstraction 114 model_data_arr = [ 115 { 116 'name': x['name'], 117 'predict': x['predict'], 118 'data_analysis': mdb.get_model_data(x['name'])['data_analysis_v2'] 119 } for x in mdb.get_models() 120 ] 121 122 for m in model_data_arr: 123 if 'columns_to_ignore' in m['data_analysis']: 124 del m['data_analysis']['columns_to_ignore'] 125 if 'train_std_dev' in m['data_analysis']: 126 del m['data_analysis']['train_std_dev'] 127 128 model_data_arr.extend(cst.get_models()) 129 130 dbw = DatabaseWrapper(config) 131 dbw.register_predictors(model_data_arr) 132 133 for broken_name in [name for name, connected in dbw.check_connections().items() if connected is False]: 134 print(f'Error failed to integrate with database aliased: {broken_name}') 135 136 p_arr = [] 137 ctx = mp.get_context('spawn') 138 139 for api in api_arr: 140 api_name = api['name'] 141 print(f'{api_name} API: starting...') 142 try: 143 p = ctx.Process(target=start_functions[api_name], args=(config_path, args.verbose)) 144 p.start() 145 p_arr.append(p) 146 except Exception as e: 147 close_api_gracefully(p_arr) 148 print(f'Failed to start {api_name} API with exception {e}') 149 print(traceback.format_exc()) 150 raise 151 152 atexit.register(close_api_gracefully, p_arr=p_arr) 153 154 timeout = 15 155 start_time = time.time() 156 all_started = False 157 while (time.time() - start_time) < timeout and all_started is False: 158 all_started = True 159 for i, api in enumerate(api_arr): 160 try: 161 in_use = api['started'] or is_port_in_use(api['port']) 162 except Exception: 163 # NOTE that hotfix for OSX: is_port_in_use will raise AccessDenied error if it runned not as sudo 164 in_use = True 165 if in_use and api['started'] != in_use: 166 api['started'] = in_use 167 print(f"{api['name']} API: started on {api['port']}") 168 all_started = all_started and in_use 169 time.sleep(0.5) 170 171 for p in p_arr: 172 p.join() 173 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py --- a/mindsdb/__main__.py +++ b/mindsdb/__main__.py @@ -28,6 +28,28 @@ if __name__ == '__main__': + version_error_msg = """ +MindsDB server requires Python >= 3.6 to run + +Once you have Python 3.6 installed you can tun mindsdb as follows: + +1. create and activate venv: +python3.6 -m venv venv +source venv/bin/activate + +2. install MindsDB: +pip3 install mindsdb + +3. Run MindsDB +python3.6 -m mindsdb + +More instructions in https://docs.mindsdb.com + """ + + if not (sys.version_info[0] >= 3 and sys.version_info[1] >= 6): + print(version_error_msg) + exit(1) + mp.freeze_support() args = args_parse()
{"golden_diff": "diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py\n--- a/mindsdb/__main__.py\n+++ b/mindsdb/__main__.py\n@@ -28,6 +28,28 @@\n \n \n if __name__ == '__main__':\n+ version_error_msg = \"\"\"\n+MindsDB server requires Python >= 3.6 to run\n+\n+Once you have Python 3.6 installed you can tun mindsdb as follows:\n+\n+1. create and activate venv:\n+python3.6 -m venv venv\n+source venv/bin/activate\n+\n+2. install MindsDB:\n+pip3 install mindsdb\n+\n+3. Run MindsDB\n+python3.6 -m mindsdb\n+\n+More instructions in https://docs.mindsdb.com\n+ \"\"\"\n+\n+ if not (sys.version_info[0] >= 3 and sys.version_info[1] >= 6):\n+ print(version_error_msg)\n+ exit(1)\n+\n mp.freeze_support()\n \n args = args_parse()\n", "issue": "Check if the python version is at least 3.6 on server start.\nOn Server start check if Python version is < 3.8\r\n\r\nif lower version, print message:\r\n\r\n```\r\nMindsDB server requires Python >= 3.8 to run\r\n\r\nOnce you have Python 3.8 installed you can tun mindsdb as follows:\r\n\r\n1. create and activate venv:\r\n python3.8 -m venv venv\r\n source venv/bin/activate\r\n2. install MindsDB:\r\n pip3 install mindsdb\r\n3. Run MindsDB\r\n python3.8 -m mindsdb\r\n\r\nMore instructions in https://docs.mindsdb.com\r\n\r\n```\r\n\r\nand exit\n", "before_files": [{"content": "import atexit\nimport traceback\nimport sys\nimport os\nimport time\n\nfrom pkg_resources import get_distribution\nimport torch.multiprocessing as mp\n\nfrom mindsdb.utilities.config import Config\nfrom mindsdb.interfaces.native.mindsdb import MindsdbNative\nfrom mindsdb.interfaces.custom.custom_models import CustomModels\nfrom mindsdb.api.http.start import start as start_http\nfrom mindsdb.api.mysql.start import start as start_mysql\nfrom mindsdb.api.mongo.start import start as start_mongo\nfrom mindsdb.utilities.fs import get_or_create_dir_struct, update_versions_file\nfrom mindsdb.utilities.ps import is_port_in_use\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\nfrom mindsdb.utilities.functions import args_parse\n\n\ndef close_api_gracefully(p_arr):\n for p in p_arr:\n sys.stdout.flush()\n p.terminate()\n p.join()\n sys.stdout.flush()\n\n\nif __name__ == '__main__':\n mp.freeze_support()\n\n args = args_parse()\n\n config_path = args.config\n if config_path is None:\n config_dir, _ = get_or_create_dir_struct()\n config_path = os.path.join(config_dir, 'config.json')\n\n config = Config(config_path)\n\n from mindsdb.__about__ import __version__ as mindsdb_version\n\n if args.version:\n print(f'MindsDB {mindsdb_version}')\n sys.exit(0)\n\n try:\n lightwood_version = get_distribution('lightwood').version\n except Exception:\n from lightwood.__about__ import __version__ as lightwood_version\n\n try:\n mindsdb_native_version = get_distribution('mindsdb_native').version\n except Exception:\n from mindsdb_native.__about__ import __version__ as mindsdb_native_version\n\n if args.verbose:\n config['log']['level']['console'] = 'INFO'\n\n print(f'Configuration file:\\n {config_path}')\n print(f\"Storage path:\\n {config.paths['root']}\")\n\n print('Versions:')\n print(f' - lightwood {lightwood_version}')\n print(f' - MindsDB_native {mindsdb_native_version}')\n print(f' - MindsDB {mindsdb_version}')\n\n os.environ['MINDSDB_STORAGE_PATH'] = config.paths['predictors']\n if args.verbose is True:\n os.environ['DEFAULT_LOG_LEVEL'] = 'INFO'\n os.environ['LIGHTWOOD_LOG_LEVEL'] = 'INFO'\n else:\n os.environ['DEFAULT_LOG_LEVEL'] = 'ERROR'\n os.environ['LIGHTWOOD_LOG_LEVEL'] = 'ERROR'\n\n 
update_versions_file(\n config,\n {\n 'lightwood': lightwood_version,\n 'mindsdb_native': mindsdb_native_version,\n 'mindsdb': mindsdb_version,\n 'python': sys.version.replace('\\n', '')\n }\n )\n\n if args.api is None:\n api_arr = ['http', 'mysql']\n else:\n api_arr = args.api.split(',')\n\n api_arr = [{\n 'name': api,\n 'port': config['api'][api]['port'],\n 'started': False\n } for api in api_arr]\n\n for api in api_arr:\n api_name = api['name']\n if api_name not in config['api']:\n print(f\"Trying run '{api_name}' API, but is no config for this api.\")\n print(f\"Please, fill config['api']['{api_name}']\")\n sys.exit(0)\n\n start_functions = {\n 'http': start_http,\n 'mysql': start_mysql,\n 'mongodb': start_mongo\n }\n\n mdb = MindsdbNative(config)\n cst = CustomModels(config)\n # @TODO Maybe just use `get_model_data` directly here ? Seems like a useless abstraction\n model_data_arr = [\n {\n 'name': x['name'],\n 'predict': x['predict'],\n 'data_analysis': mdb.get_model_data(x['name'])['data_analysis_v2']\n } for x in mdb.get_models()\n ]\n\n for m in model_data_arr:\n if 'columns_to_ignore' in m['data_analysis']:\n del m['data_analysis']['columns_to_ignore']\n if 'train_std_dev' in m['data_analysis']:\n del m['data_analysis']['train_std_dev']\n\n model_data_arr.extend(cst.get_models())\n\n dbw = DatabaseWrapper(config)\n dbw.register_predictors(model_data_arr)\n\n for broken_name in [name for name, connected in dbw.check_connections().items() if connected is False]:\n print(f'Error failed to integrate with database aliased: {broken_name}')\n\n p_arr = []\n ctx = mp.get_context('spawn')\n\n for api in api_arr:\n api_name = api['name']\n print(f'{api_name} API: starting...')\n try:\n p = ctx.Process(target=start_functions[api_name], args=(config_path, args.verbose))\n p.start()\n p_arr.append(p)\n except Exception as e:\n close_api_gracefully(p_arr)\n print(f'Failed to start {api_name} API with exception {e}')\n print(traceback.format_exc())\n raise\n\n atexit.register(close_api_gracefully, p_arr=p_arr)\n\n timeout = 15\n start_time = time.time()\n all_started = False\n while (time.time() - start_time) < timeout and all_started is False:\n all_started = True\n for i, api in enumerate(api_arr):\n try:\n in_use = api['started'] or is_port_in_use(api['port'])\n except Exception:\n # NOTE that hotfix for OSX: is_port_in_use will raise AccessDenied error if it runned not as sudo\n in_use = True\n if in_use and api['started'] != in_use:\n api['started'] = in_use\n print(f\"{api['name']} API: started on {api['port']}\")\n all_started = all_started and in_use\n time.sleep(0.5)\n\n for p in p_arr:\n p.join()\n", "path": "mindsdb/__main__.py"}], "after_files": [{"content": "import atexit\nimport traceback\nimport sys\nimport os\nimport time\n\nfrom pkg_resources import get_distribution\nimport torch.multiprocessing as mp\n\nfrom mindsdb.utilities.config import Config\nfrom mindsdb.interfaces.native.mindsdb import MindsdbNative\nfrom mindsdb.interfaces.custom.custom_models import CustomModels\nfrom mindsdb.api.http.start import start as start_http\nfrom mindsdb.api.mysql.start import start as start_mysql\nfrom mindsdb.api.mongo.start import start as start_mongo\nfrom mindsdb.utilities.fs import get_or_create_dir_struct, update_versions_file\nfrom mindsdb.utilities.ps import is_port_in_use\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\nfrom mindsdb.utilities.functions import args_parse\n\n\ndef close_api_gracefully(p_arr):\n for p in p_arr:\n sys.stdout.flush()\n p.terminate()\n 
p.join()\n sys.stdout.flush()\n\n\nif __name__ == '__main__':\n version_error_msg = \"\"\"\nMindsDB server requires Python >= 3.6 to run\n\nOnce you have Python 3.6 installed you can tun mindsdb as follows:\n\n1. create and activate venv:\npython3.6 -m venv venv\nsource venv/bin/activate\n\n2. install MindsDB:\npip3 install mindsdb\n\n3. Run MindsDB\npython3.6 -m mindsdb\n\nMore instructions in https://docs.mindsdb.com\n \"\"\"\n\n if not (sys.version_info[0] >= 3 and sys.version_info[1] >= 6):\n print(version_error_msg)\n exit(1)\n\n mp.freeze_support()\n\n args = args_parse()\n\n config_path = args.config\n if config_path is None:\n config_dir, _ = get_or_create_dir_struct()\n config_path = os.path.join(config_dir, 'config.json')\n\n config = Config(config_path)\n\n from mindsdb.__about__ import __version__ as mindsdb_version\n\n if args.version:\n print(f'MindsDB {mindsdb_version}')\n sys.exit(0)\n\n try:\n lightwood_version = get_distribution('lightwood').version\n except Exception:\n from lightwood.__about__ import __version__ as lightwood_version\n\n try:\n mindsdb_native_version = get_distribution('mindsdb_native').version\n except Exception:\n from mindsdb_native.__about__ import __version__ as mindsdb_native_version\n\n if args.verbose:\n config['log']['level']['console'] = 'INFO'\n\n print(f'Configuration file:\\n {config_path}')\n print(f\"Storage path:\\n {config.paths['root']}\")\n\n print('Versions:')\n print(f' - lightwood {lightwood_version}')\n print(f' - MindsDB_native {mindsdb_native_version}')\n print(f' - MindsDB {mindsdb_version}')\n\n os.environ['MINDSDB_STORAGE_PATH'] = config.paths['predictors']\n if args.verbose is True:\n os.environ['DEFAULT_LOG_LEVEL'] = 'INFO'\n os.environ['LIGHTWOOD_LOG_LEVEL'] = 'INFO'\n else:\n os.environ['DEFAULT_LOG_LEVEL'] = 'ERROR'\n os.environ['LIGHTWOOD_LOG_LEVEL'] = 'ERROR'\n\n update_versions_file(\n config,\n {\n 'lightwood': lightwood_version,\n 'mindsdb_native': mindsdb_native_version,\n 'mindsdb': mindsdb_version,\n 'python': sys.version.replace('\\n', '')\n }\n )\n\n if args.api is None:\n api_arr = ['http', 'mysql']\n else:\n api_arr = args.api.split(',')\n\n api_arr = [{\n 'name': api,\n 'port': config['api'][api]['port'],\n 'started': False\n } for api in api_arr]\n\n for api in api_arr:\n api_name = api['name']\n if api_name not in config['api']:\n print(f\"Trying run '{api_name}' API, but is no config for this api.\")\n print(f\"Please, fill config['api']['{api_name}']\")\n sys.exit(0)\n\n start_functions = {\n 'http': start_http,\n 'mysql': start_mysql,\n 'mongodb': start_mongo\n }\n\n mdb = MindsdbNative(config)\n cst = CustomModels(config)\n # @TODO Maybe just use `get_model_data` directly here ? 
Seems like a useless abstraction\n model_data_arr = [\n {\n 'name': x['name'],\n 'predict': x['predict'],\n 'data_analysis': mdb.get_model_data(x['name'])['data_analysis_v2']\n } for x in mdb.get_models()\n ]\n\n for m in model_data_arr:\n if 'columns_to_ignore' in m['data_analysis']:\n del m['data_analysis']['columns_to_ignore']\n if 'train_std_dev' in m['data_analysis']:\n del m['data_analysis']['train_std_dev']\n\n model_data_arr.extend(cst.get_models())\n\n dbw = DatabaseWrapper(config)\n dbw.register_predictors(model_data_arr)\n\n for broken_name in [name for name, connected in dbw.check_connections().items() if connected is False]:\n print(f'Error failed to integrate with database aliased: {broken_name}')\n\n p_arr = []\n ctx = mp.get_context('spawn')\n\n for api in api_arr:\n api_name = api['name']\n print(f'{api_name} API: starting...')\n try:\n p = ctx.Process(target=start_functions[api_name], args=(config_path, args.verbose))\n p.start()\n p_arr.append(p)\n except Exception as e:\n close_api_gracefully(p_arr)\n print(f'Failed to start {api_name} API with exception {e}')\n print(traceback.format_exc())\n raise\n\n atexit.register(close_api_gracefully, p_arr=p_arr)\n\n timeout = 15\n start_time = time.time()\n all_started = False\n while (time.time() - start_time) < timeout and all_started is False:\n all_started = True\n for i, api in enumerate(api_arr):\n try:\n in_use = api['started'] or is_port_in_use(api['port'])\n except Exception:\n # NOTE that hotfix for OSX: is_port_in_use will raise AccessDenied error if it runned not as sudo\n in_use = True\n if in_use and api['started'] != in_use:\n api['started'] = in_use\n print(f\"{api['name']} API: started on {api['port']}\")\n all_started = all_started and in_use\n time.sleep(0.5)\n\n for p in p_arr:\n p.join()\n", "path": "mindsdb/__main__.py"}]}
2,120
234
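
A minimal standalone sketch of the startup guard idea from the record above, assuming the check runs before anything else in the entry point. It is not the exact patch (which prints a longer venv how-to message and compares major and minor versions separately); the required version and the message text here are placeholders.

```python
import sys

REQUIRED = (3, 6)  # placeholder minimum version

# Tuple comparison covers both the major and minor version in one check.
if sys.version_info[:2] < REQUIRED:
    print("This program requires Python >= %d.%d to run" % REQUIRED)
    sys.exit(1)

print("Python version OK:", sys.version.split()[0])
```
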
gh_patches_debug_15825
rasdani/github-patches
git_diff
zulip__zulip-13771
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Creation of temporary files in requirements/ can cause provision to fail An example to trigger this for me is was as follows: * `cd requirements/` * edit file using editor which creates temporary file in this location (eg vim, depending on configuration) * `tools/provision` * provision fails with an error like ``` UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcd in position 17: invalid continuation byte ``` This appears to be due to the venv management script not being able to handle the unexpected file produced by eg. vim. This is not a major issue, but is a bit of a strange issue to debug if you are not expecting it or are new, and potentially could be easy to fix. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `scripts/lib/clean_venv_cache.py` Content: ``` 1 #!/usr/bin/env python3 2 import argparse 3 import os 4 import sys 5 6 from typing import Set 7 8 ZULIP_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) 9 sys.path.append(ZULIP_PATH) 10 from scripts.lib.hash_reqs import expand_reqs, hash_deps 11 from scripts.lib.zulip_tools import \ 12 get_environment, get_recent_deployments, parse_cache_script_args, \ 13 purge_unused_caches 14 15 ENV = get_environment() 16 VENV_CACHE_DIR = '/srv/zulip-venv-cache' 17 if ENV == "travis": 18 VENV_CACHE_DIR = os.path.join(os.environ["HOME"], "zulip-venv-cache") 19 20 def get_caches_in_use(threshold_days): 21 # type: (int) -> Set[str] 22 setups_to_check = set([ZULIP_PATH, ]) 23 caches_in_use = set() 24 25 def add_current_venv_cache(venv_name: str) -> None: 26 CACHE_SYMLINK = os.path.join(os.path.dirname(ZULIP_PATH), venv_name) 27 CURRENT_CACHE = os.path.dirname(os.path.realpath(CACHE_SYMLINK)) 28 caches_in_use.add(CURRENT_CACHE) 29 30 if ENV == "prod": 31 setups_to_check |= get_recent_deployments(threshold_days) 32 if ENV == "dev": 33 add_current_venv_cache("zulip-py3-venv") 34 add_current_venv_cache("zulip-thumbor-venv") 35 36 for path in setups_to_check: 37 reqs_dir = os.path.join(path, "requirements") 38 # If the target directory doesn't contain a requirements 39 # directory, skip it to avoid throwing an exception trying to 40 # list its requirements subdirectory. 41 if not os.path.exists(reqs_dir): 42 continue 43 for filename in os.listdir(reqs_dir): 44 requirements_file = os.path.join(reqs_dir, filename) 45 deps = expand_reqs(requirements_file) 46 hash_val = hash_deps(deps) 47 caches_in_use.add(os.path.join(VENV_CACHE_DIR, hash_val)) 48 49 return caches_in_use 50 51 def main(args: argparse.Namespace) -> None: 52 caches_in_use = get_caches_in_use(args.threshold_days) 53 purge_unused_caches( 54 VENV_CACHE_DIR, caches_in_use, "venv cache", args) 55 56 if __name__ == "__main__": 57 args = parse_cache_script_args("This script cleans unused zulip venv caches.") 58 main(args) 59 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/scripts/lib/clean_venv_cache.py b/scripts/lib/clean_venv_cache.py --- a/scripts/lib/clean_venv_cache.py +++ b/scripts/lib/clean_venv_cache.py @@ -1,5 +1,6 @@ #!/usr/bin/env python3 import argparse +import glob import os import sys @@ -40,8 +41,8 @@ # list its requirements subdirectory. if not os.path.exists(reqs_dir): continue - for filename in os.listdir(reqs_dir): - requirements_file = os.path.join(reqs_dir, filename) + requirements_files = glob.glob(os.path.join(reqs_dir, "*.txt")) + for requirements_file in requirements_files: deps = expand_reqs(requirements_file) hash_val = hash_deps(deps) caches_in_use.add(os.path.join(VENV_CACHE_DIR, hash_val))
{"golden_diff": "diff --git a/scripts/lib/clean_venv_cache.py b/scripts/lib/clean_venv_cache.py\n--- a/scripts/lib/clean_venv_cache.py\n+++ b/scripts/lib/clean_venv_cache.py\n@@ -1,5 +1,6 @@\n #!/usr/bin/env python3\n import argparse\n+import glob\n import os\n import sys\n \n@@ -40,8 +41,8 @@\n # list its requirements subdirectory.\n if not os.path.exists(reqs_dir):\n continue\n- for filename in os.listdir(reqs_dir):\n- requirements_file = os.path.join(reqs_dir, filename)\n+ requirements_files = glob.glob(os.path.join(reqs_dir, \"*.txt\"))\n+ for requirements_file in requirements_files:\n deps = expand_reqs(requirements_file)\n hash_val = hash_deps(deps)\n caches_in_use.add(os.path.join(VENV_CACHE_DIR, hash_val))\n", "issue": "Creation of temporary files in requirements/ can cause provision to fail\nAn example to trigger this for me is was as follows:\r\n* `cd requirements/`\r\n* edit file using editor which creates temporary file in this location (eg vim, depending on configuration)\r\n* `tools/provision`\r\n* provision fails with an error like\r\n```\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xcd in position 17: invalid continuation byte\r\n```\r\n\r\nThis appears to be due to the venv management script not being able to handle the unexpected file produced by eg. vim.\r\n\r\nThis is not a major issue, but is a bit of a strange issue to debug if you are not expecting it or are new, and potentially could be easy to fix.\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport argparse\nimport os\nimport sys\n\nfrom typing import Set\n\nZULIP_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\nsys.path.append(ZULIP_PATH)\nfrom scripts.lib.hash_reqs import expand_reqs, hash_deps\nfrom scripts.lib.zulip_tools import \\\n get_environment, get_recent_deployments, parse_cache_script_args, \\\n purge_unused_caches\n\nENV = get_environment()\nVENV_CACHE_DIR = '/srv/zulip-venv-cache'\nif ENV == \"travis\":\n VENV_CACHE_DIR = os.path.join(os.environ[\"HOME\"], \"zulip-venv-cache\")\n\ndef get_caches_in_use(threshold_days):\n # type: (int) -> Set[str]\n setups_to_check = set([ZULIP_PATH, ])\n caches_in_use = set()\n\n def add_current_venv_cache(venv_name: str) -> None:\n CACHE_SYMLINK = os.path.join(os.path.dirname(ZULIP_PATH), venv_name)\n CURRENT_CACHE = os.path.dirname(os.path.realpath(CACHE_SYMLINK))\n caches_in_use.add(CURRENT_CACHE)\n\n if ENV == \"prod\":\n setups_to_check |= get_recent_deployments(threshold_days)\n if ENV == \"dev\":\n add_current_venv_cache(\"zulip-py3-venv\")\n add_current_venv_cache(\"zulip-thumbor-venv\")\n\n for path in setups_to_check:\n reqs_dir = os.path.join(path, \"requirements\")\n # If the target directory doesn't contain a requirements\n # directory, skip it to avoid throwing an exception trying to\n # list its requirements subdirectory.\n if not os.path.exists(reqs_dir):\n continue\n for filename in os.listdir(reqs_dir):\n requirements_file = os.path.join(reqs_dir, filename)\n deps = expand_reqs(requirements_file)\n hash_val = hash_deps(deps)\n caches_in_use.add(os.path.join(VENV_CACHE_DIR, hash_val))\n\n return caches_in_use\n\ndef main(args: argparse.Namespace) -> None:\n caches_in_use = get_caches_in_use(args.threshold_days)\n purge_unused_caches(\n VENV_CACHE_DIR, caches_in_use, \"venv cache\", args)\n\nif __name__ == \"__main__\":\n args = parse_cache_script_args(\"This script cleans unused zulip venv caches.\")\n main(args)\n", "path": "scripts/lib/clean_venv_cache.py"}], "after_files": [{"content": 
"#!/usr/bin/env python3\nimport argparse\nimport glob\nimport os\nimport sys\n\nfrom typing import Set\n\nZULIP_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\nsys.path.append(ZULIP_PATH)\nfrom scripts.lib.hash_reqs import expand_reqs, hash_deps\nfrom scripts.lib.zulip_tools import \\\n get_environment, get_recent_deployments, parse_cache_script_args, \\\n purge_unused_caches\n\nENV = get_environment()\nVENV_CACHE_DIR = '/srv/zulip-venv-cache'\nif ENV == \"travis\":\n VENV_CACHE_DIR = os.path.join(os.environ[\"HOME\"], \"zulip-venv-cache\")\n\ndef get_caches_in_use(threshold_days):\n # type: (int) -> Set[str]\n setups_to_check = set([ZULIP_PATH, ])\n caches_in_use = set()\n\n def add_current_venv_cache(venv_name: str) -> None:\n CACHE_SYMLINK = os.path.join(os.path.dirname(ZULIP_PATH), venv_name)\n CURRENT_CACHE = os.path.dirname(os.path.realpath(CACHE_SYMLINK))\n caches_in_use.add(CURRENT_CACHE)\n\n if ENV == \"prod\":\n setups_to_check |= get_recent_deployments(threshold_days)\n if ENV == \"dev\":\n add_current_venv_cache(\"zulip-py3-venv\")\n add_current_venv_cache(\"zulip-thumbor-venv\")\n\n for path in setups_to_check:\n reqs_dir = os.path.join(path, \"requirements\")\n # If the target directory doesn't contain a requirements\n # directory, skip it to avoid throwing an exception trying to\n # list its requirements subdirectory.\n if not os.path.exists(reqs_dir):\n continue\n requirements_files = glob.glob(os.path.join(reqs_dir, \"*.txt\"))\n for requirements_file in requirements_files:\n deps = expand_reqs(requirements_file)\n hash_val = hash_deps(deps)\n caches_in_use.add(os.path.join(VENV_CACHE_DIR, hash_val))\n\n return caches_in_use\n\ndef main(args: argparse.Namespace) -> None:\n caches_in_use = get_caches_in_use(args.threshold_days)\n purge_unused_caches(\n VENV_CACHE_DIR, caches_in_use, \"venv cache\", args)\n\nif __name__ == \"__main__\":\n args = parse_cache_script_args(\"This script cleans unused zulip venv caches.\")\n main(args)\n", "path": "scripts/lib/clean_venv_cache.py"}]}
1,060
198
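
A minimal standalone sketch of the filtering idea from the record above: scan a requirements directory with a `*.txt` glob instead of `os.listdir`, so editor swap files and other stray entries are never parsed. The directory path and the print statement are illustrative only.

```python
import glob
import os

reqs_dir = "requirements"  # hypothetical directory

# glob returns only the matching .txt files (and an empty list if none exist)
requirements_files = glob.glob(os.path.join(reqs_dir, "*.txt"))
for requirements_file in sorted(requirements_files):
    print("would process:", requirements_file)
```
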
gh_patches_debug_37464
rasdani/github-patches
git_diff
hydroshare__hydroshare-5088
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- track users' full email domain **Describe the feature you'd like and what it will do** In HS v2.5.4, we don't track full user email domain in our metrics. **Why is this feature important?** We need more insight into how HS' ecosystem of tools are being used. This information should drive our continued development on existing tools and our consideration of additions for future use. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `hs_tracking/middleware.py` Content: ``` 1 from django.utils.deprecation import MiddlewareMixin 2 3 from .models import Session 4 from . import utils 5 import re 6 7 RESOURCE_RE = re.compile('resource/([0-9a-f]{32})/') # parser for resource id 8 BAG_RE = re.compile('bags/([0-9a-f]{32})\.zip') # parser for resource id # noqa 9 LANDING_RE = re.compile('resource/([0-9a-f]{32})/$') # reference to resource home page 10 REST_RE = re.compile('/hsapi/') # reference to REST or internal 11 INTERNAL_RE = re.compile('/hsapi/_internal/') # reference to an internal page 12 13 14 def get_resource_id_from_url(path): 15 """ read a resource id from a URL """ 16 m = RESOURCE_RE.search(path) 17 if m and m.group(1): 18 return m.group(1) 19 m = BAG_RE.search(path) 20 if m and m.group(1): 21 return m.group(1) 22 return None 23 24 25 def get_rest_from_url(path): 26 """ determine whether a URL is a REST call or not 27 28 This should always return boolean, not search result. 29 """ 30 if REST_RE.search(path): 31 if INTERNAL_RE.search(path): 32 return False 33 else: 34 return True 35 else: 36 return False 37 38 39 def get_landing_from_url(path): 40 """ determine whether a URL is a landing page. 41 42 This should always return boolean, not search result. 
43 """ 44 if LANDING_RE.search(path): 45 return True 46 else: 47 return False 48 49 50 class Tracking(MiddlewareMixin): 51 """The default tracking middleware logs all successful responses as a 'visit' variable with 52 the URL path as its value.""" 53 54 def process_response(self, request, response): 55 56 # filter out heartbeat messages 57 if request.path.startswith('/heartbeat/'): 58 return response 59 60 # filter out web crawlers 61 is_human = getattr(request, 'is_human', False) 62 if not is_human: 63 return response 64 65 # filter out everything that is not an OK response 66 if response.status_code != 200: 67 return response 68 69 if not hasattr(request, 'user'): 70 return response 71 72 # get user info that will be recorded in the visit log 73 session = Session.objects.for_request(request) 74 usertype = utils.get_user_type(session) 75 emaildomain = utils.get_user_email_domain(session) 76 ip = utils.get_client_ip(request) 77 78 # build the message string (key:value pairs) 79 msg = '|'.join([str(item) for item in 80 ['user_ip=%s' % ip, 81 'http_method=%s' % request.method, 82 'http_code=%s' % response.status_code, 83 'user_type=%s' % usertype, 84 'user_email_domain=%s' % emaildomain, 85 'request_url=%s' % request.path]]) 86 87 resource_id = get_resource_id_from_url(request.path) 88 rest = get_rest_from_url(request.path) 89 landing = get_landing_from_url(request.path) 90 91 # save the activity in the database 92 session.record('visit', value=msg, resource_id=resource_id, 93 landing=landing, rest=rest) 94 95 return response 96 ``` Path: `hs_tracking/utils.py` Content: ``` 1 import robot_detection 2 from ipware.ip import get_ip 3 from hs_tools_resource.models import RequestUrlBase, RequestUrlBaseAggregation, RequestUrlBaseFile 4 from urllib.parse import urlparse 5 6 7 def get_client_ip(request): 8 return get_ip(request) 9 10 11 def get_user_type(session): 12 try: 13 user = session.visitor.user 14 usertype = user.userprofile.user_type 15 except AttributeError: 16 usertype = None 17 return usertype 18 19 20 def get_user_email_domain(session): 21 try: 22 user = session.visitor.user 23 emaildomain = user.email.split('@')[-1] 24 shortdomain = '.'.join(emaildomain.split('.')[1:]) 25 except AttributeError: 26 shortdomain = None 27 return shortdomain 28 29 30 def is_human(user_agent): 31 if robot_detection.is_robot(user_agent): 32 return False 33 return True 34 35 36 def get_std_log_fields(request, session=None): 37 """ returns a standard set of metadata that to each receiver function. 
38 This ensures that all activities are reporting a consistent set of metrics 39 """ 40 user_type = None 41 user_email = None 42 if session is not None: 43 user_type = get_user_type(session) 44 user_email = get_user_email_domain(session) 45 46 return { 47 'user_ip': get_client_ip(request), 48 'user_type': user_type, 49 'user_email_domain': user_email, 50 } 51 52 53 def authentic_redirect_url(url): 54 """ Validates a url scheme and netloc is in an existing web app 55 :param url: String of a url 56 :return: Boolean, True if the url exists in a web app 57 """ 58 if not url: 59 return False 60 u = urlparse(url) 61 url_base = "{}://{}".format(u.scheme, u.netloc) 62 return RequestUrlBase.objects.filter(value__startswith=url_base).exists() \ 63 or RequestUrlBaseAggregation.objects.filter(value__startswith=url_base).exists() \ 64 or RequestUrlBaseFile.objects.filter(value__startswith=url_base).exists() 65 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/hs_tracking/middleware.py b/hs_tracking/middleware.py --- a/hs_tracking/middleware.py +++ b/hs_tracking/middleware.py @@ -72,7 +72,7 @@ # get user info that will be recorded in the visit log session = Session.objects.for_request(request) usertype = utils.get_user_type(session) - emaildomain = utils.get_user_email_domain(session) + email_tld = utils.get_user_email_tld(session) ip = utils.get_client_ip(request) # build the message string (key:value pairs) @@ -81,7 +81,7 @@ 'http_method=%s' % request.method, 'http_code=%s' % response.status_code, 'user_type=%s' % usertype, - 'user_email_domain=%s' % emaildomain, + 'user_email_domain=%s' % email_tld, 'request_url=%s' % request.path]]) resource_id = get_resource_id_from_url(request.path) diff --git a/hs_tracking/utils.py b/hs_tracking/utils.py --- a/hs_tracking/utils.py +++ b/hs_tracking/utils.py @@ -21,10 +21,20 @@ try: user = session.visitor.user emaildomain = user.email.split('@')[-1] - shortdomain = '.'.join(emaildomain.split('.')[1:]) except AttributeError: - shortdomain = None - return shortdomain + emaildomain = None + return emaildomain + + +def get_user_email_tld(session, emaildomain=None): + try: + if not emaildomain: + emaildomain = get_user_email_domain(session) + if emaildomain: + shortdomain = '.'.join(emaildomain.split('.')[1:]) + return shortdomain + except AttributeError: + return None def is_human(user_agent): @@ -38,15 +48,18 @@ This ensures that all activities are reporting a consistent set of metrics """ user_type = None - user_email = None + user_email_tld = None + full_domain = None if session is not None: user_type = get_user_type(session) - user_email = get_user_email_domain(session) + full_domain = get_user_email_domain(session) + user_email_tld = get_user_email_tld(session, full_domain) return { 'user_ip': get_client_ip(request), 'user_type': user_type, - 'user_email_domain': user_email, + 'user_email_domain': user_email_tld, + 'user_email_domain_full': full_domain }
{"golden_diff": "diff --git a/hs_tracking/middleware.py b/hs_tracking/middleware.py\n--- a/hs_tracking/middleware.py\n+++ b/hs_tracking/middleware.py\n@@ -72,7 +72,7 @@\n # get user info that will be recorded in the visit log\n session = Session.objects.for_request(request)\n usertype = utils.get_user_type(session)\n- emaildomain = utils.get_user_email_domain(session)\n+ email_tld = utils.get_user_email_tld(session)\n ip = utils.get_client_ip(request)\n \n # build the message string (key:value pairs)\n@@ -81,7 +81,7 @@\n 'http_method=%s' % request.method,\n 'http_code=%s' % response.status_code,\n 'user_type=%s' % usertype,\n- 'user_email_domain=%s' % emaildomain,\n+ 'user_email_domain=%s' % email_tld,\n 'request_url=%s' % request.path]])\n \n resource_id = get_resource_id_from_url(request.path)\ndiff --git a/hs_tracking/utils.py b/hs_tracking/utils.py\n--- a/hs_tracking/utils.py\n+++ b/hs_tracking/utils.py\n@@ -21,10 +21,20 @@\n try:\n user = session.visitor.user\n emaildomain = user.email.split('@')[-1]\n- shortdomain = '.'.join(emaildomain.split('.')[1:])\n except AttributeError:\n- shortdomain = None\n- return shortdomain\n+ emaildomain = None\n+ return emaildomain\n+\n+\n+def get_user_email_tld(session, emaildomain=None):\n+ try:\n+ if not emaildomain:\n+ emaildomain = get_user_email_domain(session)\n+ if emaildomain:\n+ shortdomain = '.'.join(emaildomain.split('.')[1:])\n+ return shortdomain\n+ except AttributeError:\n+ return None\n \n \n def is_human(user_agent):\n@@ -38,15 +48,18 @@\n This ensures that all activities are reporting a consistent set of metrics\n \"\"\"\n user_type = None\n- user_email = None\n+ user_email_tld = None\n+ full_domain = None\n if session is not None:\n user_type = get_user_type(session)\n- user_email = get_user_email_domain(session)\n+ full_domain = get_user_email_domain(session)\n+ user_email_tld = get_user_email_tld(session, full_domain)\n \n return {\n 'user_ip': get_client_ip(request),\n 'user_type': user_type,\n- 'user_email_domain': user_email,\n+ 'user_email_domain': user_email_tld,\n+ 'user_email_domain_full': full_domain\n }\n", "issue": "track users' full email domain\n**Describe the feature you'd like and what it will do**\r\nIn HS v2.5.4, we don't track full user email domain in our metrics.\r\n\r\n\r\n**Why is this feature important?**\r\nWe need more insight into how HS' ecosystem of tools are being used. This information should drive our continued development on existing tools and our consideration of additions for future use.\r\n\n", "before_files": [{"content": "from django.utils.deprecation import MiddlewareMixin\n\nfrom .models import Session\nfrom . 
import utils\nimport re\n\nRESOURCE_RE = re.compile('resource/([0-9a-f]{32})/') # parser for resource id\nBAG_RE = re.compile('bags/([0-9a-f]{32})\\.zip') # parser for resource id # noqa\nLANDING_RE = re.compile('resource/([0-9a-f]{32})/$') # reference to resource home page\nREST_RE = re.compile('/hsapi/') # reference to REST or internal\nINTERNAL_RE = re.compile('/hsapi/_internal/') # reference to an internal page\n\n\ndef get_resource_id_from_url(path):\n \"\"\" read a resource id from a URL \"\"\"\n m = RESOURCE_RE.search(path)\n if m and m.group(1):\n return m.group(1)\n m = BAG_RE.search(path)\n if m and m.group(1):\n return m.group(1)\n return None\n\n\ndef get_rest_from_url(path):\n \"\"\" determine whether a URL is a REST call or not\n\n This should always return boolean, not search result.\n \"\"\"\n if REST_RE.search(path):\n if INTERNAL_RE.search(path):\n return False\n else:\n return True\n else:\n return False\n\n\ndef get_landing_from_url(path):\n \"\"\" determine whether a URL is a landing page.\n\n This should always return boolean, not search result.\n \"\"\"\n if LANDING_RE.search(path):\n return True\n else:\n return False\n\n\nclass Tracking(MiddlewareMixin):\n \"\"\"The default tracking middleware logs all successful responses as a 'visit' variable with\n the URL path as its value.\"\"\"\n\n def process_response(self, request, response):\n\n # filter out heartbeat messages\n if request.path.startswith('/heartbeat/'):\n return response\n\n # filter out web crawlers\n is_human = getattr(request, 'is_human', False)\n if not is_human:\n return response\n\n # filter out everything that is not an OK response\n if response.status_code != 200:\n return response\n\n if not hasattr(request, 'user'):\n return response\n\n # get user info that will be recorded in the visit log\n session = Session.objects.for_request(request)\n usertype = utils.get_user_type(session)\n emaildomain = utils.get_user_email_domain(session)\n ip = utils.get_client_ip(request)\n\n # build the message string (key:value pairs)\n msg = '|'.join([str(item) for item in\n ['user_ip=%s' % ip,\n 'http_method=%s' % request.method,\n 'http_code=%s' % response.status_code,\n 'user_type=%s' % usertype,\n 'user_email_domain=%s' % emaildomain,\n 'request_url=%s' % request.path]])\n\n resource_id = get_resource_id_from_url(request.path)\n rest = get_rest_from_url(request.path)\n landing = get_landing_from_url(request.path)\n\n # save the activity in the database\n session.record('visit', value=msg, resource_id=resource_id,\n landing=landing, rest=rest)\n\n return response\n", "path": "hs_tracking/middleware.py"}, {"content": "import robot_detection\nfrom ipware.ip import get_ip\nfrom hs_tools_resource.models import RequestUrlBase, RequestUrlBaseAggregation, RequestUrlBaseFile\nfrom urllib.parse import urlparse\n\n\ndef get_client_ip(request):\n return get_ip(request)\n\n\ndef get_user_type(session):\n try:\n user = session.visitor.user\n usertype = user.userprofile.user_type\n except AttributeError:\n usertype = None\n return usertype\n\n\ndef get_user_email_domain(session):\n try:\n user = session.visitor.user\n emaildomain = user.email.split('@')[-1]\n shortdomain = '.'.join(emaildomain.split('.')[1:])\n except AttributeError:\n shortdomain = None\n return shortdomain\n\n\ndef is_human(user_agent):\n if robot_detection.is_robot(user_agent):\n return False\n return True\n\n\ndef get_std_log_fields(request, session=None):\n \"\"\" returns a standard set of metadata that to each receiver function.\n This ensures that 
all activities are reporting a consistent set of metrics\n \"\"\"\n user_type = None\n user_email = None\n if session is not None:\n user_type = get_user_type(session)\n user_email = get_user_email_domain(session)\n\n return {\n 'user_ip': get_client_ip(request),\n 'user_type': user_type,\n 'user_email_domain': user_email,\n }\n\n\ndef authentic_redirect_url(url):\n \"\"\" Validates a url scheme and netloc is in an existing web app\n :param url: String of a url\n :return: Boolean, True if the url exists in a web app\n \"\"\"\n if not url:\n return False\n u = urlparse(url)\n url_base = \"{}://{}\".format(u.scheme, u.netloc)\n return RequestUrlBase.objects.filter(value__startswith=url_base).exists() \\\n or RequestUrlBaseAggregation.objects.filter(value__startswith=url_base).exists() \\\n or RequestUrlBaseFile.objects.filter(value__startswith=url_base).exists()\n", "path": "hs_tracking/utils.py"}], "after_files": [{"content": "from django.utils.deprecation import MiddlewareMixin\n\nfrom .models import Session\nfrom . import utils\nimport re\n\nRESOURCE_RE = re.compile('resource/([0-9a-f]{32})/') # parser for resource id\nBAG_RE = re.compile('bags/([0-9a-f]{32})\\.zip') # parser for resource id # noqa\nLANDING_RE = re.compile('resource/([0-9a-f]{32})/$') # reference to resource home page\nREST_RE = re.compile('/hsapi/') # reference to REST or internal\nINTERNAL_RE = re.compile('/hsapi/_internal/') # reference to an internal page\n\n\ndef get_resource_id_from_url(path):\n \"\"\" read a resource id from a URL \"\"\"\n m = RESOURCE_RE.search(path)\n if m and m.group(1):\n return m.group(1)\n m = BAG_RE.search(path)\n if m and m.group(1):\n return m.group(1)\n return None\n\n\ndef get_rest_from_url(path):\n \"\"\" determine whether a URL is a REST call or not\n\n This should always return boolean, not search result.\n \"\"\"\n if REST_RE.search(path):\n if INTERNAL_RE.search(path):\n return False\n else:\n return True\n else:\n return False\n\n\ndef get_landing_from_url(path):\n \"\"\" determine whether a URL is a landing page.\n\n This should always return boolean, not search result.\n \"\"\"\n if LANDING_RE.search(path):\n return True\n else:\n return False\n\n\nclass Tracking(MiddlewareMixin):\n \"\"\"The default tracking middleware logs all successful responses as a 'visit' variable with\n the URL path as its value.\"\"\"\n\n def process_response(self, request, response):\n\n # filter out heartbeat messages\n if request.path.startswith('/heartbeat/'):\n return response\n\n # filter out web crawlers\n is_human = getattr(request, 'is_human', False)\n if not is_human:\n return response\n\n # filter out everything that is not an OK response\n if response.status_code != 200:\n return response\n\n if not hasattr(request, 'user'):\n return response\n\n # get user info that will be recorded in the visit log\n session = Session.objects.for_request(request)\n usertype = utils.get_user_type(session)\n email_tld = utils.get_user_email_tld(session)\n ip = utils.get_client_ip(request)\n\n # build the message string (key:value pairs)\n msg = '|'.join([str(item) for item in\n ['user_ip=%s' % ip,\n 'http_method=%s' % request.method,\n 'http_code=%s' % response.status_code,\n 'user_type=%s' % usertype,\n 'user_email_domain=%s' % email_tld,\n 'request_url=%s' % request.path]])\n\n resource_id = get_resource_id_from_url(request.path)\n rest = get_rest_from_url(request.path)\n landing = get_landing_from_url(request.path)\n\n # save the activity in the database\n session.record('visit', value=msg, 
resource_id=resource_id,\n landing=landing, rest=rest)\n\n return response\n", "path": "hs_tracking/middleware.py"}, {"content": "import robot_detection\nfrom ipware.ip import get_ip\nfrom hs_tools_resource.models import RequestUrlBase, RequestUrlBaseAggregation, RequestUrlBaseFile\nfrom urllib.parse import urlparse\n\n\ndef get_client_ip(request):\n return get_ip(request)\n\n\ndef get_user_type(session):\n try:\n user = session.visitor.user\n usertype = user.userprofile.user_type\n except AttributeError:\n usertype = None\n return usertype\n\n\ndef get_user_email_domain(session):\n try:\n user = session.visitor.user\n emaildomain = user.email.split('@')[-1]\n except AttributeError:\n emaildomain = None\n return emaildomain\n\n\ndef get_user_email_tld(session, emaildomain=None):\n try:\n if not emaildomain:\n emaildomain = get_user_email_domain(session)\n if emaildomain:\n shortdomain = '.'.join(emaildomain.split('.')[1:])\n return shortdomain\n except AttributeError:\n return None\n\n\ndef is_human(user_agent):\n if robot_detection.is_robot(user_agent):\n return False\n return True\n\n\ndef get_std_log_fields(request, session=None):\n \"\"\" returns a standard set of metadata that to each receiver function.\n This ensures that all activities are reporting a consistent set of metrics\n \"\"\"\n user_type = None\n user_email_tld = None\n full_domain = None\n if session is not None:\n user_type = get_user_type(session)\n full_domain = get_user_email_domain(session)\n user_email_tld = get_user_email_tld(session, full_domain)\n\n return {\n 'user_ip': get_client_ip(request),\n 'user_type': user_type,\n 'user_email_domain': user_email_tld,\n 'user_email_domain_full': full_domain\n }\n\n\ndef authentic_redirect_url(url):\n \"\"\" Validates a url scheme and netloc is in an existing web app\n :param url: String of a url\n :return: Boolean, True if the url exists in a web app\n \"\"\"\n if not url:\n return False\n u = urlparse(url)\n url_base = \"{}://{}\".format(u.scheme, u.netloc)\n return RequestUrlBase.objects.filter(value__startswith=url_base).exists() \\\n or RequestUrlBaseAggregation.objects.filter(value__startswith=url_base).exists() \\\n or RequestUrlBaseFile.objects.filter(value__startswith=url_base).exists()\n", "path": "hs_tracking/utils.py"}]}
1,793
589
gh_patches_debug_36736
rasdani/github-patches
git_diff
Textualize__textual-2034
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Testing Using Pilot - `pilot.click()` Doesn't Have Effect On `Static` Widget Hi, I am trying to write tests for the example calculator app present in the repo (at the same time, expanding it to add a few more operators). This is my `CalculatorApp` class: ```python class CalculatorApp(App): CSS_PATH = "calculator.css" viewport = var("0") show_ac = var(True) left = var(Decimal("0")) right = var(Decimal("0")) value = var("") operator = var("plus") def watch_viewport(self, value: str) -> None: self.query_one("#viewport", Static).update(value) def compute_show_ac(self) -> bool: return self.value in ("", "0") and self.viewport == "0" def watch_show_ac(self, show_ac: bool) -> None: self.query_one("#ac").display = show_ac self.query_one("#c").display = not show_ac def compose(self) -> ComposeResult: with Container(id="calculator"): yield Static(id="viewport") yield Button("AC", id="ac", variant="primary") yield Button("C", id="c", variant="primary") yield Button("+/-", id="negation", variant="primary") yield Button("%", id="percent", variant="primary") yield Button("sin(x)", id="sine", variant="warning") yield Button("cos(x)", id="cosine", variant="warning") yield Button("7", id="number-7", variant="primary") yield Button("8", id="number-8", variant="primary") yield Button("9", id="number-9", variant="primary") yield Button("+", id="plus", variant="warning") yield Button("x^y", id="exponent", variant="warning") yield Button("4", id="number-4", variant="primary") yield Button("5", id="number-5", variant="primary") yield Button("6", id="number-6", variant="primary") yield Button("-", id="minus", variant="warning") yield Button("ln(x)", id="logarithm", variant="warning") yield Button("1", id="number-1", variant="primary") yield Button("2", id="number-2", variant="primary") yield Button("3", id="number-3", variant="primary") yield Button("*", id="multiply", variant="warning") yield Button("x!", id="factorial", variant="warning") yield Button("0", id="number-0", variant="primary") yield Button(".", id="point", variant="primary") yield Button("÷", id="divide", variant="warning") yield Button("=", id="equals", variant="warning") def on_button_pressed(self, event: Button.Pressed) -> None: button_id = event.button.id assert button_id is not None if button_id.startswith("number-"): number = button_id.split("-")[-1] self.viewport = self.value = self.value.lstrip("0") + number ``` I wrote the following test to check that clicking the number buttons results in the calculator's display (`Static(id="viewport")`) accumulating digits to make a number: ```python async def test_number_buttons(): async with CalculatorApp().run_test() as pilot: app = pilot.app await pilot.click("#number-1") display_content = app.query_one("#viewport").render() assert str(display_content) == "1" await pilot.click("#number-2") display_content = app.query_one("#viewport").render() assert str(display_content) == "12" await pilot.click("#number-3") display_content = app.query_one("#viewport").render() assert str(display_content) == "123" ``` While the GUI gets updated correctly on clicking the buttons, the test always fails since `app.query_one("#viewport").render()` always returns `"0"`. I've also tried replacing `app.query_one("#viewport").render()` with `app.query_one("#viewport", Static).render()` but that hasn't helped either. Is this supposed to happen? 
# Textual Diagnostics ## Versions | Name | Value | |---------|--------| | Textual | 0.14.0 | | Rich | 13.3.2 | ## Python | Name | Value | |----------------|-----------------------------------------------| | Version | 3.10.9 | | Implementation | CPython | | Compiler | GCC 11.2.0 | | Executable | /home/malay_agr/anaconda3/envs/spe/bin/python | ## Operating System | Name | Value | |---------|---------------------------------------------| | System | Linux | | Release | 5.15.0-52-generic | | Version | #58-Ubuntu SMP Thu Oct 13 08:03:55 UTC 2022 | ## Terminal | Name | Value | |----------------------|-----------------| | Terminal Application | vscode (1.76.1) | | TERM | xterm-256color | | COLORTERM | truecolor | | FORCE_COLOR | *Not set* | | NO_COLOR | *Not set* | ## Rich Console options | Name | Value | |----------------|----------------------| | size | width=197, height=19 | | legacy_windows | False | | min_width | 1 | | max_width | 197 | | is_terminal | True | | encoding | utf-8 | | max_height | 19 | | justify | None | | overflow | None | | no_wrap | False | | highlight | None | | markup | None | | height | None | --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/textual/pilot.py` Content: ``` 1 from __future__ import annotations 2 3 import asyncio 4 from typing import Any, Generic 5 6 import rich.repr 7 8 from ._wait import wait_for_idle 9 from .app import App, ReturnType 10 from .css.query import QueryType 11 from .events import Click, MouseDown, MouseMove, MouseUp 12 from .geometry import Offset 13 from .widget import Widget 14 15 16 def _get_mouse_message_arguments( 17 target: Widget, offset: Offset = Offset(), button: int = 0 18 ) -> dict[str, Any]: 19 """Get the arguments to pass into mouse messages for the click and hover methods.""" 20 x, y = offset 21 click_x, click_y, _, _ = target.region.translate(offset) 22 message_arguments = { 23 "x": x, 24 "y": y, 25 "delta_x": 0, 26 "delta_y": 0, 27 "button": button, 28 "shift": False, 29 "meta": False, 30 "ctrl": False, 31 "screen_x": click_x, 32 "screen_y": click_y, 33 } 34 return message_arguments 35 36 37 @rich.repr.auto(angular=True) 38 class Pilot(Generic[ReturnType]): 39 """Pilot object to drive an app.""" 40 41 def __init__(self, app: App[ReturnType]) -> None: 42 self._app = app 43 44 def __rich_repr__(self) -> rich.repr.Result: 45 yield "app", self._app 46 47 @property 48 def app(self) -> App[ReturnType]: 49 """App: A reference to the application.""" 50 return self._app 51 52 async def press(self, *keys: str) -> None: 53 """Simulate key-presses. 54 55 Args: 56 *keys: Keys to press. 57 58 """ 59 if keys: 60 await self._app._press_keys(keys) 61 62 async def click( 63 self, selector: QueryType | None = None, offset: Offset = Offset() 64 ) -> None: 65 """Simulate clicking with the mouse. 66 67 Args: 68 selector: The widget that should be clicked. If None, then the click 69 will occur relative to the screen. Note that this simply causes 70 a click to occur at the location of the widget. If the widget is 71 currently hidden or obscured by another widget, then the click may 72 not land on it. 73 offset: The offset to click within the selected widget. 
74 """ 75 app = self.app 76 screen = app.screen 77 if selector is not None: 78 target_widget = screen.query_one(selector) 79 else: 80 target_widget = screen 81 82 message_arguments = _get_mouse_message_arguments( 83 target_widget, offset, button=1 84 ) 85 app.post_message(MouseDown(**message_arguments)) 86 app.post_message(MouseUp(**message_arguments)) 87 app.post_message(Click(**message_arguments)) 88 await self.pause() 89 90 async def hover( 91 self, selector: QueryType | None = None, offset: Offset = Offset() 92 ) -> None: 93 """Simulate hovering with the mouse cursor. 94 95 Args: 96 selector: The widget that should be hovered. If None, then the click 97 will occur relative to the screen. Note that this simply causes 98 a hover to occur at the location of the widget. If the widget is 99 currently hidden or obscured by another widget, then the hover may 100 not land on it. 101 offset: The offset to hover over within the selected widget. 102 """ 103 app = self.app 104 screen = app.screen 105 if selector is not None: 106 target_widget = screen.query_one(selector) 107 else: 108 target_widget = screen 109 110 message_arguments = _get_mouse_message_arguments( 111 target_widget, offset, button=0 112 ) 113 app.post_message(MouseMove(**message_arguments)) 114 await self.pause() 115 116 async def pause(self, delay: float | None = None) -> None: 117 """Insert a pause. 118 119 Args: 120 delay: Seconds to pause, or None to wait for cpu idle. 121 """ 122 # These sleep zeros, are to force asyncio to give up a time-slice, 123 if delay is None: 124 await wait_for_idle(0) 125 else: 126 await asyncio.sleep(delay) 127 128 async def wait_for_animation(self) -> None: 129 """Wait for any current animation to complete.""" 130 await self._app.animator.wait_for_idle() 131 132 async def wait_for_scheduled_animations(self) -> None: 133 """Wait for any current and scheduled animations to complete.""" 134 await self._app.animator.wait_until_complete() 135 await wait_for_idle() 136 137 async def exit(self, result: ReturnType) -> None: 138 """Exit the app with the given result. 139 140 Args: 141 result: The app result returned by `run` or `run_async`. 142 """ 143 await wait_for_idle() 144 self.app.exit(result) 145 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/textual/pilot.py b/src/textual/pilot.py --- a/src/textual/pilot.py +++ b/src/textual/pilot.py @@ -14,20 +14,24 @@ def _get_mouse_message_arguments( - target: Widget, offset: Offset = Offset(), button: int = 0 + target: Widget, + offset: Offset = Offset(), + button: int = 0, + shift: bool = False, + meta: bool = False, + control: bool = False, ) -> dict[str, Any]: """Get the arguments to pass into mouse messages for the click and hover methods.""" - x, y = offset click_x, click_y, _, _ = target.region.translate(offset) message_arguments = { - "x": x, - "y": y, + "x": click_x, + "y": click_y, "delta_x": 0, "delta_y": 0, "button": button, - "shift": False, - "meta": False, - "ctrl": False, + "shift": shift, + "meta": meta, + "ctrl": control, "screen_x": click_x, "screen_y": click_y, } @@ -60,7 +64,12 @@ await self._app._press_keys(keys) async def click( - self, selector: QueryType | None = None, offset: Offset = Offset() + self, + selector: QueryType | None = None, + offset: Offset = Offset(), + shift: bool = False, + meta: bool = False, + control: bool = False, ) -> None: """Simulate clicking with the mouse. @@ -71,6 +80,9 @@ currently hidden or obscured by another widget, then the click may not land on it. offset: The offset to click within the selected widget. + shift: Click with the shift key held down. + meta: Click with the meta key held down. + control: Click with the control key held down. """ app = self.app screen = app.screen @@ -80,7 +92,7 @@ target_widget = screen message_arguments = _get_mouse_message_arguments( - target_widget, offset, button=1 + target_widget, offset, button=1, shift=shift, meta=meta, control=control ) app.post_message(MouseDown(**message_arguments)) app.post_message(MouseUp(**message_arguments))
{"golden_diff": "diff --git a/src/textual/pilot.py b/src/textual/pilot.py\n--- a/src/textual/pilot.py\n+++ b/src/textual/pilot.py\n@@ -14,20 +14,24 @@\n \n \n def _get_mouse_message_arguments(\n- target: Widget, offset: Offset = Offset(), button: int = 0\n+ target: Widget,\n+ offset: Offset = Offset(),\n+ button: int = 0,\n+ shift: bool = False,\n+ meta: bool = False,\n+ control: bool = False,\n ) -> dict[str, Any]:\n \"\"\"Get the arguments to pass into mouse messages for the click and hover methods.\"\"\"\n- x, y = offset\n click_x, click_y, _, _ = target.region.translate(offset)\n message_arguments = {\n- \"x\": x,\n- \"y\": y,\n+ \"x\": click_x,\n+ \"y\": click_y,\n \"delta_x\": 0,\n \"delta_y\": 0,\n \"button\": button,\n- \"shift\": False,\n- \"meta\": False,\n- \"ctrl\": False,\n+ \"shift\": shift,\n+ \"meta\": meta,\n+ \"ctrl\": control,\n \"screen_x\": click_x,\n \"screen_y\": click_y,\n }\n@@ -60,7 +64,12 @@\n await self._app._press_keys(keys)\n \n async def click(\n- self, selector: QueryType | None = None, offset: Offset = Offset()\n+ self,\n+ selector: QueryType | None = None,\n+ offset: Offset = Offset(),\n+ shift: bool = False,\n+ meta: bool = False,\n+ control: bool = False,\n ) -> None:\n \"\"\"Simulate clicking with the mouse.\n \n@@ -71,6 +80,9 @@\n currently hidden or obscured by another widget, then the click may\n not land on it.\n offset: The offset to click within the selected widget.\n+ shift: Click with the shift key held down.\n+ meta: Click with the meta key held down.\n+ control: Click with the control key held down.\n \"\"\"\n app = self.app\n screen = app.screen\n@@ -80,7 +92,7 @@\n target_widget = screen\n \n message_arguments = _get_mouse_message_arguments(\n- target_widget, offset, button=1\n+ target_widget, offset, button=1, shift=shift, meta=meta, control=control\n )\n app.post_message(MouseDown(**message_arguments))\n app.post_message(MouseUp(**message_arguments))\n", "issue": "Testing Using Pilot - `pilot.click()` Doesn't Have Effect On `Static` Widget\nHi,\r\n\r\nI am trying to write tests for the example calculator app present in the repo (at the same time, expanding it to add a few more operators). 
This is my `CalculatorApp` class:\r\n\r\n```python\r\nclass CalculatorApp(App):\r\n CSS_PATH = \"calculator.css\"\r\n\r\n viewport = var(\"0\")\r\n show_ac = var(True)\r\n left = var(Decimal(\"0\"))\r\n right = var(Decimal(\"0\"))\r\n value = var(\"\")\r\n operator = var(\"plus\")\r\n\r\n def watch_viewport(self, value: str) -> None:\r\n self.query_one(\"#viewport\", Static).update(value)\r\n\r\n def compute_show_ac(self) -> bool:\r\n return self.value in (\"\", \"0\") and self.viewport == \"0\"\r\n\r\n def watch_show_ac(self, show_ac: bool) -> None:\r\n self.query_one(\"#ac\").display = show_ac\r\n self.query_one(\"#c\").display = not show_ac\r\n\r\n def compose(self) -> ComposeResult:\r\n with Container(id=\"calculator\"):\r\n yield Static(id=\"viewport\")\r\n yield Button(\"AC\", id=\"ac\", variant=\"primary\")\r\n yield Button(\"C\", id=\"c\", variant=\"primary\")\r\n yield Button(\"+/-\", id=\"negation\", variant=\"primary\")\r\n yield Button(\"%\", id=\"percent\", variant=\"primary\")\r\n yield Button(\"sin(x)\", id=\"sine\", variant=\"warning\")\r\n yield Button(\"cos(x)\", id=\"cosine\", variant=\"warning\")\r\n yield Button(\"7\", id=\"number-7\", variant=\"primary\")\r\n yield Button(\"8\", id=\"number-8\", variant=\"primary\")\r\n yield Button(\"9\", id=\"number-9\", variant=\"primary\")\r\n yield Button(\"+\", id=\"plus\", variant=\"warning\")\r\n yield Button(\"x^y\", id=\"exponent\", variant=\"warning\")\r\n yield Button(\"4\", id=\"number-4\", variant=\"primary\")\r\n yield Button(\"5\", id=\"number-5\", variant=\"primary\")\r\n yield Button(\"6\", id=\"number-6\", variant=\"primary\")\r\n yield Button(\"-\", id=\"minus\", variant=\"warning\")\r\n yield Button(\"ln(x)\", id=\"logarithm\", variant=\"warning\")\r\n yield Button(\"1\", id=\"number-1\", variant=\"primary\")\r\n yield Button(\"2\", id=\"number-2\", variant=\"primary\")\r\n yield Button(\"3\", id=\"number-3\", variant=\"primary\")\r\n yield Button(\"*\", id=\"multiply\", variant=\"warning\")\r\n yield Button(\"x!\", id=\"factorial\", variant=\"warning\")\r\n yield Button(\"0\", id=\"number-0\", variant=\"primary\")\r\n yield Button(\".\", id=\"point\", variant=\"primary\")\r\n yield Button(\"\u00f7\", id=\"divide\", variant=\"warning\")\r\n yield Button(\"=\", id=\"equals\", variant=\"warning\")\r\n\r\n def on_button_pressed(self, event: Button.Pressed) -> None:\r\n button_id = event.button.id\r\n\r\n assert button_id is not None\r\n\r\n if button_id.startswith(\"number-\"):\r\n number = button_id.split(\"-\")[-1]\r\n self.viewport = self.value = self.value.lstrip(\"0\") + number\r\n```\r\n\r\nI wrote the following test to check that clicking the number buttons results in the calculator's display (`Static(id=\"viewport\")`) accumulating digits to make a number:\r\n\r\n```python\r\nasync def test_number_buttons():\r\n async with CalculatorApp().run_test() as pilot:\r\n app = pilot.app\r\n\r\n await pilot.click(\"#number-1\")\r\n\r\n display_content = app.query_one(\"#viewport\").render()\r\n assert str(display_content) == \"1\"\r\n\r\n await pilot.click(\"#number-2\")\r\n\r\n display_content = app.query_one(\"#viewport\").render()\r\n assert str(display_content) == \"12\"\r\n\r\n await pilot.click(\"#number-3\")\r\n\r\n display_content = app.query_one(\"#viewport\").render()\r\n assert str(display_content) == \"123\"\r\n```\r\n\r\nWhile the GUI gets updated correctly on clicking the buttons, the test always fails since `app.query_one(\"#viewport\").render()` always returns `\"0\"`. 
I've also tried replacing `app.query_one(\"#viewport\").render()` with `app.query_one(\"#viewport\", Static).render()` but that hasn't helped either.\r\n\r\nIs this supposed to happen?\r\n\r\n# Textual Diagnostics\r\n\r\n## Versions\r\n\r\n| Name | Value |\r\n|---------|--------|\r\n| Textual | 0.14.0 |\r\n| Rich | 13.3.2 |\r\n\r\n## Python\r\n\r\n| Name | Value |\r\n|----------------|-----------------------------------------------|\r\n| Version | 3.10.9 |\r\n| Implementation | CPython |\r\n| Compiler | GCC 11.2.0 |\r\n| Executable | /home/malay_agr/anaconda3/envs/spe/bin/python |\r\n\r\n## Operating System\r\n\r\n| Name | Value |\r\n|---------|---------------------------------------------|\r\n| System | Linux |\r\n| Release | 5.15.0-52-generic |\r\n| Version | #58-Ubuntu SMP Thu Oct 13 08:03:55 UTC 2022 |\r\n\r\n## Terminal\r\n\r\n| Name | Value |\r\n|----------------------|-----------------|\r\n| Terminal Application | vscode (1.76.1) |\r\n| TERM | xterm-256color |\r\n| COLORTERM | truecolor |\r\n| FORCE_COLOR | *Not set* |\r\n| NO_COLOR | *Not set* |\r\n\r\n## Rich Console options\r\n\r\n| Name | Value |\r\n|----------------|----------------------|\r\n| size | width=197, height=19 |\r\n| legacy_windows | False |\r\n| min_width | 1 |\r\n| max_width | 197 |\r\n| is_terminal | True |\r\n| encoding | utf-8 |\r\n| max_height | 19 |\r\n| justify | None |\r\n| overflow | None |\r\n| no_wrap | False |\r\n| highlight | None |\r\n| markup | None |\r\n| height | None |\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport asyncio\nfrom typing import Any, Generic\n\nimport rich.repr\n\nfrom ._wait import wait_for_idle\nfrom .app import App, ReturnType\nfrom .css.query import QueryType\nfrom .events import Click, MouseDown, MouseMove, MouseUp\nfrom .geometry import Offset\nfrom .widget import Widget\n\n\ndef _get_mouse_message_arguments(\n target: Widget, offset: Offset = Offset(), button: int = 0\n) -> dict[str, Any]:\n \"\"\"Get the arguments to pass into mouse messages for the click and hover methods.\"\"\"\n x, y = offset\n click_x, click_y, _, _ = target.region.translate(offset)\n message_arguments = {\n \"x\": x,\n \"y\": y,\n \"delta_x\": 0,\n \"delta_y\": 0,\n \"button\": button,\n \"shift\": False,\n \"meta\": False,\n \"ctrl\": False,\n \"screen_x\": click_x,\n \"screen_y\": click_y,\n }\n return message_arguments\n\n\[email protected](angular=True)\nclass Pilot(Generic[ReturnType]):\n \"\"\"Pilot object to drive an app.\"\"\"\n\n def __init__(self, app: App[ReturnType]) -> None:\n self._app = app\n\n def __rich_repr__(self) -> rich.repr.Result:\n yield \"app\", self._app\n\n @property\n def app(self) -> App[ReturnType]:\n \"\"\"App: A reference to the application.\"\"\"\n return self._app\n\n async def press(self, *keys: str) -> None:\n \"\"\"Simulate key-presses.\n\n Args:\n *keys: Keys to press.\n\n \"\"\"\n if keys:\n await self._app._press_keys(keys)\n\n async def click(\n self, selector: QueryType | None = None, offset: Offset = Offset()\n ) -> None:\n \"\"\"Simulate clicking with the mouse.\n\n Args:\n selector: The widget that should be clicked. If None, then the click\n will occur relative to the screen. Note that this simply causes\n a click to occur at the location of the widget. 
If the widget is\n currently hidden or obscured by another widget, then the click may\n not land on it.\n offset: The offset to click within the selected widget.\n \"\"\"\n app = self.app\n screen = app.screen\n if selector is not None:\n target_widget = screen.query_one(selector)\n else:\n target_widget = screen\n\n message_arguments = _get_mouse_message_arguments(\n target_widget, offset, button=1\n )\n app.post_message(MouseDown(**message_arguments))\n app.post_message(MouseUp(**message_arguments))\n app.post_message(Click(**message_arguments))\n await self.pause()\n\n async def hover(\n self, selector: QueryType | None = None, offset: Offset = Offset()\n ) -> None:\n \"\"\"Simulate hovering with the mouse cursor.\n\n Args:\n selector: The widget that should be hovered. If None, then the click\n will occur relative to the screen. Note that this simply causes\n a hover to occur at the location of the widget. If the widget is\n currently hidden or obscured by another widget, then the hover may\n not land on it.\n offset: The offset to hover over within the selected widget.\n \"\"\"\n app = self.app\n screen = app.screen\n if selector is not None:\n target_widget = screen.query_one(selector)\n else:\n target_widget = screen\n\n message_arguments = _get_mouse_message_arguments(\n target_widget, offset, button=0\n )\n app.post_message(MouseMove(**message_arguments))\n await self.pause()\n\n async def pause(self, delay: float | None = None) -> None:\n \"\"\"Insert a pause.\n\n Args:\n delay: Seconds to pause, or None to wait for cpu idle.\n \"\"\"\n # These sleep zeros, are to force asyncio to give up a time-slice,\n if delay is None:\n await wait_for_idle(0)\n else:\n await asyncio.sleep(delay)\n\n async def wait_for_animation(self) -> None:\n \"\"\"Wait for any current animation to complete.\"\"\"\n await self._app.animator.wait_for_idle()\n\n async def wait_for_scheduled_animations(self) -> None:\n \"\"\"Wait for any current and scheduled animations to complete.\"\"\"\n await self._app.animator.wait_until_complete()\n await wait_for_idle()\n\n async def exit(self, result: ReturnType) -> None:\n \"\"\"Exit the app with the given result.\n\n Args:\n result: The app result returned by `run` or `run_async`.\n \"\"\"\n await wait_for_idle()\n self.app.exit(result)\n", "path": "src/textual/pilot.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport asyncio\nfrom typing import Any, Generic\n\nimport rich.repr\n\nfrom ._wait import wait_for_idle\nfrom .app import App, ReturnType\nfrom .css.query import QueryType\nfrom .events import Click, MouseDown, MouseMove, MouseUp\nfrom .geometry import Offset\nfrom .widget import Widget\n\n\ndef _get_mouse_message_arguments(\n target: Widget,\n offset: Offset = Offset(),\n button: int = 0,\n shift: bool = False,\n meta: bool = False,\n control: bool = False,\n) -> dict[str, Any]:\n \"\"\"Get the arguments to pass into mouse messages for the click and hover methods.\"\"\"\n click_x, click_y, _, _ = target.region.translate(offset)\n message_arguments = {\n \"x\": click_x,\n \"y\": click_y,\n \"delta_x\": 0,\n \"delta_y\": 0,\n \"button\": button,\n \"shift\": shift,\n \"meta\": meta,\n \"ctrl\": control,\n \"screen_x\": click_x,\n \"screen_y\": click_y,\n }\n return message_arguments\n\n\[email protected](angular=True)\nclass Pilot(Generic[ReturnType]):\n \"\"\"Pilot object to drive an app.\"\"\"\n\n def __init__(self, app: App[ReturnType]) -> None:\n self._app = app\n\n def __rich_repr__(self) -> rich.repr.Result:\n yield \"app\", 
self._app\n\n @property\n def app(self) -> App[ReturnType]:\n \"\"\"App: A reference to the application.\"\"\"\n return self._app\n\n async def press(self, *keys: str) -> None:\n \"\"\"Simulate key-presses.\n\n Args:\n *keys: Keys to press.\n\n \"\"\"\n if keys:\n await self._app._press_keys(keys)\n\n async def click(\n self,\n selector: QueryType | None = None,\n offset: Offset = Offset(),\n shift: bool = False,\n meta: bool = False,\n control: bool = False,\n ) -> None:\n \"\"\"Simulate clicking with the mouse.\n\n Args:\n selector: The widget that should be clicked. If None, then the click\n will occur relative to the screen. Note that this simply causes\n a click to occur at the location of the widget. If the widget is\n currently hidden or obscured by another widget, then the click may\n not land on it.\n offset: The offset to click within the selected widget.\n shift: Click with the shift key held down.\n meta: Click with the meta key held down.\n control: Click with the control key held down.\n \"\"\"\n app = self.app\n screen = app.screen\n if selector is not None:\n target_widget = screen.query_one(selector)\n else:\n target_widget = screen\n\n message_arguments = _get_mouse_message_arguments(\n target_widget, offset, button=1, shift=shift, meta=meta, control=control\n )\n app.post_message(MouseDown(**message_arguments))\n app.post_message(MouseUp(**message_arguments))\n app.post_message(Click(**message_arguments))\n await self.pause()\n\n async def hover(\n self, selector: QueryType | None = None, offset: Offset = Offset()\n ) -> None:\n \"\"\"Simulate hovering with the mouse cursor.\n\n Args:\n selector: The widget that should be hovered. If None, then the click\n will occur relative to the screen. Note that this simply causes\n a hover to occur at the location of the widget. If the widget is\n currently hidden or obscured by another widget, then the hover may\n not land on it.\n offset: The offset to hover over within the selected widget.\n \"\"\"\n app = self.app\n screen = app.screen\n if selector is not None:\n target_widget = screen.query_one(selector)\n else:\n target_widget = screen\n\n message_arguments = _get_mouse_message_arguments(\n target_widget, offset, button=0\n )\n app.post_message(MouseMove(**message_arguments))\n await self.pause()\n\n async def pause(self, delay: float | None = None) -> None:\n \"\"\"Insert a pause.\n\n Args:\n delay: Seconds to pause, or None to wait for cpu idle.\n \"\"\"\n # These sleep zeros, are to force asyncio to give up a time-slice,\n if delay is None:\n await wait_for_idle(0)\n else:\n await asyncio.sleep(delay)\n\n async def wait_for_animation(self) -> None:\n \"\"\"Wait for any current animation to complete.\"\"\"\n await self._app.animator.wait_for_idle()\n\n async def wait_for_scheduled_animations(self) -> None:\n \"\"\"Wait for any current and scheduled animations to complete.\"\"\"\n await self._app.animator.wait_until_complete()\n await wait_for_idle()\n\n async def exit(self, result: ReturnType) -> None:\n \"\"\"Exit the app with the given result.\n\n Args:\n result: The app result returned by `run` or `run_async`.\n \"\"\"\n await wait_for_idle()\n self.app.exit(result)\n", "path": "src/textual/pilot.py"}]}
2,929
576
gh_patches_debug_36083
rasdani/github-patches
git_diff
microsoft__Qcodes-771
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Data mingled across columns in dataset ### Steps to reproduce 1. Measure something with `.run(use_threads=True)` ### Expected behaviour Get a neat dataset ### Actual behaviour Data mingles across columns in dataset 💘 ### System **operating system** Win 10 **qcodes branch** Master Two identical measurements: ![image](https://cloud.githubusercontent.com/assets/8595386/26193146/99b0d318-3bb4-11e7-9e93-473d782fe41b.png) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `qcodes/actions.py` Content: ``` 1 """Actions, mainly to be executed in measurement Loops.""" 2 import time 3 4 from qcodes.utils.deferred_operations import is_function 5 from qcodes.utils.threading import thread_map 6 7 8 _NO_SNAPSHOT = {'type': None, 'description': 'Action without snapshot'} 9 10 11 def _actions_snapshot(actions, update): 12 """Make a list of snapshots from a list of actions.""" 13 snapshot = [] 14 for action in actions: 15 if hasattr(action, 'snapshot'): 16 snapshot.append(action.snapshot(update=update)) 17 else: 18 snapshot.append(_NO_SNAPSHOT) 19 return snapshot 20 21 22 class Task: 23 """ 24 A predefined task to be executed within a measurement Loop. 25 26 The first argument should be a callable, to which any subsequent 27 args and kwargs (which are evaluated before the loop starts) are passed. 28 29 The args and kwargs are first evaluated if they are found to be callable. 30 31 Keyword Args passed when the Task is called are ignored, 32 but are accepted for compatibility with other things happening in a Loop. 33 34 Args: 35 func (callable): Function to executed 36 *args: pass to func, after evaluation if callable 37 **kwargs: pass to func, after evaluation if callable 38 39 """ 40 def __init__(self, func, *args, **kwargs): 41 self.func = func 42 self.args = args 43 self.kwargs = kwargs 44 45 def __call__(self, **ignore_kwargs): 46 # If any of the arguments are callable, evaluate them first 47 eval_args = [arg() if callable(arg) else arg for arg in self.args] 48 eval_kwargs = {k: (v() if callable(v) else v) for k, v in self.kwargs.items()} 49 50 self.func(*eval_args, **eval_kwargs) 51 52 def snapshot(self, update=False): 53 """ 54 Snapshots task 55 Args: 56 update (bool): TODO not in use 57 58 Returns: 59 dict: snapshot 60 """ 61 return {'type': 'Task', 'func': repr(self.func)} 62 63 64 class Wait: 65 """ 66 A simple class to tell a Loop to wait <delay> seconds. 67 68 This is transformed into a Task within the Loop, such that 69 it can do other things (monitor, check for halt) during the delay. 70 71 But for use outside of a Loop, it is also callable (then it just sleeps) 72 73 Args: 74 delay: seconds to delay 75 76 Raises: 77 ValueError: if delay is negative 78 """ 79 def __init__(self, delay): 80 if not delay >= 0: 81 raise ValueError('delay must be > 0, not {}'.format(repr(delay))) 82 self.delay = delay 83 84 def __call__(self): 85 if self.delay: 86 time.sleep(self.delay) 87 88 def snapshot(self, update=False): 89 """ 90 Snapshots delay 91 Args: 92 update (bool): TODO not in use 93 94 Returns: 95 dict: snapshot 96 """ 97 return {'type': 'Wait', 'delay': self.delay} 98 99 100 class _Measure: 101 """ 102 A callable collection of parameters to measure. 103 104 This should not be constructed manually, only by an ActiveLoop. 
105 """ 106 def __init__(self, params_indices, data_set, use_threads): 107 self.use_threads = use_threads and len(params_indices) > 1 108 # the applicable DataSet.store function 109 self.store = data_set.store 110 111 # for performance, pre-calculate which params return data for 112 # multiple arrays, and the name mappings 113 self.getters = [] 114 self.param_ids = [] 115 self.composite = [] 116 for param, action_indices in params_indices: 117 self.getters.append(param.get) 118 119 if hasattr(param, 'names'): 120 part_ids = [] 121 for i in range(len(param.names)): 122 param_id = data_set.action_id_map[action_indices + (i,)] 123 part_ids.append(param_id) 124 self.param_ids.append(None) 125 self.composite.append(part_ids) 126 else: 127 param_id = data_set.action_id_map[action_indices] 128 self.param_ids.append(param_id) 129 self.composite.append(False) 130 131 def __call__(self, loop_indices, **ignore_kwargs): 132 out_dict = {} 133 if self.use_threads: 134 out = thread_map(self.getters) 135 else: 136 out = [g() for g in self.getters] 137 138 for param_out, param_id, composite in zip(out, self.param_ids, 139 self.composite): 140 if composite: 141 for val, part_id in zip(param_out, composite): 142 out_dict[part_id] = val 143 else: 144 out_dict[param_id] = param_out 145 146 self.store(loop_indices, out_dict) 147 148 149 class _Nest: 150 151 """ 152 Wrapper to make a callable nested ActiveLoop. 153 154 This should not be constructed manually, only by an ActiveLoop. 155 """ 156 157 def __init__(self, inner_loop, action_indices): 158 self.inner_loop = inner_loop 159 self.action_indices = action_indices 160 161 def __call__(self, **kwargs): 162 self.inner_loop._run_loop(action_indices=self.action_indices, **kwargs) 163 164 165 class BreakIf: 166 167 """ 168 Loop action that breaks out of the loop if a condition is truthy. 169 170 Args: 171 condition (callable): a callable taking no arguments. 172 Can be a simple function that returns truthy when it's time to quit 173 May also be constructed by deferred operations on `Parameter`. 174 Raises: 175 TypeError: if condition is not a callable with no aguments. 176 177 Examples: 178 >>> BreakIf(gates.chan1 >= 3) 179 >>> BreakIf(abs(source.I * source.V) >= source.power_limit.get_latest) 180 """ 181 182 def __init__(self, condition): 183 if not is_function(condition, 0): 184 raise TypeError('BreakIf condition must be a callable with ' 185 'no arguments') 186 self.condition = condition 187 188 def __call__(self, **ignore_kwargs): 189 if self.condition(): 190 raise _QcodesBreak 191 192 def snapshot(self, update=False): 193 # TODO: make nice reprs for DeferredOperations 194 """ 195 Snapshots breakif action 196 Args: 197 update (bool): TODO not in use 198 199 Returns: 200 dict: snapshot 201 202 """ 203 return {'type': 'BreakIf', 'condition': repr(self.condition)} 204 205 206 class _QcodesBreak(Exception): 207 pass 208 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/qcodes/actions.py b/qcodes/actions.py --- a/qcodes/actions.py +++ b/qcodes/actions.py @@ -8,6 +8,12 @@ _NO_SNAPSHOT = {'type': None, 'description': 'Action without snapshot'} +# exception when threading is attempted used to simultaneously +# query the same instrument for several values +class UnsafeThreadingException(Exception): + pass + + def _actions_snapshot(actions, update): """Make a list of snapshots from a list of actions.""" snapshot = [] @@ -113,9 +119,13 @@ self.getters = [] self.param_ids = [] self.composite = [] + paramcheck = [] # list to check if parameters are unique for param, action_indices in params_indices: self.getters.append(param.get) + if param._instrument: + paramcheck.append((param, param._instrument)) + if hasattr(param, 'names'): part_ids = [] for i in range(len(param.names)): @@ -128,6 +138,17 @@ self.param_ids.append(param_id) self.composite.append(False) + if self.use_threads: + insts = [p[1] for p in paramcheck] + if (len(set(insts)) != len(insts)): + duplicates = [p for p in paramcheck if insts.count(p[1]) > 1] + raise UnsafeThreadingException('Can not use threading to ' + 'read ' + 'several things from the same ' + 'instrument. Specifically, you ' + 'asked for' + ' {}.'.format(duplicates)) + def __call__(self, loop_indices, **ignore_kwargs): out_dict = {} if self.use_threads: @@ -166,7 +187,7 @@ """ Loop action that breaks out of the loop if a condition is truthy. - + Args: condition (callable): a callable taking no arguments. Can be a simple function that returns truthy when it's time to quit
{"golden_diff": "diff --git a/qcodes/actions.py b/qcodes/actions.py\n--- a/qcodes/actions.py\n+++ b/qcodes/actions.py\n@@ -8,6 +8,12 @@\n _NO_SNAPSHOT = {'type': None, 'description': 'Action without snapshot'}\n \n \n+# exception when threading is attempted used to simultaneously\n+# query the same instrument for several values\n+class UnsafeThreadingException(Exception):\n+ pass\n+\n+\n def _actions_snapshot(actions, update):\n \"\"\"Make a list of snapshots from a list of actions.\"\"\"\n snapshot = []\n@@ -113,9 +119,13 @@\n self.getters = []\n self.param_ids = []\n self.composite = []\n+ paramcheck = [] # list to check if parameters are unique\n for param, action_indices in params_indices:\n self.getters.append(param.get)\n \n+ if param._instrument:\n+ paramcheck.append((param, param._instrument))\n+\n if hasattr(param, 'names'):\n part_ids = []\n for i in range(len(param.names)):\n@@ -128,6 +138,17 @@\n self.param_ids.append(param_id)\n self.composite.append(False)\n \n+ if self.use_threads:\n+ insts = [p[1] for p in paramcheck]\n+ if (len(set(insts)) != len(insts)):\n+ duplicates = [p for p in paramcheck if insts.count(p[1]) > 1]\n+ raise UnsafeThreadingException('Can not use threading to '\n+ 'read '\n+ 'several things from the same '\n+ 'instrument. Specifically, you '\n+ 'asked for'\n+ ' {}.'.format(duplicates))\n+\n def __call__(self, loop_indices, **ignore_kwargs):\n out_dict = {}\n if self.use_threads:\n@@ -166,7 +187,7 @@\n \n \"\"\"\n Loop action that breaks out of the loop if a condition is truthy.\n- \n+\n Args:\n condition (callable): a callable taking no arguments.\n Can be a simple function that returns truthy when it's time to quit\n", "issue": "Data mingled across columns in dataset\n### Steps to reproduce\r\n1. Measure something with `.run(use_threads=True)`\r\n\r\n### Expected behaviour\r\nGet a neat dataset\r\n\r\n### Actual behaviour\r\nData mingles across columns in dataset \ud83d\udc98 \r\n\r\n### System\r\n**operating system**\r\nWin 10\r\n\r\n**qcodes branch**\r\nMaster\r\n\r\nTwo identical measurements:\r\n![image](https://cloud.githubusercontent.com/assets/8595386/26193146/99b0d318-3bb4-11e7-9e93-473d782fe41b.png)\r\n\r\n\n", "before_files": [{"content": "\"\"\"Actions, mainly to be executed in measurement Loops.\"\"\"\nimport time\n\nfrom qcodes.utils.deferred_operations import is_function\nfrom qcodes.utils.threading import thread_map\n\n\n_NO_SNAPSHOT = {'type': None, 'description': 'Action without snapshot'}\n\n\ndef _actions_snapshot(actions, update):\n \"\"\"Make a list of snapshots from a list of actions.\"\"\"\n snapshot = []\n for action in actions:\n if hasattr(action, 'snapshot'):\n snapshot.append(action.snapshot(update=update))\n else:\n snapshot.append(_NO_SNAPSHOT)\n return snapshot\n\n\nclass Task:\n \"\"\"\n A predefined task to be executed within a measurement Loop.\n\n The first argument should be a callable, to which any subsequent\n args and kwargs (which are evaluated before the loop starts) are passed.\n\n The args and kwargs are first evaluated if they are found to be callable.\n\n Keyword Args passed when the Task is called are ignored,\n but are accepted for compatibility with other things happening in a Loop.\n\n Args:\n func (callable): Function to executed\n *args: pass to func, after evaluation if callable\n **kwargs: pass to func, after evaluation if callable\n\n \"\"\"\n def __init__(self, func, *args, **kwargs):\n self.func = func\n self.args = args\n self.kwargs = kwargs\n\n def __call__(self, **ignore_kwargs):\n # If any of 
the arguments are callable, evaluate them first\n eval_args = [arg() if callable(arg) else arg for arg in self.args]\n eval_kwargs = {k: (v() if callable(v) else v) for k, v in self.kwargs.items()}\n\n self.func(*eval_args, **eval_kwargs)\n\n def snapshot(self, update=False):\n \"\"\"\n Snapshots task\n Args:\n update (bool): TODO not in use\n\n Returns:\n dict: snapshot\n \"\"\"\n return {'type': 'Task', 'func': repr(self.func)}\n\n\nclass Wait:\n \"\"\"\n A simple class to tell a Loop to wait <delay> seconds.\n\n This is transformed into a Task within the Loop, such that\n it can do other things (monitor, check for halt) during the delay.\n\n But for use outside of a Loop, it is also callable (then it just sleeps)\n\n Args:\n delay: seconds to delay\n\n Raises:\n ValueError: if delay is negative\n \"\"\"\n def __init__(self, delay):\n if not delay >= 0:\n raise ValueError('delay must be > 0, not {}'.format(repr(delay)))\n self.delay = delay\n\n def __call__(self):\n if self.delay:\n time.sleep(self.delay)\n\n def snapshot(self, update=False):\n \"\"\"\n Snapshots delay\n Args:\n update (bool): TODO not in use\n\n Returns:\n dict: snapshot\n \"\"\"\n return {'type': 'Wait', 'delay': self.delay}\n\n\nclass _Measure:\n \"\"\"\n A callable collection of parameters to measure.\n\n This should not be constructed manually, only by an ActiveLoop.\n \"\"\"\n def __init__(self, params_indices, data_set, use_threads):\n self.use_threads = use_threads and len(params_indices) > 1\n # the applicable DataSet.store function\n self.store = data_set.store\n\n # for performance, pre-calculate which params return data for\n # multiple arrays, and the name mappings\n self.getters = []\n self.param_ids = []\n self.composite = []\n for param, action_indices in params_indices:\n self.getters.append(param.get)\n\n if hasattr(param, 'names'):\n part_ids = []\n for i in range(len(param.names)):\n param_id = data_set.action_id_map[action_indices + (i,)]\n part_ids.append(param_id)\n self.param_ids.append(None)\n self.composite.append(part_ids)\n else:\n param_id = data_set.action_id_map[action_indices]\n self.param_ids.append(param_id)\n self.composite.append(False)\n\n def __call__(self, loop_indices, **ignore_kwargs):\n out_dict = {}\n if self.use_threads:\n out = thread_map(self.getters)\n else:\n out = [g() for g in self.getters]\n\n for param_out, param_id, composite in zip(out, self.param_ids,\n self.composite):\n if composite:\n for val, part_id in zip(param_out, composite):\n out_dict[part_id] = val\n else:\n out_dict[param_id] = param_out\n\n self.store(loop_indices, out_dict)\n\n\nclass _Nest:\n\n \"\"\"\n Wrapper to make a callable nested ActiveLoop.\n\n This should not be constructed manually, only by an ActiveLoop.\n \"\"\"\n\n def __init__(self, inner_loop, action_indices):\n self.inner_loop = inner_loop\n self.action_indices = action_indices\n\n def __call__(self, **kwargs):\n self.inner_loop._run_loop(action_indices=self.action_indices, **kwargs)\n\n\nclass BreakIf:\n\n \"\"\"\n Loop action that breaks out of the loop if a condition is truthy.\n \n Args:\n condition (callable): a callable taking no arguments.\n Can be a simple function that returns truthy when it's time to quit\n May also be constructed by deferred operations on `Parameter`.\n Raises:\n TypeError: if condition is not a callable with no aguments.\n\n Examples:\n >>> BreakIf(gates.chan1 >= 3)\n >>> BreakIf(abs(source.I * source.V) >= source.power_limit.get_latest)\n \"\"\"\n\n def __init__(self, condition):\n if not 
is_function(condition, 0):\n raise TypeError('BreakIf condition must be a callable with '\n 'no arguments')\n self.condition = condition\n\n def __call__(self, **ignore_kwargs):\n if self.condition():\n raise _QcodesBreak\n\n def snapshot(self, update=False):\n # TODO: make nice reprs for DeferredOperations\n \"\"\"\n Snapshots breakif action\n Args:\n update (bool): TODO not in use\n\n Returns:\n dict: snapshot\n\n \"\"\"\n return {'type': 'BreakIf', 'condition': repr(self.condition)}\n\n\nclass _QcodesBreak(Exception):\n pass\n", "path": "qcodes/actions.py"}], "after_files": [{"content": "\"\"\"Actions, mainly to be executed in measurement Loops.\"\"\"\nimport time\n\nfrom qcodes.utils.deferred_operations import is_function\nfrom qcodes.utils.threading import thread_map\n\n\n_NO_SNAPSHOT = {'type': None, 'description': 'Action without snapshot'}\n\n\n# exception when threading is attempted used to simultaneously\n# query the same instrument for several values\nclass UnsafeThreadingException(Exception):\n pass\n\n\ndef _actions_snapshot(actions, update):\n \"\"\"Make a list of snapshots from a list of actions.\"\"\"\n snapshot = []\n for action in actions:\n if hasattr(action, 'snapshot'):\n snapshot.append(action.snapshot(update=update))\n else:\n snapshot.append(_NO_SNAPSHOT)\n return snapshot\n\n\nclass Task:\n \"\"\"\n A predefined task to be executed within a measurement Loop.\n\n The first argument should be a callable, to which any subsequent\n args and kwargs (which are evaluated before the loop starts) are passed.\n\n The args and kwargs are first evaluated if they are found to be callable.\n\n Keyword Args passed when the Task is called are ignored,\n but are accepted for compatibility with other things happening in a Loop.\n\n Args:\n func (callable): Function to executed\n *args: pass to func, after evaluation if callable\n **kwargs: pass to func, after evaluation if callable\n\n \"\"\"\n def __init__(self, func, *args, **kwargs):\n self.func = func\n self.args = args\n self.kwargs = kwargs\n\n def __call__(self, **ignore_kwargs):\n # If any of the arguments are callable, evaluate them first\n eval_args = [arg() if callable(arg) else arg for arg in self.args]\n eval_kwargs = {k: (v() if callable(v) else v) for k, v in self.kwargs.items()}\n\n self.func(*eval_args, **eval_kwargs)\n\n def snapshot(self, update=False):\n \"\"\"\n Snapshots task\n Args:\n update (bool): TODO not in use\n\n Returns:\n dict: snapshot\n \"\"\"\n return {'type': 'Task', 'func': repr(self.func)}\n\n\nclass Wait:\n \"\"\"\n A simple class to tell a Loop to wait <delay> seconds.\n\n This is transformed into a Task within the Loop, such that\n it can do other things (monitor, check for halt) during the delay.\n\n But for use outside of a Loop, it is also callable (then it just sleeps)\n\n Args:\n delay: seconds to delay\n\n Raises:\n ValueError: if delay is negative\n \"\"\"\n def __init__(self, delay):\n if not delay >= 0:\n raise ValueError('delay must be > 0, not {}'.format(repr(delay)))\n self.delay = delay\n\n def __call__(self):\n if self.delay:\n time.sleep(self.delay)\n\n def snapshot(self, update=False):\n \"\"\"\n Snapshots delay\n Args:\n update (bool): TODO not in use\n\n Returns:\n dict: snapshot\n \"\"\"\n return {'type': 'Wait', 'delay': self.delay}\n\n\nclass _Measure:\n \"\"\"\n A callable collection of parameters to measure.\n\n This should not be constructed manually, only by an ActiveLoop.\n \"\"\"\n def __init__(self, params_indices, data_set, use_threads):\n self.use_threads = 
use_threads and len(params_indices) > 1\n # the applicable DataSet.store function\n self.store = data_set.store\n\n # for performance, pre-calculate which params return data for\n # multiple arrays, and the name mappings\n self.getters = []\n self.param_ids = []\n self.composite = []\n paramcheck = [] # list to check if parameters are unique\n for param, action_indices in params_indices:\n self.getters.append(param.get)\n\n if param._instrument:\n paramcheck.append((param, param._instrument))\n\n if hasattr(param, 'names'):\n part_ids = []\n for i in range(len(param.names)):\n param_id = data_set.action_id_map[action_indices + (i,)]\n part_ids.append(param_id)\n self.param_ids.append(None)\n self.composite.append(part_ids)\n else:\n param_id = data_set.action_id_map[action_indices]\n self.param_ids.append(param_id)\n self.composite.append(False)\n\n if self.use_threads:\n insts = [p[1] for p in paramcheck]\n if (len(set(insts)) != len(insts)):\n duplicates = [p for p in paramcheck if insts.count(p[1]) > 1]\n raise UnsafeThreadingException('Can not use threading to '\n 'read '\n 'several things from the same '\n 'instrument. Specifically, you '\n 'asked for'\n ' {}.'.format(duplicates))\n\n def __call__(self, loop_indices, **ignore_kwargs):\n out_dict = {}\n if self.use_threads:\n out = thread_map(self.getters)\n else:\n out = [g() for g in self.getters]\n\n for param_out, param_id, composite in zip(out, self.param_ids,\n self.composite):\n if composite:\n for val, part_id in zip(param_out, composite):\n out_dict[part_id] = val\n else:\n out_dict[param_id] = param_out\n\n self.store(loop_indices, out_dict)\n\n\nclass _Nest:\n\n \"\"\"\n Wrapper to make a callable nested ActiveLoop.\n\n This should not be constructed manually, only by an ActiveLoop.\n \"\"\"\n\n def __init__(self, inner_loop, action_indices):\n self.inner_loop = inner_loop\n self.action_indices = action_indices\n\n def __call__(self, **kwargs):\n self.inner_loop._run_loop(action_indices=self.action_indices, **kwargs)\n\n\nclass BreakIf:\n\n \"\"\"\n Loop action that breaks out of the loop if a condition is truthy.\n\n Args:\n condition (callable): a callable taking no arguments.\n Can be a simple function that returns truthy when it's time to quit\n May also be constructed by deferred operations on `Parameter`.\n Raises:\n TypeError: if condition is not a callable with no aguments.\n\n Examples:\n >>> BreakIf(gates.chan1 >= 3)\n >>> BreakIf(abs(source.I * source.V) >= source.power_limit.get_latest)\n \"\"\"\n\n def __init__(self, condition):\n if not is_function(condition, 0):\n raise TypeError('BreakIf condition must be a callable with '\n 'no arguments')\n self.condition = condition\n\n def __call__(self, **ignore_kwargs):\n if self.condition():\n raise _QcodesBreak\n\n def snapshot(self, update=False):\n # TODO: make nice reprs for DeferredOperations\n \"\"\"\n Snapshots breakif action\n Args:\n update (bool): TODO not in use\n\n Returns:\n dict: snapshot\n\n \"\"\"\n return {'type': 'BreakIf', 'condition': repr(self.condition)}\n\n\nclass _QcodesBreak(Exception):\n pass\n", "path": "qcodes/actions.py"}]}
2,281
464
gh_patches_debug_28691
rasdani/github-patches
git_diff
elastic__apm-agent-python-998
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- 5.10.0 "kind of" regression for elasticsearch instrumentation due to logging of whole body **Describe the bug**: The elasticsearch library makes it possible to define your own custom serializer as part of the library. However, the Python agent doesn't take this into account, which can lead to wrong request bodies being collected, and it is possible to downright crash the agent (which unfortunately was what happened to us). The problem seems to have happened with #940. **To Reproduce - wrong body** https://gist.github.com/HenrikOssipoff/a231b45ce504691cd71a5de23a771132 This gist shows a simple Starlette application with a custom serializer. Using the code, the following document is correctly indexed: ```json {"_index":"test-index","_type":"_doc","_id":"zrmN0nYBBrB60DkP7BVZ","_score":1.0,"_source":{"numbers":2}} ``` However, the wrong body is recorded with the agent: ![Skærmbillede 2021-01-05 kl 13 39 05](https://user-images.githubusercontent.com/4737224/103647318-69c3f400-4f5b-11eb-8841-b26b60533efc.png) **To Reproduce - crash** The following gist shows an application that works perfectly fine with the elasticsearch client, but crashes the apm agent: https://gist.github.com/HenrikOssipoff/2219d059d3156ab8b11879e1191cefe4 Causes exception `TypeError: keys must be str, int, float, bool or None, not NumberObj`. I'll admit this is some weird code; this is a simplified version of what we currently use. The version in our app uses Django's translation module as keys for certain parts of the query, which corresponds to `django.utils.functional.Promise` objects. We're basically looping over these to force-evaluate them, as they're lazy objects. It works perfectly fine, but crashes the agent. **Environment (please complete the following information)** - OS: [e.g. Linux] Linux/Mac - Python version: 3.9 - Framework and version [e.g. Django 2.1]: Starlette, Django (any version, probably) - APM Server version: 7.8.0 - Agent version: >= 5.10.0 **Additional context** I'm not actually sure how best to solve this. The elasticsearch library already supports a wide array of custom serialization, even without using your own custom one: https://github.com/elastic/elasticsearch-py/blob/master/elasticsearch/serializer.py The best case (from a user perspective), would be for the agent to somehow find the correct serializer to use, when dealing with the body. I'm not sure if this is even possible. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `elasticapm/instrumentation/packages/elasticsearch.py` Content: ``` 1 # BSD 3-Clause License 2 # 3 # Copyright (c) 2019, Elasticsearch BV 4 # All rights reserved. 5 # 6 # Redistribution and use in source and binary forms, with or without 7 # modification, are permitted provided that the following conditions are met: 8 # 9 # * Redistributions of source code must retain the above copyright notice, this 10 # list of conditions and the following disclaimer. 11 # 12 # * Redistributions in binary form must reproduce the above copyright notice, 13 # this list of conditions and the following disclaimer in the documentation 14 # and/or other materials provided with the distribution. 
15 # 16 # * Neither the name of the copyright holder nor the names of its 17 # contributors may be used to endorse or promote products derived from 18 # this software without specific prior written permission. 19 # 20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 31 from __future__ import absolute_import 32 33 import json 34 35 import elasticapm 36 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule 37 from elasticapm.utils import compat 38 from elasticapm.utils.logging import get_logger 39 40 logger = get_logger("elasticapm.instrument") 41 42 43 API_METHOD_KEY_NAME = "__elastic_apm_api_method_name" 44 BODY_REF_NAME = "__elastic_apm_body_ref" 45 46 47 class ElasticSearchConnectionMixin(object): 48 query_methods = ("search", "count", "delete_by_query") 49 50 def get_signature(self, args, kwargs): 51 args_len = len(args) 52 http_method = args[0] if args_len else kwargs.get("method") 53 http_path = args[1] if args_len > 1 else kwargs.get("url") 54 55 return "ES %s %s" % (http_method, http_path) 56 57 def get_context(self, instance, args, kwargs): 58 args_len = len(args) 59 params = args[2] if args_len > 2 else kwargs.get("params") 60 body = params.pop(BODY_REF_NAME, None) if params else None 61 62 api_method = params.pop(API_METHOD_KEY_NAME, None) if params else None 63 context = {"db": {"type": "elasticsearch"}} 64 if api_method in self.query_methods: 65 query = [] 66 # using both q AND body is allowed in some API endpoints / ES versions, 67 # but not in others. We simply capture both if they are there so the 68 # user can see it. 
69 if params and "q" in params: 70 # 'q' is already encoded to a byte string at this point 71 # we assume utf8, which is the default 72 query.append("q=" + params["q"].decode("utf-8", errors="replace")) 73 if body and isinstance(body, dict): 74 query.append(json.dumps(body, default=compat.text_type)) 75 if query: 76 context["db"]["statement"] = "\n\n".join(query) 77 elif api_method == "update": 78 if isinstance(body, dict) and "script" in body: 79 # only get the `script` field from the body 80 context["db"]["statement"] = json.dumps({"script": body["script"]}) 81 context["destination"] = { 82 "address": instance.host, 83 "service": {"name": "elasticsearch", "resource": "elasticsearch", "type": "db"}, 84 } 85 return context 86 87 88 class ElasticsearchConnectionInstrumentation(ElasticSearchConnectionMixin, AbstractInstrumentedModule): 89 name = "elasticsearch_connection" 90 91 instrument_list = [ 92 ("elasticsearch.connection.http_urllib3", "Urllib3HttpConnection.perform_request"), 93 ("elasticsearch.connection.http_requests", "RequestsHttpConnection.perform_request"), 94 ] 95 96 def call(self, module, method, wrapped, instance, args, kwargs): 97 signature = self.get_signature(args, kwargs) 98 context = self.get_context(instance, args, kwargs) 99 100 with elasticapm.capture_span( 101 signature, 102 span_type="db", 103 span_subtype="elasticsearch", 104 span_action="query", 105 extra=context, 106 skip_frames=2, 107 leaf=True, 108 ): 109 return wrapped(*args, **kwargs) 110 111 112 class ElasticsearchInstrumentation(AbstractInstrumentedModule): 113 name = "elasticsearch" 114 115 instrument_list = [ 116 ("elasticsearch.client", "Elasticsearch.delete_by_query"), 117 ("elasticsearch.client", "Elasticsearch.search"), 118 ("elasticsearch.client", "Elasticsearch.count"), 119 ("elasticsearch.client", "Elasticsearch.update"), 120 ] 121 122 def __init__(self): 123 super(ElasticsearchInstrumentation, self).__init__() 124 try: 125 from elasticsearch import VERSION 126 127 self.version = VERSION[0] 128 except ImportError: 129 self.version = None 130 131 def instrument(self): 132 if self.version and not 2 <= self.version < 8: 133 logger.debug("Instrumenting version %s of Elasticsearch is not supported by Elastic APM", self.version) 134 return 135 super(ElasticsearchInstrumentation, self).instrument() 136 137 def call(self, module, method, wrapped, instance, args, kwargs): 138 kwargs = self.inject_apm_params(method, kwargs) 139 return wrapped(*args, **kwargs) 140 141 def inject_apm_params(self, method, kwargs): 142 params = kwargs.pop("params", {}) 143 144 # make a copy of params in case the caller reuses them for some reason 145 params = params.copy() if params is not None else {} 146 147 method_name = method.partition(".")[-1] 148 149 # store a reference to the non-serialized body so we can use it in the connection layer 150 body = kwargs.get("body") 151 params[BODY_REF_NAME] = body 152 params[API_METHOD_KEY_NAME] = method_name 153 154 kwargs["params"] = params 155 return kwargs 156 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/elasticapm/instrumentation/packages/elasticsearch.py b/elasticapm/instrumentation/packages/elasticsearch.py --- a/elasticapm/instrumentation/packages/elasticsearch.py +++ b/elasticapm/instrumentation/packages/elasticsearch.py @@ -58,6 +58,7 @@ args_len = len(args) params = args[2] if args_len > 2 else kwargs.get("params") body = params.pop(BODY_REF_NAME, None) if params else None + body_serialized = args[3] if args_len > 3 else kwargs.get("body") api_method = params.pop(API_METHOD_KEY_NAME, None) if params else None context = {"db": {"type": "elasticsearch"}} @@ -70,8 +71,16 @@ # 'q' is already encoded to a byte string at this point # we assume utf8, which is the default query.append("q=" + params["q"].decode("utf-8", errors="replace")) - if body and isinstance(body, dict): - query.append(json.dumps(body, default=compat.text_type)) + if body_serialized: + if isinstance(body_serialized, bytes): + query.append(body_serialized.decode("utf-8", errors="replace")) + else: + query.append(body_serialized) + elif body and isinstance(body, dict): + try: + query.append(json.dumps(body, default=compat.text_type)) + except TypeError: + pass if query: context["db"]["statement"] = "\n\n".join(query) elif api_method == "update":
{"golden_diff": "diff --git a/elasticapm/instrumentation/packages/elasticsearch.py b/elasticapm/instrumentation/packages/elasticsearch.py\n--- a/elasticapm/instrumentation/packages/elasticsearch.py\n+++ b/elasticapm/instrumentation/packages/elasticsearch.py\n@@ -58,6 +58,7 @@\n args_len = len(args)\n params = args[2] if args_len > 2 else kwargs.get(\"params\")\n body = params.pop(BODY_REF_NAME, None) if params else None\n+ body_serialized = args[3] if args_len > 3 else kwargs.get(\"body\")\n \n api_method = params.pop(API_METHOD_KEY_NAME, None) if params else None\n context = {\"db\": {\"type\": \"elasticsearch\"}}\n@@ -70,8 +71,16 @@\n # 'q' is already encoded to a byte string at this point\n # we assume utf8, which is the default\n query.append(\"q=\" + params[\"q\"].decode(\"utf-8\", errors=\"replace\"))\n- if body and isinstance(body, dict):\n- query.append(json.dumps(body, default=compat.text_type))\n+ if body_serialized:\n+ if isinstance(body_serialized, bytes):\n+ query.append(body_serialized.decode(\"utf-8\", errors=\"replace\"))\n+ else:\n+ query.append(body_serialized)\n+ elif body and isinstance(body, dict):\n+ try:\n+ query.append(json.dumps(body, default=compat.text_type))\n+ except TypeError:\n+ pass\n if query:\n context[\"db\"][\"statement\"] = \"\\n\\n\".join(query)\n elif api_method == \"update\":\n", "issue": "5.10.0 \"kind of\" regression for elasticsearch instrumentation due to logging of whole body\n**Describe the bug**:\r\n\r\nThe elasticsearch library makes it possible to define your own custom serializer as part of the library. However, the Python agent doesn't take this into account, which can lead to wrong request bodies being collected, and it is possible to downright crash the agent (which unfortunately was what happened to us).\r\n\r\nThe problem seems to have happened with #940.\r\n\r\n**To Reproduce - wrong body**\r\n\r\nhttps://gist.github.com/HenrikOssipoff/a231b45ce504691cd71a5de23a771132\r\n\r\nThis gist shows a simple Starlette application with a custom serializer.\r\n\r\nUsing the code, the following document is correctly indexed:\r\n\r\n```json\r\n{\"_index\":\"test-index\",\"_type\":\"_doc\",\"_id\":\"zrmN0nYBBrB60DkP7BVZ\",\"_score\":1.0,\"_source\":{\"numbers\":2}}\r\n```\r\n\r\nHowever, the wrong body is recorded with the agent:\r\n\r\n![Sk\u00e6rmbillede 2021-01-05 kl 13 39 05](https://user-images.githubusercontent.com/4737224/103647318-69c3f400-4f5b-11eb-8841-b26b60533efc.png)\r\n\r\n**To Reproduce - crash**\r\n\r\nThe following gist shows an application that works perfectly fine with the elasticsearch client, but crashes the apm agent:\r\n\r\nhttps://gist.github.com/HenrikOssipoff/2219d059d3156ab8b11879e1191cefe4\r\n\r\nCauses exception `TypeError: keys must be str, int, float, bool or None, not NumberObj`.\r\n\r\nI'll admit this is some weird code; this is a simplified version of what we currently use.\r\nThe version in our app uses Django's translation module as keys for certain parts of the query, which corresponds to `django.utils.functional.Promise` objects. We're basically looping over these to force-evaluate them, as they're lazy objects.\r\n\r\nIt works perfectly fine, but crashes the agent.\r\n\r\n**Environment (please complete the following information)**\r\n- OS: [e.g. Linux] Linux/Mac\r\n- Python version: 3.9\r\n- Framework and version [e.g. 
Django 2.1]: Starlette, Django (any version, probably)\r\n- APM Server version: 7.8.0\r\n- Agent version: >= 5.10.0\r\n\r\n**Additional context**\r\n\r\nI'm not actually sure how best to solve this. The elasticsearch library already supports a wide array of custom serialization, even without using your own custom one: https://github.com/elastic/elasticsearch-py/blob/master/elasticsearch/serializer.py\r\n\r\nThe best case (from a user perspective), would be for the agent to somehow find the correct serializer to use, when dealing with the body. I'm not sure if this is even possible.\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nimport json\n\nimport elasticapm\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.utils import compat\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.instrument\")\n\n\nAPI_METHOD_KEY_NAME = \"__elastic_apm_api_method_name\"\nBODY_REF_NAME = \"__elastic_apm_body_ref\"\n\n\nclass ElasticSearchConnectionMixin(object):\n query_methods = (\"search\", \"count\", \"delete_by_query\")\n\n def get_signature(self, args, kwargs):\n args_len = len(args)\n http_method = args[0] if args_len else kwargs.get(\"method\")\n http_path = args[1] if args_len > 1 else kwargs.get(\"url\")\n\n return \"ES %s %s\" % (http_method, http_path)\n\n def get_context(self, instance, args, kwargs):\n args_len = len(args)\n params = args[2] if args_len > 2 else kwargs.get(\"params\")\n body = params.pop(BODY_REF_NAME, None) if params else None\n\n api_method = params.pop(API_METHOD_KEY_NAME, None) if params else None\n context = {\"db\": {\"type\": \"elasticsearch\"}}\n if api_method in self.query_methods:\n query = []\n # using both q AND body is allowed in some API endpoints / ES versions,\n # but not in others. 
We simply capture both if they are there so the\n # user can see it.\n if params and \"q\" in params:\n # 'q' is already encoded to a byte string at this point\n # we assume utf8, which is the default\n query.append(\"q=\" + params[\"q\"].decode(\"utf-8\", errors=\"replace\"))\n if body and isinstance(body, dict):\n query.append(json.dumps(body, default=compat.text_type))\n if query:\n context[\"db\"][\"statement\"] = \"\\n\\n\".join(query)\n elif api_method == \"update\":\n if isinstance(body, dict) and \"script\" in body:\n # only get the `script` field from the body\n context[\"db\"][\"statement\"] = json.dumps({\"script\": body[\"script\"]})\n context[\"destination\"] = {\n \"address\": instance.host,\n \"service\": {\"name\": \"elasticsearch\", \"resource\": \"elasticsearch\", \"type\": \"db\"},\n }\n return context\n\n\nclass ElasticsearchConnectionInstrumentation(ElasticSearchConnectionMixin, AbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n instrument_list = [\n (\"elasticsearch.connection.http_urllib3\", \"Urllib3HttpConnection.perform_request\"),\n (\"elasticsearch.connection.http_requests\", \"RequestsHttpConnection.perform_request\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n signature = self.get_signature(args, kwargs)\n context = self.get_context(instance, args, kwargs)\n\n with elasticapm.capture_span(\n signature,\n span_type=\"db\",\n span_subtype=\"elasticsearch\",\n span_action=\"query\",\n extra=context,\n skip_frames=2,\n leaf=True,\n ):\n return wrapped(*args, **kwargs)\n\n\nclass ElasticsearchInstrumentation(AbstractInstrumentedModule):\n name = \"elasticsearch\"\n\n instrument_list = [\n (\"elasticsearch.client\", \"Elasticsearch.delete_by_query\"),\n (\"elasticsearch.client\", \"Elasticsearch.search\"),\n (\"elasticsearch.client\", \"Elasticsearch.count\"),\n (\"elasticsearch.client\", \"Elasticsearch.update\"),\n ]\n\n def __init__(self):\n super(ElasticsearchInstrumentation, self).__init__()\n try:\n from elasticsearch import VERSION\n\n self.version = VERSION[0]\n except ImportError:\n self.version = None\n\n def instrument(self):\n if self.version and not 2 <= self.version < 8:\n logger.debug(\"Instrumenting version %s of Elasticsearch is not supported by Elastic APM\", self.version)\n return\n super(ElasticsearchInstrumentation, self).instrument()\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n kwargs = self.inject_apm_params(method, kwargs)\n return wrapped(*args, **kwargs)\n\n def inject_apm_params(self, method, kwargs):\n params = kwargs.pop(\"params\", {})\n\n # make a copy of params in case the caller reuses them for some reason\n params = params.copy() if params is not None else {}\n\n method_name = method.partition(\".\")[-1]\n\n # store a reference to the non-serialized body so we can use it in the connection layer\n body = kwargs.get(\"body\")\n params[BODY_REF_NAME] = body\n params[API_METHOD_KEY_NAME] = method_name\n\n kwargs[\"params\"] = params\n return kwargs\n", "path": "elasticapm/instrumentation/packages/elasticsearch.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must 
reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nimport json\n\nimport elasticapm\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.utils import compat\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.instrument\")\n\n\nAPI_METHOD_KEY_NAME = \"__elastic_apm_api_method_name\"\nBODY_REF_NAME = \"__elastic_apm_body_ref\"\n\n\nclass ElasticSearchConnectionMixin(object):\n query_methods = (\"search\", \"count\", \"delete_by_query\")\n\n def get_signature(self, args, kwargs):\n args_len = len(args)\n http_method = args[0] if args_len else kwargs.get(\"method\")\n http_path = args[1] if args_len > 1 else kwargs.get(\"url\")\n\n return \"ES %s %s\" % (http_method, http_path)\n\n def get_context(self, instance, args, kwargs):\n args_len = len(args)\n params = args[2] if args_len > 2 else kwargs.get(\"params\")\n body = params.pop(BODY_REF_NAME, None) if params else None\n body_serialized = args[3] if args_len > 3 else kwargs.get(\"body\")\n\n api_method = params.pop(API_METHOD_KEY_NAME, None) if params else None\n context = {\"db\": {\"type\": \"elasticsearch\"}}\n if api_method in self.query_methods:\n query = []\n # using both q AND body is allowed in some API endpoints / ES versions,\n # but not in others. 
We simply capture both if they are there so the\n # user can see it.\n if params and \"q\" in params:\n # 'q' is already encoded to a byte string at this point\n # we assume utf8, which is the default\n query.append(\"q=\" + params[\"q\"].decode(\"utf-8\", errors=\"replace\"))\n if body_serialized:\n if isinstance(body_serialized, bytes):\n query.append(body_serialized.decode(\"utf-8\", errors=\"replace\"))\n else:\n query.append(body_serialized)\n elif body and isinstance(body, dict):\n try:\n query.append(json.dumps(body, default=compat.text_type))\n except TypeError:\n pass\n if query:\n context[\"db\"][\"statement\"] = \"\\n\\n\".join(query)\n elif api_method == \"update\":\n if isinstance(body, dict) and \"script\" in body:\n # only get the `script` field from the body\n context[\"db\"][\"statement\"] = json.dumps({\"script\": body[\"script\"]})\n context[\"destination\"] = {\n \"address\": instance.host,\n \"service\": {\"name\": \"elasticsearch\", \"resource\": \"elasticsearch\", \"type\": \"db\"},\n }\n return context\n\n\nclass ElasticsearchConnectionInstrumentation(ElasticSearchConnectionMixin, AbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n instrument_list = [\n (\"elasticsearch.connection.http_urllib3\", \"Urllib3HttpConnection.perform_request\"),\n (\"elasticsearch.connection.http_requests\", \"RequestsHttpConnection.perform_request\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n signature = self.get_signature(args, kwargs)\n context = self.get_context(instance, args, kwargs)\n\n with elasticapm.capture_span(\n signature,\n span_type=\"db\",\n span_subtype=\"elasticsearch\",\n span_action=\"query\",\n extra=context,\n skip_frames=2,\n leaf=True,\n ):\n return wrapped(*args, **kwargs)\n\n\nclass ElasticsearchInstrumentation(AbstractInstrumentedModule):\n name = \"elasticsearch\"\n\n instrument_list = [\n (\"elasticsearch.client\", \"Elasticsearch.delete_by_query\"),\n (\"elasticsearch.client\", \"Elasticsearch.search\"),\n (\"elasticsearch.client\", \"Elasticsearch.count\"),\n (\"elasticsearch.client\", \"Elasticsearch.update\"),\n ]\n\n def __init__(self):\n super(ElasticsearchInstrumentation, self).__init__()\n try:\n from elasticsearch import VERSION\n\n self.version = VERSION[0]\n except ImportError:\n self.version = None\n\n def instrument(self):\n if self.version and not 2 <= self.version < 8:\n logger.debug(\"Instrumenting version %s of Elasticsearch is not supported by Elastic APM\", self.version)\n return\n super(ElasticsearchInstrumentation, self).instrument()\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n kwargs = self.inject_apm_params(method, kwargs)\n return wrapped(*args, **kwargs)\n\n def inject_apm_params(self, method, kwargs):\n params = kwargs.pop(\"params\", {})\n\n # make a copy of params in case the caller reuses them for some reason\n params = params.copy() if params is not None else {}\n\n method_name = method.partition(\".\")[-1]\n\n # store a reference to the non-serialized body so we can use it in the connection layer\n body = kwargs.get(\"body\")\n params[BODY_REF_NAME] = body\n params[API_METHOD_KEY_NAME] = method_name\n\n kwargs[\"params\"] = params\n return kwargs\n", "path": "elasticapm/instrumentation/packages/elasticsearch.py"}]}
2,685
361
gh_patches_debug_15375
rasdani/github-patches
git_diff
openstates__openstates-scrapers-1221
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Make `LXMLMixin.lxmlize` use scraper's `session`, rather than a raw `request.get()` Otherwise it slows down `--fast` scrapes because it doesn't use cached responses. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `openstates/utils/lxmlize.py` Content: ``` 1 import requests 2 import lxml.html 3 4 5 class LXMLMixin(object): 6 """Mixin for adding LXML helper functions to Open States code.""" 7 8 def lxmlize(self, url, raise_exceptions=False): 9 """Parses document into an LXML object and makes links absolute. 10 11 Args: 12 url (str): URL of the document to parse. 13 Returns: 14 Element: Document node representing the page. 15 """ 16 try: 17 response = requests.get(url) 18 except requests.exceptions.SSLError: 19 self.warning('`self.lxmlize()` failed due to SSL error, trying'\ 20 'an unverified `requests.get()`') 21 response = requests.get(url, verify=False) 22 23 if raise_exceptions: 24 response.raise_for_status() 25 26 page = lxml.html.fromstring(response.text) 27 page.make_links_absolute(url) 28 29 return page 30 31 def get_node(self, base_node, xpath_query): 32 """Searches for node in an element tree. 33 34 Attempts to return only the first node found for an xpath query. Meant 35 to cut down on exception handling boilerplate. 36 37 Args: 38 base_node (Element): Document node to begin querying from. 39 xpath_query (str): XPath query to define nodes to search for. 40 Returns: 41 Element: First node found that matches the query. 42 """ 43 try: 44 node = base_node.xpath(xpath_query)[0] 45 except IndexError: 46 node = None 47 48 return node 49 50 def get_nodes(self, base_node, xpath_query): 51 """Searches for nodes in an element tree. 52 53 Attempts to return all nodes found for an xpath query. Meant to cut 54 down on exception handling boilerplate. 55 56 Args: 57 base_node (Element): Document node to begin querying from. 58 xpath_query (str): Xpath query to define nodes to search for. 59 Returns: 60 List[Element]: All nodes found that match the query. 61 """ 62 return base_node.xpath(xpath_query) 63 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/openstates/utils/lxmlize.py b/openstates/utils/lxmlize.py --- a/openstates/utils/lxmlize.py +++ b/openstates/utils/lxmlize.py @@ -14,11 +14,13 @@ Element: Document node representing the page. """ try: - response = requests.get(url) + # This class is always mixed into subclasses of `billy.Scraper`, + # which have a `get` method defined. + response = self.get(url) except requests.exceptions.SSLError: self.warning('`self.lxmlize()` failed due to SSL error, trying'\ - 'an unverified `requests.get()`') - response = requests.get(url, verify=False) + 'an unverified `self.get()` (i.e. `requests.get()`)') + response = self.get(url, verify=False) if raise_exceptions: response.raise_for_status()
{"golden_diff": "diff --git a/openstates/utils/lxmlize.py b/openstates/utils/lxmlize.py\n--- a/openstates/utils/lxmlize.py\n+++ b/openstates/utils/lxmlize.py\n@@ -14,11 +14,13 @@\n Element: Document node representing the page.\n \"\"\"\n try:\n- response = requests.get(url)\n+ # This class is always mixed into subclasses of `billy.Scraper`,\n+ # which have a `get` method defined.\n+ response = self.get(url)\n except requests.exceptions.SSLError:\n self.warning('`self.lxmlize()` failed due to SSL error, trying'\\\n- 'an unverified `requests.get()`')\n- response = requests.get(url, verify=False)\n+ 'an unverified `self.get()` (i.e. `requests.get()`)')\n+ response = self.get(url, verify=False)\n \n if raise_exceptions:\n response.raise_for_status()\n", "issue": "Make `LXMLMixin.lxmlize` use scraper's `session`, rather than a raw `request.get()`\nOtherwise it slows down `--fast` scrapes because it doesn't use cached responses.\n", "before_files": [{"content": "import requests\nimport lxml.html\n\n\nclass LXMLMixin(object):\n \"\"\"Mixin for adding LXML helper functions to Open States code.\"\"\"\n\n def lxmlize(self, url, raise_exceptions=False):\n \"\"\"Parses document into an LXML object and makes links absolute.\n\n Args:\n url (str): URL of the document to parse.\n Returns:\n Element: Document node representing the page.\n \"\"\"\n try:\n response = requests.get(url)\n except requests.exceptions.SSLError:\n self.warning('`self.lxmlize()` failed due to SSL error, trying'\\\n 'an unverified `requests.get()`')\n response = requests.get(url, verify=False)\n\n if raise_exceptions:\n response.raise_for_status()\n\n page = lxml.html.fromstring(response.text)\n page.make_links_absolute(url)\n\n return page\n\n def get_node(self, base_node, xpath_query):\n \"\"\"Searches for node in an element tree.\n\n Attempts to return only the first node found for an xpath query. Meant\n to cut down on exception handling boilerplate.\n\n Args:\n base_node (Element): Document node to begin querying from.\n xpath_query (str): XPath query to define nodes to search for.\n Returns:\n Element: First node found that matches the query.\n \"\"\"\n try:\n node = base_node.xpath(xpath_query)[0]\n except IndexError:\n node = None\n\n return node\n\n def get_nodes(self, base_node, xpath_query):\n \"\"\"Searches for nodes in an element tree.\n\n Attempts to return all nodes found for an xpath query. Meant to cut\n down on exception handling boilerplate.\n\n Args:\n base_node (Element): Document node to begin querying from.\n xpath_query (str): Xpath query to define nodes to search for.\n Returns:\n List[Element]: All nodes found that match the query.\n \"\"\"\n return base_node.xpath(xpath_query)\n", "path": "openstates/utils/lxmlize.py"}], "after_files": [{"content": "import requests\nimport lxml.html\n\n\nclass LXMLMixin(object):\n \"\"\"Mixin for adding LXML helper functions to Open States code.\"\"\"\n\n def lxmlize(self, url, raise_exceptions=False):\n \"\"\"Parses document into an LXML object and makes links absolute.\n\n Args:\n url (str): URL of the document to parse.\n Returns:\n Element: Document node representing the page.\n \"\"\"\n try:\n # This class is always mixed into subclasses of `billy.Scraper`,\n # which have a `get` method defined.\n response = self.get(url)\n except requests.exceptions.SSLError:\n self.warning('`self.lxmlize()` failed due to SSL error, trying'\\\n 'an unverified `self.get()` (i.e. 
`requests.get()`)')\n response = self.get(url, verify=False)\n\n if raise_exceptions:\n response.raise_for_status()\n\n page = lxml.html.fromstring(response.text)\n page.make_links_absolute(url)\n\n return page\n\n def get_node(self, base_node, xpath_query):\n \"\"\"Searches for node in an element tree.\n\n Attempts to return only the first node found for an xpath query. Meant\n to cut down on exception handling boilerplate.\n\n Args:\n base_node (Element): Document node to begin querying from.\n xpath_query (str): XPath query to define nodes to search for.\n Returns:\n Element: First node found that matches the query.\n \"\"\"\n try:\n node = base_node.xpath(xpath_query)[0]\n except IndexError:\n node = None\n\n return node\n\n def get_nodes(self, base_node, xpath_query):\n \"\"\"Searches for nodes in an element tree.\n\n Attempts to return all nodes found for an xpath query. Meant to cut\n down on exception handling boilerplate.\n\n Args:\n base_node (Element): Document node to begin querying from.\n xpath_query (str): Xpath query to define nodes to search for.\n Returns:\n List[Element]: All nodes found that match the query.\n \"\"\"\n return base_node.xpath(xpath_query)\n", "path": "openstates/utils/lxmlize.py"}]}
837
205
gh_patches_debug_23058
rasdani/github-patches
git_diff
Azure__azure-cli-extensions-2069
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [AKS] az aks get-upgrades --output table fails on extensions - If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues) ### Extension name (the extension in question) aks-preview ### Description of issue (in as much detail as possible) Running ``` az aks get-upgrades -g {{RG_NAME}} -n {{CLUSTER_NAME}} --output table ``` fails with ``` preview/azext_aks_preview/_format.py", line 72, in aks_upgrades_table_format find_preview_versions(result.get('agentPoolProfiles', [{}])[0]) TypeError: 'NoneType' object is not subscriptable cli.azure.cli.core.util : Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info. Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info. ``` The fix is to not use the agentPoolProfile when formatting the table since the field is nil. We need to update the formatter. Issue is only when using extensions and the -o table. Works fine on CLI core. ----- --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/aks-preview/azext_aks_preview/_format.py` Content: ``` 1 # -------------------------------------------------------------------------------------------- 2 # Copyright (c) Microsoft Corporation. All rights reserved. 3 # Licensed under the MIT License. See License.txt in the project root for license information. 4 # -------------------------------------------------------------------------------------------- 5 6 from collections import OrderedDict 7 # pylint: disable=import-error 8 from jmespath import compile as compile_jmes, Options 9 # pylint: disable=import-error 10 from jmespath import functions 11 12 13 def aks_agentpool_show_table_format(result): 14 """Format an agent pool as summary results for display with "-o table".""" 15 return [_aks_agentpool_table_format(result)] 16 17 18 def _aks_agentpool_table_format(result): 19 parsed = compile_jmes("""{ 20 name: name, 21 osType: osType, 22 kubernetesVersion: kubernetesVersion, 23 vmSize: vmSize, 24 osDiskSizeGB: osDiskSizeGB, 25 count: count, 26 maxPods: maxPods, 27 provisioningState: provisioningState, 28 mode: mode 29 }""") 30 # use ordered dicts so headers are predictable 31 return parsed.search(result, Options(dict_cls=OrderedDict)) 32 33 34 def aks_agentpool_list_table_format(results): 35 """Format an agent pool list for display with "-o table".""" 36 return [_aks_agentpool_table_format(r) for r in results] 37 38 39 def aks_list_table_format(results): 40 """"Format a list of managed clusters as summary results for display with "-o table".""" 41 return [_aks_table_format(r) for r in results] 42 43 44 def aks_show_table_format(result): 45 """Format a managed cluster as summary results for display with "-o table".""" 46 return [_aks_table_format(result)] 47 48 49 def _aks_table_format(result): 50 parsed = compile_jmes("""{ 51 name: name, 52 location: location, 53 resourceGroup: resourceGroup, 54 kubernetesVersion: kubernetesVersion, 55 provisioningState: provisioningState, 56 fqdn: fqdn 57 }""") 58 # use ordered dicts so headers are predictable 59 return parsed.search(result, Options(dict_cls=OrderedDict)) 60 61 62 def aks_upgrades_table_format(result): 63 """Format get-upgrades results as a summary for display with "-o table".""" 
64 65 preview = {} 66 67 def find_preview_versions(versions_bag): 68 for upgrade in versions_bag.get('upgrades', []): 69 if upgrade.get('isPreview', False): 70 preview[upgrade['kubernetesVersion']] = True 71 find_preview_versions(result.get('controlPlaneProfile', {})) 72 find_preview_versions(result.get('agentPoolProfiles', [{}])[0]) 73 74 # This expression assumes there is one node pool, and that the master and nodes upgrade in lockstep. 75 parsed = compile_jmes("""{ 76 name: name, 77 resourceGroup: resourceGroup, 78 masterVersion: controlPlaneProfile.kubernetesVersion || `unknown` | set_preview(@), 79 nodePoolVersion: agentPoolProfiles[0].kubernetesVersion || `unknown` | set_preview(@), 80 upgrades: controlPlaneProfile.upgrades[].kubernetesVersion || [`None available`] | sort_versions(@) | set_preview_array(@) | join(`, `, @) 81 }""") 82 # use ordered dicts so headers are predictable 83 return parsed.search(result, Options(dict_cls=OrderedDict, custom_functions=_custom_functions(preview))) 84 85 86 def aks_versions_table_format(result): 87 """Format get-versions results as a summary for display with "-o table".""" 88 89 # get preview orchestrator version 90 preview = {} 91 92 def find_preview_versions(): 93 for orchestrator in result.get('orchestrators', []): 94 if orchestrator.get('isPreview', False): 95 preview[orchestrator['orchestratorVersion']] = True 96 find_preview_versions() 97 98 parsed = compile_jmes("""orchestrators[].{ 99 kubernetesVersion: orchestratorVersion | set_preview(@), 100 upgrades: upgrades[].orchestratorVersion || [`None available`] | sort_versions(@) | set_preview_array(@) | join(`, `, @) 101 }""") 102 # use ordered dicts so headers are predictable 103 results = parsed.search(result, Options(dict_cls=OrderedDict, custom_functions=_custom_functions(preview))) 104 return sorted(results, key=lambda x: version_to_tuple(x.get('kubernetesVersion')), reverse=True) 105 106 107 def version_to_tuple(version): 108 """Removes preview suffix""" 109 if version.endswith('(preview)'): 110 version = version[:-len('(preview)')] 111 return tuple(map(int, (version.split('.')))) 112 113 114 def _custom_functions(preview_versions): 115 class CustomFunctions(functions.Functions): # pylint: disable=too-few-public-methods 116 117 @functions.signature({'types': ['array']}) 118 def _func_sort_versions(self, versions): # pylint: disable=no-self-use 119 """Custom JMESPath `sort_versions` function that sorts an array of strings as software versions""" 120 try: 121 return sorted(versions, key=version_to_tuple) 122 except (TypeError, ValueError): # if it wasn't sortable, return the input so the pipeline continues 123 return versions 124 125 @functions.signature({'types': ['array']}) 126 def _func_set_preview_array(self, versions): 127 """Custom JMESPath `set_preview_array` function that suffixes preview version""" 128 try: 129 for i, _ in enumerate(versions): 130 versions[i] = self._func_set_preview(versions[i]) 131 return versions 132 except(TypeError, ValueError): 133 return versions 134 135 @functions.signature({'types': ['string']}) 136 def _func_set_preview(self, version): # pylint: disable=no-self-use 137 """Custom JMESPath `set_preview` function that suffixes preview version""" 138 try: 139 if preview_versions.get(version, False): 140 return version + '(preview)' 141 return version 142 except(TypeError, ValueError): 143 return version 144 145 return CustomFunctions() 146 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according 
to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/aks-preview/azext_aks_preview/_format.py b/src/aks-preview/azext_aks_preview/_format.py --- a/src/aks-preview/azext_aks_preview/_format.py +++ b/src/aks-preview/azext_aks_preview/_format.py @@ -69,14 +69,12 @@ if upgrade.get('isPreview', False): preview[upgrade['kubernetesVersion']] = True find_preview_versions(result.get('controlPlaneProfile', {})) - find_preview_versions(result.get('agentPoolProfiles', [{}])[0]) # This expression assumes there is one node pool, and that the master and nodes upgrade in lockstep. parsed = compile_jmes("""{ name: name, resourceGroup: resourceGroup, - masterVersion: controlPlaneProfile.kubernetesVersion || `unknown` | set_preview(@), - nodePoolVersion: agentPoolProfiles[0].kubernetesVersion || `unknown` | set_preview(@), + masterVersion: controlPlaneProfile.kubernetesVersion || `unknown`, upgrades: controlPlaneProfile.upgrades[].kubernetesVersion || [`None available`] | sort_versions(@) | set_preview_array(@) | join(`, `, @) }""") # use ordered dicts so headers are predictable
{"golden_diff": "diff --git a/src/aks-preview/azext_aks_preview/_format.py b/src/aks-preview/azext_aks_preview/_format.py\n--- a/src/aks-preview/azext_aks_preview/_format.py\n+++ b/src/aks-preview/azext_aks_preview/_format.py\n@@ -69,14 +69,12 @@\n if upgrade.get('isPreview', False):\n preview[upgrade['kubernetesVersion']] = True\n find_preview_versions(result.get('controlPlaneProfile', {}))\n- find_preview_versions(result.get('agentPoolProfiles', [{}])[0])\n \n # This expression assumes there is one node pool, and that the master and nodes upgrade in lockstep.\n parsed = compile_jmes(\"\"\"{\n name: name,\n resourceGroup: resourceGroup,\n- masterVersion: controlPlaneProfile.kubernetesVersion || `unknown` | set_preview(@),\n- nodePoolVersion: agentPoolProfiles[0].kubernetesVersion || `unknown` | set_preview(@),\n+ masterVersion: controlPlaneProfile.kubernetesVersion || `unknown`,\n upgrades: controlPlaneProfile.upgrades[].kubernetesVersion || [`None available`] | sort_versions(@) | set_preview_array(@) | join(`, `, @)\n }\"\"\")\n # use ordered dicts so headers are predictable\n", "issue": "[AKS] az aks get-upgrades --output table fails on extensions\n- If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues)\r\n\r\n### Extension name (the extension in question)\r\naks-preview\r\n\r\n### Description of issue (in as much detail as possible)\r\nRunning\r\n```\r\naz aks get-upgrades -g {{RG_NAME}} -n {{CLUSTER_NAME}} --output table\r\n```\r\nfails with\r\n\r\n```\r\npreview/azext_aks_preview/_format.py\", line 72, in aks_upgrades_table_format\r\n find_preview_versions(result.get('agentPoolProfiles', [{}])[0])\r\nTypeError: 'NoneType' object is not subscriptable\r\n\r\ncli.azure.cli.core.util : Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.\r\nTable output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.\r\n```\r\n\r\nThe fix is to not use the agentPoolProfile when formatting the table since the field is nil. We need to update the formatter.\r\n\r\nIssue is only when using extensions and the -o table. Works fine on CLI core.\r\n-----\r\n\r\n\n", "before_files": [{"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom collections import OrderedDict\n# pylint: disable=import-error\nfrom jmespath import compile as compile_jmes, Options\n# pylint: disable=import-error\nfrom jmespath import functions\n\n\ndef aks_agentpool_show_table_format(result):\n \"\"\"Format an agent pool as summary results for display with \"-o table\".\"\"\"\n return [_aks_agentpool_table_format(result)]\n\n\ndef _aks_agentpool_table_format(result):\n parsed = compile_jmes(\"\"\"{\n name: name,\n osType: osType,\n kubernetesVersion: kubernetesVersion,\n vmSize: vmSize,\n osDiskSizeGB: osDiskSizeGB,\n count: count,\n maxPods: maxPods,\n provisioningState: provisioningState,\n mode: mode\n }\"\"\")\n # use ordered dicts so headers are predictable\n return parsed.search(result, Options(dict_cls=OrderedDict))\n\n\ndef aks_agentpool_list_table_format(results):\n \"\"\"Format an agent pool list for display with \"-o table\".\"\"\"\n return [_aks_agentpool_table_format(r) for r in results]\n\n\ndef aks_list_table_format(results):\n \"\"\"\"Format a list of managed clusters as summary results for display with \"-o table\".\"\"\"\n return [_aks_table_format(r) for r in results]\n\n\ndef aks_show_table_format(result):\n \"\"\"Format a managed cluster as summary results for display with \"-o table\".\"\"\"\n return [_aks_table_format(result)]\n\n\ndef _aks_table_format(result):\n parsed = compile_jmes(\"\"\"{\n name: name,\n location: location,\n resourceGroup: resourceGroup,\n kubernetesVersion: kubernetesVersion,\n provisioningState: provisioningState,\n fqdn: fqdn\n }\"\"\")\n # use ordered dicts so headers are predictable\n return parsed.search(result, Options(dict_cls=OrderedDict))\n\n\ndef aks_upgrades_table_format(result):\n \"\"\"Format get-upgrades results as a summary for display with \"-o table\".\"\"\"\n\n preview = {}\n\n def find_preview_versions(versions_bag):\n for upgrade in versions_bag.get('upgrades', []):\n if upgrade.get('isPreview', False):\n preview[upgrade['kubernetesVersion']] = True\n find_preview_versions(result.get('controlPlaneProfile', {}))\n find_preview_versions(result.get('agentPoolProfiles', [{}])[0])\n\n # This expression assumes there is one node pool, and that the master and nodes upgrade in lockstep.\n parsed = compile_jmes(\"\"\"{\n name: name,\n resourceGroup: resourceGroup,\n masterVersion: controlPlaneProfile.kubernetesVersion || `unknown` | set_preview(@),\n nodePoolVersion: agentPoolProfiles[0].kubernetesVersion || `unknown` | set_preview(@),\n upgrades: controlPlaneProfile.upgrades[].kubernetesVersion || [`None available`] | sort_versions(@) | set_preview_array(@) | join(`, `, @)\n }\"\"\")\n # use ordered dicts so headers are predictable\n return parsed.search(result, Options(dict_cls=OrderedDict, custom_functions=_custom_functions(preview)))\n\n\ndef aks_versions_table_format(result):\n \"\"\"Format get-versions results as a summary for display with \"-o table\".\"\"\"\n\n # get preview orchestrator version\n preview = {}\n\n def find_preview_versions():\n for orchestrator in result.get('orchestrators', []):\n if orchestrator.get('isPreview', False):\n preview[orchestrator['orchestratorVersion']] = True\n find_preview_versions()\n\n parsed = compile_jmes(\"\"\"orchestrators[].{\n kubernetesVersion: orchestratorVersion | set_preview(@),\n upgrades: upgrades[].orchestratorVersion || [`None available`] | sort_versions(@) | 
set_preview_array(@) | join(`, `, @)\n }\"\"\")\n # use ordered dicts so headers are predictable\n results = parsed.search(result, Options(dict_cls=OrderedDict, custom_functions=_custom_functions(preview)))\n return sorted(results, key=lambda x: version_to_tuple(x.get('kubernetesVersion')), reverse=True)\n\n\ndef version_to_tuple(version):\n \"\"\"Removes preview suffix\"\"\"\n if version.endswith('(preview)'):\n version = version[:-len('(preview)')]\n return tuple(map(int, (version.split('.'))))\n\n\ndef _custom_functions(preview_versions):\n class CustomFunctions(functions.Functions): # pylint: disable=too-few-public-methods\n\n @functions.signature({'types': ['array']})\n def _func_sort_versions(self, versions): # pylint: disable=no-self-use\n \"\"\"Custom JMESPath `sort_versions` function that sorts an array of strings as software versions\"\"\"\n try:\n return sorted(versions, key=version_to_tuple)\n except (TypeError, ValueError): # if it wasn't sortable, return the input so the pipeline continues\n return versions\n\n @functions.signature({'types': ['array']})\n def _func_set_preview_array(self, versions):\n \"\"\"Custom JMESPath `set_preview_array` function that suffixes preview version\"\"\"\n try:\n for i, _ in enumerate(versions):\n versions[i] = self._func_set_preview(versions[i])\n return versions\n except(TypeError, ValueError):\n return versions\n\n @functions.signature({'types': ['string']})\n def _func_set_preview(self, version): # pylint: disable=no-self-use\n \"\"\"Custom JMESPath `set_preview` function that suffixes preview version\"\"\"\n try:\n if preview_versions.get(version, False):\n return version + '(preview)'\n return version\n except(TypeError, ValueError):\n return version\n\n return CustomFunctions()\n", "path": "src/aks-preview/azext_aks_preview/_format.py"}], "after_files": [{"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom collections import OrderedDict\n# pylint: disable=import-error\nfrom jmespath import compile as compile_jmes, Options\n# pylint: disable=import-error\nfrom jmespath import functions\n\n\ndef aks_agentpool_show_table_format(result):\n \"\"\"Format an agent pool as summary results for display with \"-o table\".\"\"\"\n return [_aks_agentpool_table_format(result)]\n\n\ndef _aks_agentpool_table_format(result):\n parsed = compile_jmes(\"\"\"{\n name: name,\n osType: osType,\n kubernetesVersion: kubernetesVersion,\n vmSize: vmSize,\n osDiskSizeGB: osDiskSizeGB,\n count: count,\n maxPods: maxPods,\n provisioningState: provisioningState,\n mode: mode\n }\"\"\")\n # use ordered dicts so headers are predictable\n return parsed.search(result, Options(dict_cls=OrderedDict))\n\n\ndef aks_agentpool_list_table_format(results):\n \"\"\"Format an agent pool list for display with \"-o table\".\"\"\"\n return [_aks_agentpool_table_format(r) for r in results]\n\n\ndef aks_list_table_format(results):\n \"\"\"\"Format a list of managed clusters as summary results for display with \"-o table\".\"\"\"\n return [_aks_table_format(r) for r in results]\n\n\ndef aks_show_table_format(result):\n \"\"\"Format a managed cluster as summary results for display with \"-o table\".\"\"\"\n return [_aks_table_format(result)]\n\n\ndef _aks_table_format(result):\n parsed = compile_jmes(\"\"\"{\n name: name,\n location: location,\n resourceGroup: resourceGroup,\n kubernetesVersion: kubernetesVersion,\n provisioningState: provisioningState,\n fqdn: fqdn\n }\"\"\")\n # use ordered dicts so headers are predictable\n return parsed.search(result, Options(dict_cls=OrderedDict))\n\n\ndef aks_upgrades_table_format(result):\n \"\"\"Format get-upgrades results as a summary for display with \"-o table\".\"\"\"\n\n preview = {}\n\n def find_preview_versions(versions_bag):\n for upgrade in versions_bag.get('upgrades', []):\n if upgrade.get('isPreview', False):\n preview[upgrade['kubernetesVersion']] = True\n find_preview_versions(result.get('controlPlaneProfile', {}))\n\n # This expression assumes there is one node pool, and that the master and nodes upgrade in lockstep.\n parsed = compile_jmes(\"\"\"{\n name: name,\n resourceGroup: resourceGroup,\n masterVersion: controlPlaneProfile.kubernetesVersion || `unknown`,\n upgrades: controlPlaneProfile.upgrades[].kubernetesVersion || [`None available`] | sort_versions(@) | set_preview_array(@) | join(`, `, @)\n }\"\"\")\n # use ordered dicts so headers are predictable\n return parsed.search(result, Options(dict_cls=OrderedDict, custom_functions=_custom_functions(preview)))\n\n\ndef aks_versions_table_format(result):\n \"\"\"Format get-versions results as a summary for display with \"-o table\".\"\"\"\n\n # get preview orchestrator version\n preview = {}\n\n def find_preview_versions():\n for orchestrator in result.get('orchestrators', []):\n if orchestrator.get('isPreview', False):\n preview[orchestrator['orchestratorVersion']] = True\n find_preview_versions()\n\n parsed = compile_jmes(\"\"\"orchestrators[].{\n kubernetesVersion: orchestratorVersion | set_preview(@),\n upgrades: upgrades[].orchestratorVersion || [`None available`] | sort_versions(@) | set_preview_array(@) | join(`, `, @)\n }\"\"\")\n # use ordered dicts so headers are predictable\n results = parsed.search(result, Options(dict_cls=OrderedDict, 
custom_functions=_custom_functions(preview)))\n return sorted(results, key=lambda x: version_to_tuple(x.get('kubernetesVersion')), reverse=True)\n\n\ndef version_to_tuple(version):\n \"\"\"Removes preview suffix\"\"\"\n if version.endswith('(preview)'):\n version = version[:-len('(preview)')]\n return tuple(map(int, (version.split('.'))))\n\n\ndef _custom_functions(preview_versions):\n class CustomFunctions(functions.Functions): # pylint: disable=too-few-public-methods\n\n @functions.signature({'types': ['array']})\n def _func_sort_versions(self, versions): # pylint: disable=no-self-use\n \"\"\"Custom JMESPath `sort_versions` function that sorts an array of strings as software versions\"\"\"\n try:\n return sorted(versions, key=version_to_tuple)\n except (TypeError, ValueError): # if it wasn't sortable, return the input so the pipeline continues\n return versions\n\n @functions.signature({'types': ['array']})\n def _func_set_preview_array(self, versions):\n \"\"\"Custom JMESPath `set_preview_array` function that suffixes preview version\"\"\"\n try:\n for i, _ in enumerate(versions):\n versions[i] = self._func_set_preview(versions[i])\n return versions\n except(TypeError, ValueError):\n return versions\n\n @functions.signature({'types': ['string']})\n def _func_set_preview(self, version): # pylint: disable=no-self-use\n \"\"\"Custom JMESPath `set_preview` function that suffixes preview version\"\"\"\n try:\n if preview_versions.get(version, False):\n return version + '(preview)'\n return version\n except(TypeError, ValueError):\n return version\n\n return CustomFunctions()\n", "path": "src/aks-preview/azext_aks_preview/_format.py"}]}
2106
280
gh_patches_debug_38544
rasdani/github-patches
git_diff
biolab__orange3-5824
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- FR: Single-Layer Neural Network possible? I could be misreading NN widget help file [1], but is it possible to specify a single-layer NN - i.e. no hidden layers? If I enter 0 for the number of hidden layers neurons, it gives an error (screenshot no 1), while if I leave it blank, it defaults to 10. As an example, for a NN with 4 inputs, one output with logistic, what would I need to input in the neurons count box to get it (screenshot no 2)? ![image](https://user-images.githubusercontent.com/32625394/152094575-c5fffdb2-00d1-4d7f-930a-6de602b94b66.png) ![image](https://user-images.githubusercontent.com/32625394/152094061-2da1ebca-4ec8-4981-ab70-98c2e32656cf.png) [1] Orange Data Mining - Neural Network Widget https://orangedatamining.com/widget-catalog/model/neuralnetwork/ Env: Windows + Orange 3.31 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `Orange/widgets/model/owneuralnetwork.py` Content: ``` 1 from functools import partial 2 import copy 3 import logging 4 import re 5 import concurrent.futures 6 from itertools import chain 7 8 import numpy as np 9 10 from AnyQt.QtWidgets import QFormLayout, QLabel 11 from AnyQt.QtCore import Qt, QThread, QObject 12 from AnyQt.QtCore import pyqtSlot as Slot, pyqtSignal as Signal 13 14 from Orange.data import Table 15 from Orange.modelling import NNLearner 16 from Orange.widgets import gui 17 from Orange.widgets.settings import Setting 18 from Orange.widgets.utils.owlearnerwidget import OWBaseLearner 19 20 from Orange.widgets.utils.concurrent import ThreadExecutor, FutureWatcher 21 from Orange.widgets.utils.widgetpreview import WidgetPreview 22 23 24 class Task(QObject): 25 """ 26 A class that will hold the state for an learner evaluation. 27 """ 28 done = Signal(object) 29 progressChanged = Signal(float) 30 31 future = None # type: concurrent.futures.Future 32 watcher = None # type: FutureWatcher 33 cancelled = False # type: bool 34 35 def setFuture(self, future): 36 if self.future is not None: 37 raise RuntimeError("future is already set") 38 self.future = future 39 self.watcher = FutureWatcher(future, parent=self) 40 self.watcher.done.connect(self.done) 41 42 def cancel(self): 43 """ 44 Cancel the task. 45 46 Set the `cancelled` field to True and block until the future is done. 47 """ 48 # set cancelled state 49 self.cancelled = True 50 self.future.cancel() 51 concurrent.futures.wait([self.future]) 52 53 def emitProgressUpdate(self, value): 54 self.progressChanged.emit(value) 55 56 def isInterruptionRequested(self): 57 return self.cancelled 58 59 60 class CancelTaskException(BaseException): 61 pass 62 63 64 class OWNNLearner(OWBaseLearner): 65 name = "Neural Network" 66 description = "A multi-layer perceptron (MLP) algorithm with " \ 67 "backpropagation." 
68 icon = "icons/NN.svg" 69 priority = 90 70 keywords = ["mlp"] 71 72 LEARNER = NNLearner 73 74 activation = ["identity", "logistic", "tanh", "relu"] 75 act_lbl = ["Identity", "Logistic", "tanh", "ReLu"] 76 solver = ["lbfgs", "sgd", "adam"] 77 solv_lbl = ["L-BFGS-B", "SGD", "Adam"] 78 79 learner_name = Setting("Neural Network") 80 hidden_layers_input = Setting("100,") 81 activation_index = Setting(3) 82 solver_index = Setting(2) 83 max_iterations = Setting(200) 84 alpha_index = Setting(0) 85 replicable = Setting(True) 86 settings_version = 1 87 88 alphas = list(chain([x / 10000 for x in range(1, 10)], 89 [x / 1000 for x in range(1, 10)], 90 [x / 100 for x in range(1, 10)], 91 [x / 10 for x in range(1, 10)], 92 range(1, 10), 93 range(10, 100, 5), 94 range(100, 200, 10), 95 range(100, 1001, 50))) 96 97 def add_main_layout(self): 98 # this is part of init, pylint: disable=attribute-defined-outside-init 99 form = QFormLayout() 100 form.setFieldGrowthPolicy(form.AllNonFixedFieldsGrow) 101 form.setLabelAlignment(Qt.AlignLeft) 102 gui.widgetBox(self.controlArea, True, orientation=form) 103 form.addRow( 104 "Neurons in hidden layers:", 105 gui.lineEdit( 106 None, self, "hidden_layers_input", 107 orientation=Qt.Horizontal, callback=self.settings_changed, 108 tooltip="A list of integers defining neurons. Length of list " 109 "defines the number of layers. E.g. 4, 2, 2, 3.", 110 placeholderText="e.g. 10,")) 111 form.addRow( 112 "Activation:", 113 gui.comboBox( 114 None, self, "activation_index", orientation=Qt.Horizontal, 115 label="Activation:", items=[i for i in self.act_lbl], 116 callback=self.settings_changed)) 117 118 form.addRow( 119 "Solver:", 120 gui.comboBox( 121 None, self, "solver_index", orientation=Qt.Horizontal, 122 label="Solver:", items=[i for i in self.solv_lbl], 123 callback=self.settings_changed)) 124 self.reg_label = QLabel() 125 slider = gui.hSlider( 126 None, self, "alpha_index", 127 minValue=0, maxValue=len(self.alphas) - 1, 128 callback=lambda: (self.set_alpha(), self.settings_changed()), 129 createLabel=False) 130 form.addRow(self.reg_label, slider) 131 self.set_alpha() 132 133 form.addRow( 134 "Maximal number of iterations:", 135 gui.spin( 136 None, self, "max_iterations", 10, 1000000, step=10, 137 label="Max iterations:", orientation=Qt.Horizontal, 138 alignment=Qt.AlignRight, callback=self.settings_changed)) 139 140 form.addRow( 141 gui.checkBox( 142 None, self, "replicable", label="Replicable training", 143 callback=self.settings_changed, attribute=Qt.WA_LayoutUsesWidgetRect) 144 ) 145 146 def set_alpha(self): 147 # called from init, pylint: disable=attribute-defined-outside-init 148 self.strength_C = self.alphas[self.alpha_index] 149 self.reg_label.setText("Regularization, α={}:".format(self.strength_C)) 150 151 @property 152 def alpha(self): 153 return self.alphas[self.alpha_index] 154 155 def setup_layout(self): 156 # this is part of init, pylint: disable=attribute-defined-outside-init 157 super().setup_layout() 158 159 self._task = None # type: Optional[Task] 160 self._executor = ThreadExecutor() 161 162 # just a test cancel button 163 b = gui.button(self.apply_button, self, "Cancel", 164 callback=self.cancel, addToLayout=False) 165 self.apply_button.layout().insertStretch(0, 100) 166 self.apply_button.layout().insertWidget(0, b) 167 168 def create_learner(self): 169 return self.LEARNER( 170 hidden_layer_sizes=self.get_hidden_layers(), 171 activation=self.activation[self.activation_index], 172 solver=self.solver[self.solver_index], 173 alpha=self.alpha, 174 
random_state=1 if self.replicable else None, 175 max_iter=self.max_iterations, 176 preprocessors=self.preprocessors) 177 178 def get_learner_parameters(self): 179 return (("Hidden layers", ', '.join(map(str, self.get_hidden_layers()))), 180 ("Activation", self.act_lbl[self.activation_index]), 181 ("Solver", self.solv_lbl[self.solver_index]), 182 ("Alpha", self.alpha), 183 ("Max iterations", self.max_iterations), 184 ("Replicable training", self.replicable)) 185 186 def get_hidden_layers(self): 187 layers = tuple(map(int, re.findall(r'\d+', self.hidden_layers_input))) 188 if not layers: 189 layers = (10,) 190 self.hidden_layers_input = "10," 191 return layers 192 193 def update_model(self): 194 self.show_fitting_failed(None) 195 self.model = None 196 if self.check_data(): 197 self.__update() 198 else: 199 self.Outputs.model.send(self.model) 200 201 @Slot(float) 202 def setProgressValue(self, value): 203 assert self.thread() is QThread.currentThread() 204 self.progressBarSet(value) 205 206 def __update(self): 207 if self._task is not None: 208 # First make sure any pending tasks are cancelled. 209 self.cancel() 210 assert self._task is None 211 212 max_iter = self.learner.kwargs["max_iter"] 213 214 # Setup the task state 215 task = Task() 216 lastemitted = 0. 217 218 def callback(iteration): 219 nonlocal task 220 nonlocal lastemitted 221 if task.isInterruptionRequested(): 222 raise CancelTaskException() 223 progress = round(iteration / max_iter * 100) 224 if progress != lastemitted: 225 task.emitProgressUpdate(progress) 226 lastemitted = progress 227 228 # copy to set the callback so that the learner output is not modified 229 # (currently we can not pass callbacks to learners __call__) 230 learner = copy.copy(self.learner) 231 learner.callback = callback 232 233 def build_model(data, learner): 234 try: 235 return learner(data) 236 except CancelTaskException: 237 return None 238 239 build_model_func = partial(build_model, self.data, learner) 240 241 task.setFuture(self._executor.submit(build_model_func)) 242 task.done.connect(self._task_finished) 243 task.progressChanged.connect(self.setProgressValue) 244 245 # set in setup_layout; pylint: disable=attribute-defined-outside-init 246 self._task = task 247 248 self.progressBarInit() 249 self.setBlocking(True) 250 251 @Slot(concurrent.futures.Future) 252 def _task_finished(self, f): 253 """ 254 Parameters 255 ---------- 256 f : Future 257 The future instance holding the built model 258 """ 259 assert self.thread() is QThread.currentThread() 260 assert self._task is not None 261 assert self._task.future is f 262 assert f.done() 263 self._task.deleteLater() 264 self._task = None # pylint: disable=attribute-defined-outside-init 265 self.setBlocking(False) 266 self.progressBarFinished() 267 268 try: 269 self.model = f.result() 270 except Exception as ex: # pylint: disable=broad-except 271 # Log the exception with a traceback 272 log = logging.getLogger() 273 log.exception(__name__, exc_info=True) 274 self.model = None 275 self.show_fitting_failed(ex) 276 else: 277 self.model.name = self.learner_name 278 self.model.instances = self.data 279 self.model.skl_model.orange_callback = None # remove unpicklable callback 280 self.Outputs.model.send(self.model) 281 282 def cancel(self): 283 """ 284 Cancel the current task (if any). 
285 """ 286 if self._task is not None: 287 self._task.cancel() 288 assert self._task.future.done() 289 # disconnect from the task 290 self._task.done.disconnect(self._task_finished) 291 self._task.progressChanged.disconnect(self.setProgressValue) 292 self._task.deleteLater() 293 self._task = None # pylint: disable=attribute-defined-outside-init 294 295 self.progressBarFinished() 296 self.setBlocking(False) 297 298 def onDeleteWidget(self): 299 self.cancel() 300 super().onDeleteWidget() 301 302 @classmethod 303 def migrate_settings(cls, settings, version): 304 if not version: 305 alpha = settings.pop("alpha", None) 306 if alpha is not None: 307 settings["alpha_index"] = \ 308 np.argmin(np.abs(np.array(cls.alphas) - alpha)) 309 310 311 if __name__ == "__main__": # pragma: no cover 312 WidgetPreview(OWNNLearner).run(Table("iris")) 313 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/Orange/widgets/model/owneuralnetwork.py b/Orange/widgets/model/owneuralnetwork.py --- a/Orange/widgets/model/owneuralnetwork.py +++ b/Orange/widgets/model/owneuralnetwork.py @@ -14,6 +14,7 @@ from Orange.data import Table from Orange.modelling import NNLearner from Orange.widgets import gui +from Orange.widgets.widget import Msg from Orange.widgets.settings import Setting from Orange.widgets.utils.owlearnerwidget import OWBaseLearner @@ -81,11 +82,11 @@ activation_index = Setting(3) solver_index = Setting(2) max_iterations = Setting(200) - alpha_index = Setting(0) + alpha_index = Setting(1) replicable = Setting(True) - settings_version = 1 + settings_version = 2 - alphas = list(chain([x / 10000 for x in range(1, 10)], + alphas = list(chain([0], [x / 10000 for x in range(1, 10)], [x / 1000 for x in range(1, 10)], [x / 100 for x in range(1, 10)], [x / 10 for x in range(1, 10)], @@ -94,6 +95,11 @@ range(100, 200, 10), range(100, 1001, 50))) + class Warning(OWBaseLearner.Warning): + no_layers = Msg("ANN without hidden layers is equivalent to logistic " + "regression with worse fitting.\nWe recommend using " + "logistic regression.") + def add_main_layout(self): # this is part of init, pylint: disable=attribute-defined-outside-init form = QFormLayout() @@ -184,10 +190,10 @@ ("Replicable training", self.replicable)) def get_hidden_layers(self): + self.Warning.no_layers.clear() layers = tuple(map(int, re.findall(r'\d+', self.hidden_layers_input))) if not layers: - layers = (10,) - self.hidden_layers_input = "10," + self.Warning.no_layers() return layers def update_model(self): @@ -306,6 +312,8 @@ if alpha is not None: settings["alpha_index"] = \ np.argmin(np.abs(np.array(cls.alphas) - alpha)) + elif version < 2: + settings["alpha_index"] = settings.get("alpha_index", 0) + 1 if __name__ == "__main__": # pragma: no cover
{"golden_diff": "diff --git a/Orange/widgets/model/owneuralnetwork.py b/Orange/widgets/model/owneuralnetwork.py\n--- a/Orange/widgets/model/owneuralnetwork.py\n+++ b/Orange/widgets/model/owneuralnetwork.py\n@@ -14,6 +14,7 @@\n from Orange.data import Table\n from Orange.modelling import NNLearner\n from Orange.widgets import gui\n+from Orange.widgets.widget import Msg\n from Orange.widgets.settings import Setting\n from Orange.widgets.utils.owlearnerwidget import OWBaseLearner\n \n@@ -81,11 +82,11 @@\n activation_index = Setting(3)\n solver_index = Setting(2)\n max_iterations = Setting(200)\n- alpha_index = Setting(0)\n+ alpha_index = Setting(1)\n replicable = Setting(True)\n- settings_version = 1\n+ settings_version = 2\n \n- alphas = list(chain([x / 10000 for x in range(1, 10)],\n+ alphas = list(chain([0], [x / 10000 for x in range(1, 10)],\n [x / 1000 for x in range(1, 10)],\n [x / 100 for x in range(1, 10)],\n [x / 10 for x in range(1, 10)],\n@@ -94,6 +95,11 @@\n range(100, 200, 10),\n range(100, 1001, 50)))\n \n+ class Warning(OWBaseLearner.Warning):\n+ no_layers = Msg(\"ANN without hidden layers is equivalent to logistic \"\n+ \"regression with worse fitting.\\nWe recommend using \"\n+ \"logistic regression.\")\n+\n def add_main_layout(self):\n # this is part of init, pylint: disable=attribute-defined-outside-init\n form = QFormLayout()\n@@ -184,10 +190,10 @@\n (\"Replicable training\", self.replicable))\n \n def get_hidden_layers(self):\n+ self.Warning.no_layers.clear()\n layers = tuple(map(int, re.findall(r'\\d+', self.hidden_layers_input)))\n if not layers:\n- layers = (10,)\n- self.hidden_layers_input = \"10,\"\n+ self.Warning.no_layers()\n return layers\n \n def update_model(self):\n@@ -306,6 +312,8 @@\n if alpha is not None:\n settings[\"alpha_index\"] = \\\n np.argmin(np.abs(np.array(cls.alphas) - alpha))\n+ elif version < 2:\n+ settings[\"alpha_index\"] = settings.get(\"alpha_index\", 0) + 1\n \n \n if __name__ == \"__main__\": # pragma: no cover\n", "issue": "FR: Single-Layer Neural Network possible?\nI could be misreading NN widget help file [1], but is it possible to specify a single-layer NN - i.e. no hidden layers? If I enter 0 for the number of hidden layers neurons, it gives an error (screenshot no 1), while if I leave it blank, it defaults to 10. 
As an example, for a NN with 4 inputs, one output with logistic, what would I need to input in the neurons count box to get it (screenshot no 2)?\r\n\r\n![image](https://user-images.githubusercontent.com/32625394/152094575-c5fffdb2-00d1-4d7f-930a-6de602b94b66.png) ![image](https://user-images.githubusercontent.com/32625394/152094061-2da1ebca-4ec8-4981-ab70-98c2e32656cf.png)\r\n\r\n[1] Orange Data Mining - Neural Network Widget\r\nhttps://orangedatamining.com/widget-catalog/model/neuralnetwork/\r\n\r\nEnv: Windows + Orange 3.31\n", "before_files": [{"content": "from functools import partial\nimport copy\nimport logging\nimport re\nimport concurrent.futures\nfrom itertools import chain\n\nimport numpy as np\n\nfrom AnyQt.QtWidgets import QFormLayout, QLabel\nfrom AnyQt.QtCore import Qt, QThread, QObject\nfrom AnyQt.QtCore import pyqtSlot as Slot, pyqtSignal as Signal\n\nfrom Orange.data import Table\nfrom Orange.modelling import NNLearner\nfrom Orange.widgets import gui\nfrom Orange.widgets.settings import Setting\nfrom Orange.widgets.utils.owlearnerwidget import OWBaseLearner\n\nfrom Orange.widgets.utils.concurrent import ThreadExecutor, FutureWatcher\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\n\n\nclass Task(QObject):\n \"\"\"\n A class that will hold the state for an learner evaluation.\n \"\"\"\n done = Signal(object)\n progressChanged = Signal(float)\n\n future = None # type: concurrent.futures.Future\n watcher = None # type: FutureWatcher\n cancelled = False # type: bool\n\n def setFuture(self, future):\n if self.future is not None:\n raise RuntimeError(\"future is already set\")\n self.future = future\n self.watcher = FutureWatcher(future, parent=self)\n self.watcher.done.connect(self.done)\n\n def cancel(self):\n \"\"\"\n Cancel the task.\n\n Set the `cancelled` field to True and block until the future is done.\n \"\"\"\n # set cancelled state\n self.cancelled = True\n self.future.cancel()\n concurrent.futures.wait([self.future])\n\n def emitProgressUpdate(self, value):\n self.progressChanged.emit(value)\n\n def isInterruptionRequested(self):\n return self.cancelled\n\n\nclass CancelTaskException(BaseException):\n pass\n\n\nclass OWNNLearner(OWBaseLearner):\n name = \"Neural Network\"\n description = \"A multi-layer perceptron (MLP) algorithm with \" \\\n \"backpropagation.\"\n icon = \"icons/NN.svg\"\n priority = 90\n keywords = [\"mlp\"]\n\n LEARNER = NNLearner\n\n activation = [\"identity\", \"logistic\", \"tanh\", \"relu\"]\n act_lbl = [\"Identity\", \"Logistic\", \"tanh\", \"ReLu\"]\n solver = [\"lbfgs\", \"sgd\", \"adam\"]\n solv_lbl = [\"L-BFGS-B\", \"SGD\", \"Adam\"]\n\n learner_name = Setting(\"Neural Network\")\n hidden_layers_input = Setting(\"100,\")\n activation_index = Setting(3)\n solver_index = Setting(2)\n max_iterations = Setting(200)\n alpha_index = Setting(0)\n replicable = Setting(True)\n settings_version = 1\n\n alphas = list(chain([x / 10000 for x in range(1, 10)],\n [x / 1000 for x in range(1, 10)],\n [x / 100 for x in range(1, 10)],\n [x / 10 for x in range(1, 10)],\n range(1, 10),\n range(10, 100, 5),\n range(100, 200, 10),\n range(100, 1001, 50)))\n\n def add_main_layout(self):\n # this is part of init, pylint: disable=attribute-defined-outside-init\n form = QFormLayout()\n form.setFieldGrowthPolicy(form.AllNonFixedFieldsGrow)\n form.setLabelAlignment(Qt.AlignLeft)\n gui.widgetBox(self.controlArea, True, orientation=form)\n form.addRow(\n \"Neurons in hidden layers:\",\n gui.lineEdit(\n None, self, \"hidden_layers_input\",\n 
orientation=Qt.Horizontal, callback=self.settings_changed,\n tooltip=\"A list of integers defining neurons. Length of list \"\n \"defines the number of layers. E.g. 4, 2, 2, 3.\",\n placeholderText=\"e.g. 10,\"))\n form.addRow(\n \"Activation:\",\n gui.comboBox(\n None, self, \"activation_index\", orientation=Qt.Horizontal,\n label=\"Activation:\", items=[i for i in self.act_lbl],\n callback=self.settings_changed))\n\n form.addRow(\n \"Solver:\",\n gui.comboBox(\n None, self, \"solver_index\", orientation=Qt.Horizontal,\n label=\"Solver:\", items=[i for i in self.solv_lbl],\n callback=self.settings_changed))\n self.reg_label = QLabel()\n slider = gui.hSlider(\n None, self, \"alpha_index\",\n minValue=0, maxValue=len(self.alphas) - 1,\n callback=lambda: (self.set_alpha(), self.settings_changed()),\n createLabel=False)\n form.addRow(self.reg_label, slider)\n self.set_alpha()\n\n form.addRow(\n \"Maximal number of iterations:\",\n gui.spin(\n None, self, \"max_iterations\", 10, 1000000, step=10,\n label=\"Max iterations:\", orientation=Qt.Horizontal,\n alignment=Qt.AlignRight, callback=self.settings_changed))\n\n form.addRow(\n gui.checkBox(\n None, self, \"replicable\", label=\"Replicable training\",\n callback=self.settings_changed, attribute=Qt.WA_LayoutUsesWidgetRect)\n )\n\n def set_alpha(self):\n # called from init, pylint: disable=attribute-defined-outside-init\n self.strength_C = self.alphas[self.alpha_index]\n self.reg_label.setText(\"Regularization, \u03b1={}:\".format(self.strength_C))\n\n @property\n def alpha(self):\n return self.alphas[self.alpha_index]\n\n def setup_layout(self):\n # this is part of init, pylint: disable=attribute-defined-outside-init\n super().setup_layout()\n\n self._task = None # type: Optional[Task]\n self._executor = ThreadExecutor()\n\n # just a test cancel button\n b = gui.button(self.apply_button, self, \"Cancel\",\n callback=self.cancel, addToLayout=False)\n self.apply_button.layout().insertStretch(0, 100)\n self.apply_button.layout().insertWidget(0, b)\n\n def create_learner(self):\n return self.LEARNER(\n hidden_layer_sizes=self.get_hidden_layers(),\n activation=self.activation[self.activation_index],\n solver=self.solver[self.solver_index],\n alpha=self.alpha,\n random_state=1 if self.replicable else None,\n max_iter=self.max_iterations,\n preprocessors=self.preprocessors)\n\n def get_learner_parameters(self):\n return ((\"Hidden layers\", ', '.join(map(str, self.get_hidden_layers()))),\n (\"Activation\", self.act_lbl[self.activation_index]),\n (\"Solver\", self.solv_lbl[self.solver_index]),\n (\"Alpha\", self.alpha),\n (\"Max iterations\", self.max_iterations),\n (\"Replicable training\", self.replicable))\n\n def get_hidden_layers(self):\n layers = tuple(map(int, re.findall(r'\\d+', self.hidden_layers_input)))\n if not layers:\n layers = (10,)\n self.hidden_layers_input = \"10,\"\n return layers\n\n def update_model(self):\n self.show_fitting_failed(None)\n self.model = None\n if self.check_data():\n self.__update()\n else:\n self.Outputs.model.send(self.model)\n\n @Slot(float)\n def setProgressValue(self, value):\n assert self.thread() is QThread.currentThread()\n self.progressBarSet(value)\n\n def __update(self):\n if self._task is not None:\n # First make sure any pending tasks are cancelled.\n self.cancel()\n assert self._task is None\n\n max_iter = self.learner.kwargs[\"max_iter\"]\n\n # Setup the task state\n task = Task()\n lastemitted = 0.\n\n def callback(iteration):\n nonlocal task\n nonlocal lastemitted\n if 
task.isInterruptionRequested():\n raise CancelTaskException()\n progress = round(iteration / max_iter * 100)\n if progress != lastemitted:\n task.emitProgressUpdate(progress)\n lastemitted = progress\n\n # copy to set the callback so that the learner output is not modified\n # (currently we can not pass callbacks to learners __call__)\n learner = copy.copy(self.learner)\n learner.callback = callback\n\n def build_model(data, learner):\n try:\n return learner(data)\n except CancelTaskException:\n return None\n\n build_model_func = partial(build_model, self.data, learner)\n\n task.setFuture(self._executor.submit(build_model_func))\n task.done.connect(self._task_finished)\n task.progressChanged.connect(self.setProgressValue)\n\n # set in setup_layout; pylint: disable=attribute-defined-outside-init\n self._task = task\n\n self.progressBarInit()\n self.setBlocking(True)\n\n @Slot(concurrent.futures.Future)\n def _task_finished(self, f):\n \"\"\"\n Parameters\n ----------\n f : Future\n The future instance holding the built model\n \"\"\"\n assert self.thread() is QThread.currentThread()\n assert self._task is not None\n assert self._task.future is f\n assert f.done()\n self._task.deleteLater()\n self._task = None # pylint: disable=attribute-defined-outside-init\n self.setBlocking(False)\n self.progressBarFinished()\n\n try:\n self.model = f.result()\n except Exception as ex: # pylint: disable=broad-except\n # Log the exception with a traceback\n log = logging.getLogger()\n log.exception(__name__, exc_info=True)\n self.model = None\n self.show_fitting_failed(ex)\n else:\n self.model.name = self.learner_name\n self.model.instances = self.data\n self.model.skl_model.orange_callback = None # remove unpicklable callback\n self.Outputs.model.send(self.model)\n\n def cancel(self):\n \"\"\"\n Cancel the current task (if any).\n \"\"\"\n if self._task is not None:\n self._task.cancel()\n assert self._task.future.done()\n # disconnect from the task\n self._task.done.disconnect(self._task_finished)\n self._task.progressChanged.disconnect(self.setProgressValue)\n self._task.deleteLater()\n self._task = None # pylint: disable=attribute-defined-outside-init\n\n self.progressBarFinished()\n self.setBlocking(False)\n\n def onDeleteWidget(self):\n self.cancel()\n super().onDeleteWidget()\n\n @classmethod\n def migrate_settings(cls, settings, version):\n if not version:\n alpha = settings.pop(\"alpha\", None)\n if alpha is not None:\n settings[\"alpha_index\"] = \\\n np.argmin(np.abs(np.array(cls.alphas) - alpha))\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWNNLearner).run(Table(\"iris\"))\n", "path": "Orange/widgets/model/owneuralnetwork.py"}], "after_files": [{"content": "from functools import partial\nimport copy\nimport logging\nimport re\nimport concurrent.futures\nfrom itertools import chain\n\nimport numpy as np\n\nfrom AnyQt.QtWidgets import QFormLayout, QLabel\nfrom AnyQt.QtCore import Qt, QThread, QObject\nfrom AnyQt.QtCore import pyqtSlot as Slot, pyqtSignal as Signal\n\nfrom Orange.data import Table\nfrom Orange.modelling import NNLearner\nfrom Orange.widgets import gui\nfrom Orange.widgets.widget import Msg\nfrom Orange.widgets.settings import Setting\nfrom Orange.widgets.utils.owlearnerwidget import OWBaseLearner\n\nfrom Orange.widgets.utils.concurrent import ThreadExecutor, FutureWatcher\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\n\n\nclass Task(QObject):\n \"\"\"\n A class that will hold the state for an learner evaluation.\n \"\"\"\n done = 
Signal(object)\n progressChanged = Signal(float)\n\n future = None # type: concurrent.futures.Future\n watcher = None # type: FutureWatcher\n cancelled = False # type: bool\n\n def setFuture(self, future):\n if self.future is not None:\n raise RuntimeError(\"future is already set\")\n self.future = future\n self.watcher = FutureWatcher(future, parent=self)\n self.watcher.done.connect(self.done)\n\n def cancel(self):\n \"\"\"\n Cancel the task.\n\n Set the `cancelled` field to True and block until the future is done.\n \"\"\"\n # set cancelled state\n self.cancelled = True\n self.future.cancel()\n concurrent.futures.wait([self.future])\n\n def emitProgressUpdate(self, value):\n self.progressChanged.emit(value)\n\n def isInterruptionRequested(self):\n return self.cancelled\n\n\nclass CancelTaskException(BaseException):\n pass\n\n\nclass OWNNLearner(OWBaseLearner):\n name = \"Neural Network\"\n description = \"A multi-layer perceptron (MLP) algorithm with \" \\\n \"backpropagation.\"\n icon = \"icons/NN.svg\"\n priority = 90\n keywords = [\"mlp\"]\n\n LEARNER = NNLearner\n\n activation = [\"identity\", \"logistic\", \"tanh\", \"relu\"]\n act_lbl = [\"Identity\", \"Logistic\", \"tanh\", \"ReLu\"]\n solver = [\"lbfgs\", \"sgd\", \"adam\"]\n solv_lbl = [\"L-BFGS-B\", \"SGD\", \"Adam\"]\n\n learner_name = Setting(\"Neural Network\")\n hidden_layers_input = Setting(\"100,\")\n activation_index = Setting(3)\n solver_index = Setting(2)\n max_iterations = Setting(200)\n alpha_index = Setting(1)\n replicable = Setting(True)\n settings_version = 2\n\n alphas = list(chain([0], [x / 10000 for x in range(1, 10)],\n [x / 1000 for x in range(1, 10)],\n [x / 100 for x in range(1, 10)],\n [x / 10 for x in range(1, 10)],\n range(1, 10),\n range(10, 100, 5),\n range(100, 200, 10),\n range(100, 1001, 50)))\n\n class Warning(OWBaseLearner.Warning):\n no_layers = Msg(\"ANN without hidden layers is equivalent to logistic \"\n \"regression with worse fitting.\\nWe recommend using \"\n \"logistic regression.\")\n\n def add_main_layout(self):\n # this is part of init, pylint: disable=attribute-defined-outside-init\n form = QFormLayout()\n form.setFieldGrowthPolicy(form.AllNonFixedFieldsGrow)\n form.setLabelAlignment(Qt.AlignLeft)\n gui.widgetBox(self.controlArea, True, orientation=form)\n form.addRow(\n \"Neurons in hidden layers:\",\n gui.lineEdit(\n None, self, \"hidden_layers_input\",\n orientation=Qt.Horizontal, callback=self.settings_changed,\n tooltip=\"A list of integers defining neurons. Length of list \"\n \"defines the number of layers. E.g. 4, 2, 2, 3.\",\n placeholderText=\"e.g. 
10,\"))\n form.addRow(\n \"Activation:\",\n gui.comboBox(\n None, self, \"activation_index\", orientation=Qt.Horizontal,\n label=\"Activation:\", items=[i for i in self.act_lbl],\n callback=self.settings_changed))\n\n form.addRow(\n \"Solver:\",\n gui.comboBox(\n None, self, \"solver_index\", orientation=Qt.Horizontal,\n label=\"Solver:\", items=[i for i in self.solv_lbl],\n callback=self.settings_changed))\n self.reg_label = QLabel()\n slider = gui.hSlider(\n None, self, \"alpha_index\",\n minValue=0, maxValue=len(self.alphas) - 1,\n callback=lambda: (self.set_alpha(), self.settings_changed()),\n createLabel=False)\n form.addRow(self.reg_label, slider)\n self.set_alpha()\n\n form.addRow(\n \"Maximal number of iterations:\",\n gui.spin(\n None, self, \"max_iterations\", 10, 1000000, step=10,\n label=\"Max iterations:\", orientation=Qt.Horizontal,\n alignment=Qt.AlignRight, callback=self.settings_changed))\n\n form.addRow(\n gui.checkBox(\n None, self, \"replicable\", label=\"Replicable training\",\n callback=self.settings_changed, attribute=Qt.WA_LayoutUsesWidgetRect)\n )\n\n def set_alpha(self):\n # called from init, pylint: disable=attribute-defined-outside-init\n self.strength_C = self.alphas[self.alpha_index]\n self.reg_label.setText(\"Regularization, \u03b1={}:\".format(self.strength_C))\n\n @property\n def alpha(self):\n return self.alphas[self.alpha_index]\n\n def setup_layout(self):\n # this is part of init, pylint: disable=attribute-defined-outside-init\n super().setup_layout()\n\n self._task = None # type: Optional[Task]\n self._executor = ThreadExecutor()\n\n # just a test cancel button\n b = gui.button(self.apply_button, self, \"Cancel\",\n callback=self.cancel, addToLayout=False)\n self.apply_button.layout().insertStretch(0, 100)\n self.apply_button.layout().insertWidget(0, b)\n\n def create_learner(self):\n return self.LEARNER(\n hidden_layer_sizes=self.get_hidden_layers(),\n activation=self.activation[self.activation_index],\n solver=self.solver[self.solver_index],\n alpha=self.alpha,\n random_state=1 if self.replicable else None,\n max_iter=self.max_iterations,\n preprocessors=self.preprocessors)\n\n def get_learner_parameters(self):\n return ((\"Hidden layers\", ', '.join(map(str, self.get_hidden_layers()))),\n (\"Activation\", self.act_lbl[self.activation_index]),\n (\"Solver\", self.solv_lbl[self.solver_index]),\n (\"Alpha\", self.alpha),\n (\"Max iterations\", self.max_iterations),\n (\"Replicable training\", self.replicable))\n\n def get_hidden_layers(self):\n self.Warning.no_layers.clear()\n layers = tuple(map(int, re.findall(r'\\d+', self.hidden_layers_input)))\n if not layers:\n self.Warning.no_layers()\n return layers\n\n def update_model(self):\n self.show_fitting_failed(None)\n self.model = None\n if self.check_data():\n self.__update()\n else:\n self.Outputs.model.send(self.model)\n\n @Slot(float)\n def setProgressValue(self, value):\n assert self.thread() is QThread.currentThread()\n self.progressBarSet(value)\n\n def __update(self):\n if self._task is not None:\n # First make sure any pending tasks are cancelled.\n self.cancel()\n assert self._task is None\n\n max_iter = self.learner.kwargs[\"max_iter\"]\n\n # Setup the task state\n task = Task()\n lastemitted = 0.\n\n def callback(iteration):\n nonlocal task\n nonlocal lastemitted\n if task.isInterruptionRequested():\n raise CancelTaskException()\n progress = round(iteration / max_iter * 100)\n if progress != lastemitted:\n task.emitProgressUpdate(progress)\n lastemitted = progress\n\n # copy to set the 
callback so that the learner output is not modified\n # (currently we can not pass callbacks to learners __call__)\n learner = copy.copy(self.learner)\n learner.callback = callback\n\n def build_model(data, learner):\n try:\n return learner(data)\n except CancelTaskException:\n return None\n\n build_model_func = partial(build_model, self.data, learner)\n\n task.setFuture(self._executor.submit(build_model_func))\n task.done.connect(self._task_finished)\n task.progressChanged.connect(self.setProgressValue)\n\n # set in setup_layout; pylint: disable=attribute-defined-outside-init\n self._task = task\n\n self.progressBarInit()\n self.setBlocking(True)\n\n @Slot(concurrent.futures.Future)\n def _task_finished(self, f):\n \"\"\"\n Parameters\n ----------\n f : Future\n The future instance holding the built model\n \"\"\"\n assert self.thread() is QThread.currentThread()\n assert self._task is not None\n assert self._task.future is f\n assert f.done()\n self._task.deleteLater()\n self._task = None # pylint: disable=attribute-defined-outside-init\n self.setBlocking(False)\n self.progressBarFinished()\n\n try:\n self.model = f.result()\n except Exception as ex: # pylint: disable=broad-except\n # Log the exception with a traceback\n log = logging.getLogger()\n log.exception(__name__, exc_info=True)\n self.model = None\n self.show_fitting_failed(ex)\n else:\n self.model.name = self.learner_name\n self.model.instances = self.data\n self.model.skl_model.orange_callback = None # remove unpicklable callback\n self.Outputs.model.send(self.model)\n\n def cancel(self):\n \"\"\"\n Cancel the current task (if any).\n \"\"\"\n if self._task is not None:\n self._task.cancel()\n assert self._task.future.done()\n # disconnect from the task\n self._task.done.disconnect(self._task_finished)\n self._task.progressChanged.disconnect(self.setProgressValue)\n self._task.deleteLater()\n self._task = None # pylint: disable=attribute-defined-outside-init\n\n self.progressBarFinished()\n self.setBlocking(False)\n\n def onDeleteWidget(self):\n self.cancel()\n super().onDeleteWidget()\n\n @classmethod\n def migrate_settings(cls, settings, version):\n if not version:\n alpha = settings.pop(\"alpha\", None)\n if alpha is not None:\n settings[\"alpha_index\"] = \\\n np.argmin(np.abs(np.array(cls.alphas) - alpha))\n elif version < 2:\n settings[\"alpha_index\"] = settings.get(\"alpha_index\", 0) + 1\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWNNLearner).run(Table(\"iris\"))\n", "path": "Orange/widgets/model/owneuralnetwork.py"}]}
3745
631
gh_patches_debug_25757
rasdani/github-patches
git_diff
StackStorm__st2-2870
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Cannot create action with a name that collides with a Python library We created an action called "elasticsearch.py" (with the appropriate yaml). When we try to run it we get an ST2 exception. We are able to reproduce with another action called "test.py". Our hypothesis is that if the action name is the same as a Python library, it fails to run it. This happens after migrating to 1.4.0 from 0.13.2 (it works fine in 0.13.2). Here's the stacktrace: ``` $ st2 run netflix_samples.elasticsearch app_region=us-east-1 . id: 57211d9b7279e6326f6b025f status: failed parameters: app_region: us-east-1 result: exit_code: 1 result: null stderr: "Traceback (most recent call last): File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2actions/runners/python_action_wrapper.py", line 164, in <module> obj.run() File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2actions/runners/python_action_wrapper.py", line 102, in run action = self._get_action_instance() File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2actions/runners/python_action_wrapper.py", line 116, in _get_action_instance actions_cls = action_loader.register_plugin(Action, self._file_path) File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/util/loader.py", line 163, in register_plugin (plugin_abs_file_path)) Exception: Found no classes in plugin file "/opt/stackstorm/packs/netflix_samples/actions/elasticsearch.py" matching requirements. " stdout: '' ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `st2common/st2common/util/loader.py` Content: ``` 1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more 2 # contributor license agreements. See the NOTICE file distributed with 3 # this work for additional information regarding copyright ownership. 4 # The ASF licenses this file to You under the Apache License, Version 2.0 5 # (the "License"); you may not use this file except in compliance with 6 # the License. You may obtain a copy of the License at 7 # 8 # http://www.apache.org/licenses/LICENSE-2.0 9 # 10 # Unless required by applicable law or agreed to in writing, software 11 # distributed under the License is distributed on an "AS IS" BASIS, 12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 
15 16 import importlib 17 import inspect 18 import json 19 import os 20 import sys 21 import yaml 22 23 from st2common.exceptions.plugins import IncompatiblePluginException 24 from st2common import log as logging 25 26 __all__ = [ 27 'register_plugin', 28 'register_plugin_class' 29 ] 30 31 32 LOG = logging.getLogger(__name__) 33 PYTHON_EXTENSIONS = ('.py') 34 35 36 def _register_plugin_path(plugin_dir_abs_path): 37 if not os.path.isdir(plugin_dir_abs_path): 38 raise Exception('Directory "%s" with plugins doesn\'t exist' % (plugin_dir_abs_path)) 39 40 for x in sys.path: 41 if plugin_dir_abs_path in (x, x + os.sep): 42 return 43 sys.path.append(plugin_dir_abs_path) 44 45 46 def _get_plugin_module(plugin_file_path): 47 plugin_module = os.path.basename(plugin_file_path) 48 if plugin_module.endswith(PYTHON_EXTENSIONS): 49 plugin_module = plugin_module[:plugin_module.rfind('.py')] 50 else: 51 plugin_module = None 52 return plugin_module 53 54 55 def _get_classes_in_module(module): 56 return [kls for name, kls in inspect.getmembers(module, 57 lambda member: inspect.isclass(member) and member.__module__ == module.__name__)] 58 59 60 def _get_plugin_classes(module_name): 61 return _get_classes_in_module(module_name) 62 63 64 def _get_plugin_methods(plugin_klass): 65 """ 66 Return a list of names of all the methods in the provided class. 67 68 Note: Abstract methods which are not implemented are excluded from the 69 list. 70 71 :rtype: ``list`` of ``str`` 72 """ 73 methods = inspect.getmembers(plugin_klass, inspect.ismethod) 74 75 # Exclude inherited abstract methods from the parent class 76 method_names = [] 77 for name, method in methods: 78 method_properties = method.__dict__ 79 is_abstract = method_properties.get('__isabstractmethod__', False) 80 81 if is_abstract: 82 continue 83 84 method_names.append(name) 85 return method_names 86 87 88 def _validate_methods(plugin_base_class, plugin_klass): 89 ''' 90 XXX: This is hacky but we'd like to validate the methods 91 in plugin_impl at least has all the *abstract* methods in 92 plugin_base_class. 93 ''' 94 expected_methods = plugin_base_class.__abstractmethods__ 95 plugin_methods = _get_plugin_methods(plugin_klass) 96 for method in expected_methods: 97 if method not in plugin_methods: 98 message = 'Class "%s" doesn\'t implement required "%s" method from the base class' 99 raise IncompatiblePluginException(message % (plugin_klass.__name__, method)) 100 101 102 def _register_plugin(plugin_base_class, plugin_impl): 103 _validate_methods(plugin_base_class, plugin_impl) 104 plugin_base_class.register(plugin_impl) 105 106 107 def register_plugin_class(base_class, file_path, class_name): 108 """ 109 Retrieve a register plugin class from the provided file. 110 111 This method also validate that the class implements all the abstract methods 112 from the base plugin class. 113 114 :param base_class: Base plugin class. 115 :param base_class: ``class`` 116 117 :param file_path: File absolute path to the plugin module file. 118 :type file_path: ``str`` 119 120 :param class_name: Class name of a plugin. 
121 :type class_name: ``str`` 122 """ 123 plugin_dir = os.path.dirname(os.path.realpath(file_path)) 124 _register_plugin_path(plugin_dir) 125 module_name = _get_plugin_module(file_path) 126 127 if module_name is None: 128 return None 129 130 module = importlib.import_module(module_name) 131 klass = getattr(module, class_name, None) 132 133 if not klass: 134 raise Exception('Plugin file "%s" doesn\'t expose class named "%s"' % 135 (file_path, class_name)) 136 137 _register_plugin(base_class, klass) 138 return klass 139 140 141 def register_plugin(plugin_base_class, plugin_abs_file_path): 142 registered_plugins = [] 143 plugin_dir = os.path.dirname(os.path.realpath(plugin_abs_file_path)) 144 _register_plugin_path(plugin_dir) 145 module_name = _get_plugin_module(plugin_abs_file_path) 146 if module_name is None: 147 return None 148 module = importlib.import_module(module_name) 149 klasses = _get_plugin_classes(module) 150 151 # Try registering classes in plugin file. Some may fail. 152 for klass in klasses: 153 try: 154 _register_plugin(plugin_base_class, klass) 155 registered_plugins.append(klass) 156 except Exception as e: 157 LOG.exception(e) 158 LOG.debug('Skipping class %s as it doesn\'t match specs.', klass) 159 continue 160 161 if len(registered_plugins) == 0: 162 raise Exception('Found no classes in plugin file "%s" matching requirements.' % 163 (plugin_abs_file_path)) 164 165 return registered_plugins 166 167 168 ALLOWED_EXTS = ['.json', '.yaml', '.yml'] 169 PARSER_FUNCS = {'.json': json.load, '.yml': yaml.safe_load, '.yaml': yaml.safe_load} 170 171 172 def load_meta_file(file_path): 173 if not os.path.isfile(file_path): 174 raise Exception('File "%s" does not exist.' % file_path) 175 176 file_name, file_ext = os.path.splitext(file_path) 177 if file_ext not in ALLOWED_EXTS: 178 raise Exception('Unsupported meta type %s, file %s. Allowed: %s' % 179 (file_ext, file_path, ALLOWED_EXTS)) 180 181 with open(file_path, 'r') as f: 182 return PARSER_FUNCS[file_ext](f) 183 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/st2common/st2common/util/loader.py b/st2common/st2common/util/loader.py --- a/st2common/st2common/util/loader.py +++ b/st2common/st2common/util/loader.py @@ -13,7 +13,7 @@ # See the License for the specific language governing permissions and # limitations under the License. -import importlib +import imp import inspect import json import os @@ -127,7 +127,7 @@ if module_name is None: return None - module = importlib.import_module(module_name) + module = imp.load_source(module_name, file_path) klass = getattr(module, class_name, None) if not klass: @@ -142,10 +142,12 @@ registered_plugins = [] plugin_dir = os.path.dirname(os.path.realpath(plugin_abs_file_path)) _register_plugin_path(plugin_dir) + module_name = _get_plugin_module(plugin_abs_file_path) if module_name is None: return None - module = importlib.import_module(module_name) + + module = imp.load_source(module_name, plugin_abs_file_path) klasses = _get_plugin_classes(module) # Try registering classes in plugin file. Some may fail.
{"golden_diff": "diff --git a/st2common/st2common/util/loader.py b/st2common/st2common/util/loader.py\n--- a/st2common/st2common/util/loader.py\n+++ b/st2common/st2common/util/loader.py\n@@ -13,7 +13,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-import importlib\n+import imp\n import inspect\n import json\n import os\n@@ -127,7 +127,7 @@\n if module_name is None:\n return None\n \n- module = importlib.import_module(module_name)\n+ module = imp.load_source(module_name, file_path)\n klass = getattr(module, class_name, None)\n \n if not klass:\n@@ -142,10 +142,12 @@\n registered_plugins = []\n plugin_dir = os.path.dirname(os.path.realpath(plugin_abs_file_path))\n _register_plugin_path(plugin_dir)\n+\n module_name = _get_plugin_module(plugin_abs_file_path)\n if module_name is None:\n return None\n- module = importlib.import_module(module_name)\n+\n+ module = imp.load_source(module_name, plugin_abs_file_path)\n klasses = _get_plugin_classes(module)\n \n # Try registering classes in plugin file. Some may fail.\n", "issue": "Cannot create action with a name that collides with a Python library\nWe created an action called \"elasticsearch.py\" (with the appropriate yaml). When we try to run it we get an ST2 exception. We are able to reproduce with another action called \"test.py\". Our hypothesis is that if the action name is the same as a Python library, it fails to run it. This happens after migrating to 1.4.0 from 0.13.2 (it works fine in 0.13.2).\n\nHere's the stacktrace:\n\n```\n$ st2 run netflix_samples.elasticsearch app_region=us-east-1\n.\nid: 57211d9b7279e6326f6b025f\nstatus: failed\nparameters: \n app_region: us-east-1\nresult: \n exit_code: 1\n result: null\n stderr: \"Traceback (most recent call last):\n File \"/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2actions/runners/python_action_wrapper.py\", line 164, in <module>\n obj.run()\n File \"/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2actions/runners/python_action_wrapper.py\", line 102, in run\n action = self._get_action_instance()\n File \"/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2actions/runners/python_action_wrapper.py\", line 116, in _get_action_instance\n actions_cls = action_loader.register_plugin(Action, self._file_path)\n File \"/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/util/loader.py\", line 163, in register_plugin\n (plugin_abs_file_path))\nException: Found no classes in plugin file \"/opt/stackstorm/packs/netflix_samples/actions/elasticsearch.py\" matching requirements.\n\"\n stdout: ''\n```\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport importlib\nimport inspect\nimport json\nimport os\nimport sys\nimport yaml\n\nfrom st2common.exceptions.plugins import IncompatiblePluginException\nfrom st2common import log as logging\n\n__all__ = [\n 'register_plugin',\n 'register_plugin_class'\n]\n\n\nLOG = logging.getLogger(__name__)\nPYTHON_EXTENSIONS = ('.py')\n\n\ndef _register_plugin_path(plugin_dir_abs_path):\n if not os.path.isdir(plugin_dir_abs_path):\n raise Exception('Directory \"%s\" with plugins doesn\\'t exist' % (plugin_dir_abs_path))\n\n for x in sys.path:\n if plugin_dir_abs_path in (x, x + os.sep):\n return\n sys.path.append(plugin_dir_abs_path)\n\n\ndef _get_plugin_module(plugin_file_path):\n plugin_module = os.path.basename(plugin_file_path)\n if plugin_module.endswith(PYTHON_EXTENSIONS):\n plugin_module = plugin_module[:plugin_module.rfind('.py')]\n else:\n plugin_module = None\n return plugin_module\n\n\ndef _get_classes_in_module(module):\n return [kls for name, kls in inspect.getmembers(module,\n lambda member: inspect.isclass(member) and member.__module__ == module.__name__)]\n\n\ndef _get_plugin_classes(module_name):\n return _get_classes_in_module(module_name)\n\n\ndef _get_plugin_methods(plugin_klass):\n \"\"\"\n Return a list of names of all the methods in the provided class.\n\n Note: Abstract methods which are not implemented are excluded from the\n list.\n\n :rtype: ``list`` of ``str``\n \"\"\"\n methods = inspect.getmembers(plugin_klass, inspect.ismethod)\n\n # Exclude inherited abstract methods from the parent class\n method_names = []\n for name, method in methods:\n method_properties = method.__dict__\n is_abstract = method_properties.get('__isabstractmethod__', False)\n\n if is_abstract:\n continue\n\n method_names.append(name)\n return method_names\n\n\ndef _validate_methods(plugin_base_class, plugin_klass):\n '''\n XXX: This is hacky but we'd like to validate the methods\n in plugin_impl at least has all the *abstract* methods in\n plugin_base_class.\n '''\n expected_methods = plugin_base_class.__abstractmethods__\n plugin_methods = _get_plugin_methods(plugin_klass)\n for method in expected_methods:\n if method not in plugin_methods:\n message = 'Class \"%s\" doesn\\'t implement required \"%s\" method from the base class'\n raise IncompatiblePluginException(message % (plugin_klass.__name__, method))\n\n\ndef _register_plugin(plugin_base_class, plugin_impl):\n _validate_methods(plugin_base_class, plugin_impl)\n plugin_base_class.register(plugin_impl)\n\n\ndef register_plugin_class(base_class, file_path, class_name):\n \"\"\"\n Retrieve a register plugin class from the provided file.\n\n This method also validate that the class implements all the abstract methods\n from the base plugin class.\n\n :param base_class: Base plugin class.\n :param base_class: ``class``\n\n :param file_path: File absolute path to the plugin module file.\n :type file_path: ``str``\n\n :param class_name: Class name of a plugin.\n :type class_name: ``str``\n \"\"\"\n plugin_dir = os.path.dirname(os.path.realpath(file_path))\n _register_plugin_path(plugin_dir)\n module_name = _get_plugin_module(file_path)\n\n 
if module_name is None:\n return None\n\n module = importlib.import_module(module_name)\n klass = getattr(module, class_name, None)\n\n if not klass:\n raise Exception('Plugin file \"%s\" doesn\\'t expose class named \"%s\"' %\n (file_path, class_name))\n\n _register_plugin(base_class, klass)\n return klass\n\n\ndef register_plugin(plugin_base_class, plugin_abs_file_path):\n registered_plugins = []\n plugin_dir = os.path.dirname(os.path.realpath(plugin_abs_file_path))\n _register_plugin_path(plugin_dir)\n module_name = _get_plugin_module(plugin_abs_file_path)\n if module_name is None:\n return None\n module = importlib.import_module(module_name)\n klasses = _get_plugin_classes(module)\n\n # Try registering classes in plugin file. Some may fail.\n for klass in klasses:\n try:\n _register_plugin(plugin_base_class, klass)\n registered_plugins.append(klass)\n except Exception as e:\n LOG.exception(e)\n LOG.debug('Skipping class %s as it doesn\\'t match specs.', klass)\n continue\n\n if len(registered_plugins) == 0:\n raise Exception('Found no classes in plugin file \"%s\" matching requirements.' %\n (plugin_abs_file_path))\n\n return registered_plugins\n\n\nALLOWED_EXTS = ['.json', '.yaml', '.yml']\nPARSER_FUNCS = {'.json': json.load, '.yml': yaml.safe_load, '.yaml': yaml.safe_load}\n\n\ndef load_meta_file(file_path):\n if not os.path.isfile(file_path):\n raise Exception('File \"%s\" does not exist.' % file_path)\n\n file_name, file_ext = os.path.splitext(file_path)\n if file_ext not in ALLOWED_EXTS:\n raise Exception('Unsupported meta type %s, file %s. Allowed: %s' %\n (file_ext, file_path, ALLOWED_EXTS))\n\n with open(file_path, 'r') as f:\n return PARSER_FUNCS[file_ext](f)\n", "path": "st2common/st2common/util/loader.py"}], "after_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport imp\nimport inspect\nimport json\nimport os\nimport sys\nimport yaml\n\nfrom st2common.exceptions.plugins import IncompatiblePluginException\nfrom st2common import log as logging\n\n__all__ = [\n 'register_plugin',\n 'register_plugin_class'\n]\n\n\nLOG = logging.getLogger(__name__)\nPYTHON_EXTENSIONS = ('.py')\n\n\ndef _register_plugin_path(plugin_dir_abs_path):\n if not os.path.isdir(plugin_dir_abs_path):\n raise Exception('Directory \"%s\" with plugins doesn\\'t exist' % (plugin_dir_abs_path))\n\n for x in sys.path:\n if plugin_dir_abs_path in (x, x + os.sep):\n return\n sys.path.append(plugin_dir_abs_path)\n\n\ndef _get_plugin_module(plugin_file_path):\n plugin_module = os.path.basename(plugin_file_path)\n if plugin_module.endswith(PYTHON_EXTENSIONS):\n plugin_module = plugin_module[:plugin_module.rfind('.py')]\n else:\n plugin_module = None\n return plugin_module\n\n\ndef _get_classes_in_module(module):\n return [kls for name, kls in inspect.getmembers(module,\n lambda member: inspect.isclass(member) and member.__module__ == module.__name__)]\n\n\ndef _get_plugin_classes(module_name):\n return _get_classes_in_module(module_name)\n\n\ndef _get_plugin_methods(plugin_klass):\n \"\"\"\n Return a list of names of all the methods in the provided class.\n\n Note: Abstract methods which are not implemented are excluded from the\n list.\n\n :rtype: ``list`` of ``str``\n \"\"\"\n methods = inspect.getmembers(plugin_klass, inspect.ismethod)\n\n # Exclude inherited abstract methods from the parent class\n method_names = []\n for name, method in methods:\n method_properties = method.__dict__\n is_abstract = method_properties.get('__isabstractmethod__', False)\n\n if is_abstract:\n continue\n\n method_names.append(name)\n return method_names\n\n\ndef _validate_methods(plugin_base_class, plugin_klass):\n '''\n XXX: This is hacky but we'd like to validate the methods\n in plugin_impl at least has all the *abstract* methods in\n plugin_base_class.\n '''\n expected_methods = plugin_base_class.__abstractmethods__\n plugin_methods = _get_plugin_methods(plugin_klass)\n for method in expected_methods:\n if method not in plugin_methods:\n message = 'Class \"%s\" doesn\\'t implement required \"%s\" method from the base class'\n raise IncompatiblePluginException(message % (plugin_klass.__name__, method))\n\n\ndef _register_plugin(plugin_base_class, plugin_impl):\n _validate_methods(plugin_base_class, plugin_impl)\n plugin_base_class.register(plugin_impl)\n\n\ndef register_plugin_class(base_class, file_path, class_name):\n \"\"\"\n Retrieve a register plugin class from the provided file.\n\n This method also validate that the class implements all the abstract methods\n from the base plugin class.\n\n :param base_class: Base plugin class.\n :param base_class: ``class``\n\n :param file_path: File absolute path to the plugin module file.\n :type file_path: ``str``\n\n :param class_name: Class name of a plugin.\n :type class_name: ``str``\n \"\"\"\n plugin_dir = os.path.dirname(os.path.realpath(file_path))\n _register_plugin_path(plugin_dir)\n module_name = _get_plugin_module(file_path)\n\n if 
module_name is None:\n return None\n\n module = imp.load_source(module_name, file_path)\n klass = getattr(module, class_name, None)\n\n if not klass:\n raise Exception('Plugin file \"%s\" doesn\\'t expose class named \"%s\"' %\n (file_path, class_name))\n\n _register_plugin(base_class, klass)\n return klass\n\n\ndef register_plugin(plugin_base_class, plugin_abs_file_path):\n registered_plugins = []\n plugin_dir = os.path.dirname(os.path.realpath(plugin_abs_file_path))\n _register_plugin_path(plugin_dir)\n\n module_name = _get_plugin_module(plugin_abs_file_path)\n if module_name is None:\n return None\n\n module = imp.load_source(module_name, plugin_abs_file_path)\n klasses = _get_plugin_classes(module)\n\n # Try registering classes in plugin file. Some may fail.\n for klass in klasses:\n try:\n _register_plugin(plugin_base_class, klass)\n registered_plugins.append(klass)\n except Exception as e:\n LOG.exception(e)\n LOG.debug('Skipping class %s as it doesn\\'t match specs.', klass)\n continue\n\n if len(registered_plugins) == 0:\n raise Exception('Found no classes in plugin file \"%s\" matching requirements.' %\n (plugin_abs_file_path))\n\n return registered_plugins\n\n\nALLOWED_EXTS = ['.json', '.yaml', '.yml']\nPARSER_FUNCS = {'.json': json.load, '.yml': yaml.safe_load, '.yaml': yaml.safe_load}\n\n\ndef load_meta_file(file_path):\n if not os.path.isfile(file_path):\n raise Exception('File \"%s\" does not exist.' % file_path)\n\n file_name, file_ext = os.path.splitext(file_path)\n if file_ext not in ALLOWED_EXTS:\n raise Exception('Unsupported meta type %s, file %s. Allowed: %s' %\n (file_ext, file_path, ALLOWED_EXTS))\n\n with open(file_path, 'r') as f:\n return PARSER_FUNCS[file_ext](f)\n", "path": "st2common/st2common/util/loader.py"}]}
2,460
282
gh_patches_debug_13345
rasdani/github-patches
git_diff
paperless-ngx__paperless-ngx-1645
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [BUG] 1.9 OCR output mode "skip_noarchive" is not valid ### Description With version 1.9 the following env trow errors: ```ini PAPERLESS_OCR_MODE=skip_noarchive ``` ### Steps to reproduce 1. set above docker env variable 2. start container ### Webserver logs ```bash Paperless-ngx docker container starting... Mapping UID and GID for paperless:paperless to 50014:50014 Creating directory /tmp/paperless Adjusting permissions of paperless files. This may take a while. Waiting for PostgreSQL to start... Waiting for Redis... Connected to Redis broker. Apply database migrations... SystemCheckError: System check identified some issues: ERRORS: ?: OCR output mode "skip_noarchive" is not valid ``` ### Paperless-ngx version 1.9 ### Host OS Ubuntu 22.04 x86_64 ### Installation method Docker - official image ### Browser _No response_ ### Configuration changes _No response_ ### Other _No response_ --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/paperless/checks.py` Content: ``` 1 import grp 2 import os 3 import pwd 4 import shutil 5 import stat 6 7 from django.conf import settings 8 from django.core.checks import Error 9 from django.core.checks import register 10 from django.core.checks import Warning 11 12 exists_message = "{} is set but doesn't exist." 13 exists_hint = "Create a directory at {}" 14 writeable_message = "{} is not writeable" 15 writeable_hint = ( 16 "Set the permissions of {} to be writeable by the user running the " 17 "Paperless services" 18 ) 19 20 21 def path_check(var, directory): 22 messages = [] 23 if directory: 24 if not os.path.isdir(directory): 25 messages.append( 26 Error(exists_message.format(var), exists_hint.format(directory)), 27 ) 28 else: 29 test_file = os.path.join( 30 directory, 31 f"__paperless_write_test_{os.getpid()}__", 32 ) 33 try: 34 with open(test_file, "w"): 35 pass 36 except PermissionError: 37 dir_stat = os.stat(directory) 38 dir_mode = stat.filemode(dir_stat.st_mode) 39 dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name 40 dir_group = grp.getgrgid(dir_stat.st_gid).gr_name 41 messages.append( 42 Error( 43 writeable_message.format(var), 44 writeable_hint.format( 45 f"\n{dir_mode} {dir_owner} {dir_group} " f"{directory}\n", 46 ), 47 ), 48 ) 49 finally: 50 if os.path.isfile(test_file): 51 os.remove(test_file) 52 53 return messages 54 55 56 @register() 57 def paths_check(app_configs, **kwargs): 58 """ 59 Check the various paths for existence, readability and writeability 60 """ 61 62 return ( 63 path_check("PAPERLESS_DATA_DIR", settings.DATA_DIR) 64 + path_check("PAPERLESS_TRASH_DIR", settings.TRASH_DIR) 65 + path_check("PAPERLESS_MEDIA_ROOT", settings.MEDIA_ROOT) 66 + path_check("PAPERLESS_CONSUMPTION_DIR", settings.CONSUMPTION_DIR) 67 ) 68 69 70 @register() 71 def binaries_check(app_configs, **kwargs): 72 """ 73 Paperless requires the existence of a few binaries, so we do some checks 74 for those here. 75 """ 76 77 error = "Paperless can't find {}. Without it, consumption is impossible." 78 hint = "Either it's not in your ${PATH} or it's not installed." 
79 80 binaries = (settings.CONVERT_BINARY, "tesseract") 81 82 check_messages = [] 83 for binary in binaries: 84 if shutil.which(binary) is None: 85 check_messages.append(Warning(error.format(binary), hint)) 86 87 return check_messages 88 89 90 @register() 91 def debug_mode_check(app_configs, **kwargs): 92 if settings.DEBUG: 93 return [ 94 Warning( 95 "DEBUG mode is enabled. Disable Debug mode. This is a serious " 96 "security issue, since it puts security overides in place which " 97 "are meant to be only used during development. This " 98 "also means that paperless will tell anyone various " 99 "debugging information when something goes wrong.", 100 ), 101 ] 102 else: 103 return [] 104 105 106 @register() 107 def settings_values_check(app_configs, **kwargs): 108 """ 109 Validates at least some of the user provided settings 110 """ 111 112 def _ocrmypdf_settings_check(): 113 """ 114 Validates some of the arguments which will be provided to ocrmypdf 115 against the valid options. Use "ocrmypdf --help" to see the valid 116 inputs 117 """ 118 msgs = [] 119 if settings.OCR_OUTPUT_TYPE not in { 120 "pdfa", 121 "pdf", 122 "pdfa-1", 123 "pdfa-2", 124 "pdfa-3", 125 }: 126 msgs.append( 127 Error(f'OCR output type "{settings.OCR_OUTPUT_TYPE}" is not valid'), 128 ) 129 130 if settings.OCR_MODE not in {"force", "skip", "redo_ocr"}: 131 msgs.append(Error(f'OCR output mode "{settings.OCR_MODE}" is not valid')) 132 133 if settings.OCR_CLEAN not in {"clean", "clean_final"}: 134 msgs.append(Error(f'OCR clean mode "{settings.OCR_CLEAN}" is not valid')) 135 return msgs 136 137 def _timezone_validate(): 138 """ 139 Validates the user provided timezone is a valid timezone 140 """ 141 try: 142 import zoneinfo 143 except ImportError: # pragma: nocover 144 import backports.zoneinfo as zoneinfo 145 msgs = [] 146 if settings.TIME_ZONE not in zoneinfo.available_timezones(): 147 msgs.append( 148 Error(f'Timezone "{settings.TIME_ZONE}" is not a valid timezone'), 149 ) 150 return msgs 151 152 return _ocrmypdf_settings_check() + _timezone_validate() 153 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/paperless/checks.py b/src/paperless/checks.py --- a/src/paperless/checks.py +++ b/src/paperless/checks.py @@ -127,10 +127,10 @@ Error(f'OCR output type "{settings.OCR_OUTPUT_TYPE}" is not valid'), ) - if settings.OCR_MODE not in {"force", "skip", "redo_ocr"}: + if settings.OCR_MODE not in {"force", "skip", "redo", "skip_noarchive"}: msgs.append(Error(f'OCR output mode "{settings.OCR_MODE}" is not valid')) - if settings.OCR_CLEAN not in {"clean", "clean_final"}: + if settings.OCR_CLEAN not in {"clean", "clean-final"}: msgs.append(Error(f'OCR clean mode "{settings.OCR_CLEAN}" is not valid')) return msgs
{"golden_diff": "diff --git a/src/paperless/checks.py b/src/paperless/checks.py\n--- a/src/paperless/checks.py\n+++ b/src/paperless/checks.py\n@@ -127,10 +127,10 @@\n Error(f'OCR output type \"{settings.OCR_OUTPUT_TYPE}\" is not valid'),\n )\n \n- if settings.OCR_MODE not in {\"force\", \"skip\", \"redo_ocr\"}:\n+ if settings.OCR_MODE not in {\"force\", \"skip\", \"redo\", \"skip_noarchive\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n \n- if settings.OCR_CLEAN not in {\"clean\", \"clean_final\"}:\n+ if settings.OCR_CLEAN not in {\"clean\", \"clean-final\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n", "issue": "[BUG] 1.9 OCR output mode \"skip_noarchive\" is not valid\n### Description\n\nWith version 1.9 the following env trow errors:\r\n```ini\r\nPAPERLESS_OCR_MODE=skip_noarchive\r\n```\n\n### Steps to reproduce\n\n1. set above docker env variable\r\n2. start container\n\n### Webserver logs\n\n```bash\nPaperless-ngx docker container starting...\r\nMapping UID and GID for paperless:paperless to 50014:50014\r\nCreating directory /tmp/paperless\r\nAdjusting permissions of paperless files. This may take a while.\r\nWaiting for PostgreSQL to start...\r\nWaiting for Redis...\r\nConnected to Redis broker.\r\nApply database migrations...\r\nSystemCheckError: System check identified some issues:\r\nERRORS:\r\n?: OCR output mode \"skip_noarchive\" is not valid\n```\n\n\n### Paperless-ngx version\n\n1.9\n\n### Host OS\n\nUbuntu 22.04 x86_64\n\n### Installation method\n\nDocker - official image\n\n### Browser\n\n_No response_\n\n### Configuration changes\n\n_No response_\n\n### Other\n\n_No response_\n", "before_files": [{"content": "import grp\nimport os\nimport pwd\nimport shutil\nimport stat\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\nexists_message = \"{} is set but doesn't exist.\"\nexists_hint = \"Create a directory at {}\"\nwriteable_message = \"{} is not writeable\"\nwriteable_hint = (\n \"Set the permissions of {} to be writeable by the user running the \"\n \"Paperless services\"\n)\n\n\ndef path_check(var, directory):\n messages = []\n if directory:\n if not os.path.isdir(directory):\n messages.append(\n Error(exists_message.format(var), exists_hint.format(directory)),\n )\n else:\n test_file = os.path.join(\n directory,\n f\"__paperless_write_test_{os.getpid()}__\",\n )\n try:\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n dir_stat = os.stat(directory)\n dir_mode = stat.filemode(dir_stat.st_mode)\n dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name\n dir_group = grp.getgrgid(dir_stat.st_gid).gr_name\n messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n f\"\\n{dir_mode} {dir_owner} {dir_group} \" f\"{directory}\\n\",\n ),\n ),\n )\n finally:\n if os.path.isfile(test_file):\n os.remove(test_file)\n\n return messages\n\n\n@register()\ndef paths_check(app_configs, **kwargs):\n \"\"\"\n Check the various paths for existence, readability and writeability\n \"\"\"\n\n return (\n path_check(\"PAPERLESS_DATA_DIR\", settings.DATA_DIR)\n + path_check(\"PAPERLESS_TRASH_DIR\", settings.TRASH_DIR)\n + path_check(\"PAPERLESS_MEDIA_ROOT\", settings.MEDIA_ROOT)\n + path_check(\"PAPERLESS_CONSUMPTION_DIR\", settings.CONSUMPTION_DIR)\n )\n\n\n@register()\ndef binaries_check(app_configs, **kwargs):\n \"\"\"\n Paperless requires the existence of a few 
binaries, so we do some checks\n for those here.\n \"\"\"\n\n error = \"Paperless can't find {}. Without it, consumption is impossible.\"\n hint = \"Either it's not in your ${PATH} or it's not installed.\"\n\n binaries = (settings.CONVERT_BINARY, \"tesseract\")\n\n check_messages = []\n for binary in binaries:\n if shutil.which(binary) is None:\n check_messages.append(Warning(error.format(binary), hint))\n\n return check_messages\n\n\n@register()\ndef debug_mode_check(app_configs, **kwargs):\n if settings.DEBUG:\n return [\n Warning(\n \"DEBUG mode is enabled. Disable Debug mode. This is a serious \"\n \"security issue, since it puts security overides in place which \"\n \"are meant to be only used during development. This \"\n \"also means that paperless will tell anyone various \"\n \"debugging information when something goes wrong.\",\n ),\n ]\n else:\n return []\n\n\n@register()\ndef settings_values_check(app_configs, **kwargs):\n \"\"\"\n Validates at least some of the user provided settings\n \"\"\"\n\n def _ocrmypdf_settings_check():\n \"\"\"\n Validates some of the arguments which will be provided to ocrmypdf\n against the valid options. Use \"ocrmypdf --help\" to see the valid\n inputs\n \"\"\"\n msgs = []\n if settings.OCR_OUTPUT_TYPE not in {\n \"pdfa\",\n \"pdf\",\n \"pdfa-1\",\n \"pdfa-2\",\n \"pdfa-3\",\n }:\n msgs.append(\n Error(f'OCR output type \"{settings.OCR_OUTPUT_TYPE}\" is not valid'),\n )\n\n if settings.OCR_MODE not in {\"force\", \"skip\", \"redo_ocr\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n\n if settings.OCR_CLEAN not in {\"clean\", \"clean_final\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n\n def _timezone_validate():\n \"\"\"\n Validates the user provided timezone is a valid timezone\n \"\"\"\n try:\n import zoneinfo\n except ImportError: # pragma: nocover\n import backports.zoneinfo as zoneinfo\n msgs = []\n if settings.TIME_ZONE not in zoneinfo.available_timezones():\n msgs.append(\n Error(f'Timezone \"{settings.TIME_ZONE}\" is not a valid timezone'),\n )\n return msgs\n\n return _ocrmypdf_settings_check() + _timezone_validate()\n", "path": "src/paperless/checks.py"}], "after_files": [{"content": "import grp\nimport os\nimport pwd\nimport shutil\nimport stat\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\nexists_message = \"{} is set but doesn't exist.\"\nexists_hint = \"Create a directory at {}\"\nwriteable_message = \"{} is not writeable\"\nwriteable_hint = (\n \"Set the permissions of {} to be writeable by the user running the \"\n \"Paperless services\"\n)\n\n\ndef path_check(var, directory):\n messages = []\n if directory:\n if not os.path.isdir(directory):\n messages.append(\n Error(exists_message.format(var), exists_hint.format(directory)),\n )\n else:\n test_file = os.path.join(\n directory,\n f\"__paperless_write_test_{os.getpid()}__\",\n )\n try:\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n dir_stat = os.stat(directory)\n dir_mode = stat.filemode(dir_stat.st_mode)\n dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name\n dir_group = grp.getgrgid(dir_stat.st_gid).gr_name\n messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n f\"\\n{dir_mode} {dir_owner} {dir_group} \" f\"{directory}\\n\",\n ),\n ),\n )\n finally:\n if os.path.isfile(test_file):\n os.remove(test_file)\n\n return 
messages\n\n\n@register()\ndef paths_check(app_configs, **kwargs):\n \"\"\"\n Check the various paths for existence, readability and writeability\n \"\"\"\n\n return (\n path_check(\"PAPERLESS_DATA_DIR\", settings.DATA_DIR)\n + path_check(\"PAPERLESS_TRASH_DIR\", settings.TRASH_DIR)\n + path_check(\"PAPERLESS_MEDIA_ROOT\", settings.MEDIA_ROOT)\n + path_check(\"PAPERLESS_CONSUMPTION_DIR\", settings.CONSUMPTION_DIR)\n )\n\n\n@register()\ndef binaries_check(app_configs, **kwargs):\n \"\"\"\n Paperless requires the existence of a few binaries, so we do some checks\n for those here.\n \"\"\"\n\n error = \"Paperless can't find {}. Without it, consumption is impossible.\"\n hint = \"Either it's not in your ${PATH} or it's not installed.\"\n\n binaries = (settings.CONVERT_BINARY, \"tesseract\")\n\n check_messages = []\n for binary in binaries:\n if shutil.which(binary) is None:\n check_messages.append(Warning(error.format(binary), hint))\n\n return check_messages\n\n\n@register()\ndef debug_mode_check(app_configs, **kwargs):\n if settings.DEBUG:\n return [\n Warning(\n \"DEBUG mode is enabled. Disable Debug mode. This is a serious \"\n \"security issue, since it puts security overides in place which \"\n \"are meant to be only used during development. This \"\n \"also means that paperless will tell anyone various \"\n \"debugging information when something goes wrong.\",\n ),\n ]\n else:\n return []\n\n\n@register()\ndef settings_values_check(app_configs, **kwargs):\n \"\"\"\n Validates at least some of the user provided settings\n \"\"\"\n\n def _ocrmypdf_settings_check():\n \"\"\"\n Validates some of the arguments which will be provided to ocrmypdf\n against the valid options. Use \"ocrmypdf --help\" to see the valid\n inputs\n \"\"\"\n msgs = []\n if settings.OCR_OUTPUT_TYPE not in {\n \"pdfa\",\n \"pdf\",\n \"pdfa-1\",\n \"pdfa-2\",\n \"pdfa-3\",\n }:\n msgs.append(\n Error(f'OCR output type \"{settings.OCR_OUTPUT_TYPE}\" is not valid'),\n )\n\n if settings.OCR_MODE not in {\"force\", \"skip\", \"redo\", \"skip_noarchive\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n\n if settings.OCR_CLEAN not in {\"clean\", \"clean-final\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n\n def _timezone_validate():\n \"\"\"\n Validates the user provided timezone is a valid timezone\n \"\"\"\n try:\n import zoneinfo\n except ImportError: # pragma: nocover\n import backports.zoneinfo as zoneinfo\n msgs = []\n if settings.TIME_ZONE not in zoneinfo.available_timezones():\n msgs.append(\n Error(f'Timezone \"{settings.TIME_ZONE}\" is not a valid timezone'),\n )\n return msgs\n\n return _ocrmypdf_settings_check() + _timezone_validate()\n", "path": "src/paperless/checks.py"}]}
1,852
199
gh_patches_debug_34751
rasdani/github-patches
git_diff
tournesol-app__tournesol-1330
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [dev-end] load_public_dataset is broken Hi everyone, `load_public_dataset` seems to be broken. Current branch: `main` commit `c9c3b9dab` How to reproduce: ``` $ ./run-docker-compose.sh download --user-sampling 0.1 ⠿ Container tournesol-dev-api Started 25.9s Waiting for api to be ready... Waiting for api to be ready... Waiting for api to be ready... Waiting for api to be ready... Waiting for api to be ready... Waiting for api to be ready... Created 534 users Created 5113 video entities Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3621, in get_loc return self._engine.get_loc(casted_key) File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item KeyError: 'weight' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/backend/manage.py", line 22, in <module> main() File "/backend/manage.py", line 18, in main execute_from_command_line(sys.argv) File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 414, in run_from_argv self.execute(*args, **cmd_options) File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 460, in execute output = self.handle(*args, **options) File "/backend/tournesol/management/commands/load_public_dataset.py", line 113, in handle weight=values["weight"], File "/usr/local/lib/python3.9/site-packages/pandas/core/series.py", line 958, in __getitem__ return self._get_value(key) File "/usr/local/lib/python3.9/site-packages/pandas/core/series.py", line 1069, in _get_value loc = self.index.get_loc(label) File "/usr/local/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3623, in get_loc raise KeyError(key) from err ``` I will investigate but any help is welcome! --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `backend/tournesol/management/commands/load_public_dataset.py` Content: ``` 1 import concurrent 2 import random 3 from concurrent.futures import ThreadPoolExecutor 4 5 import pandas as pd 6 from django.conf import settings 7 from django.core.management import call_command 8 from django.core.management.base import BaseCommand 9 from django.db import transaction 10 11 from core.models import User 12 from core.models.user import EmailDomain 13 from tournesol.models import Comparison, ComparisonCriteriaScore, ContributorRating, Entity, Poll 14 from tournesol.models.poll import ALGORITHM_MEHESTAN 15 16 PUBLIC_DATASET_URL = "https://api.tournesol.app/exports/comparisons/" 17 RANDOM_SEED = 0 18 SEED_USERS = ["aidjango", "le_science4all", "lpfaucon", "biscuissec", "amatissart"] 19 PRETRUSTED_PROBABILITY = 0.1 20 21 thread_pool = ThreadPoolExecutor(max_workers=10) 22 23 24 class Command(BaseCommand): 25 help = "Generate a new database for dev purposes, derived from the public dataset" 26 27 def add_arguments(self, parser): 28 parser.add_argument("--user-sampling", type=float, default=None) 29 30 def create_user(self, username): 31 is_pretrusted = ( 32 username in SEED_USERS 33 ) or random.random() < PRETRUSTED_PROBABILITY # nosec B311 34 email = f"{username}@trusted.example" if is_pretrusted else f"{username}@example.com" 35 user = User.objects.create_user( 36 username=username, 37 email=email, 38 is_staff=username in SEED_USERS 39 ) 40 if user.is_staff: 41 # Set a default password for staff accounts (used in e2e tests, etc.) 42 user.set_password("tournesol") 43 user.save() 44 return user 45 46 def create_videos(self, video_ids): 47 videos = {} 48 for video_id in video_ids: 49 videos[video_id] = Entity.create_from_video_id(video_id, fetch_metadata=False) 50 return videos 51 52 def fetch_video_metadata(self, videos): 53 def refresh_metadata(video): 54 video.inner.refresh_metadata(force=True, compute_language=True) 55 56 futures = (thread_pool.submit(refresh_metadata, video) for video in videos.values()) 57 for future in concurrent.futures.as_completed(futures): 58 # .result() will reraise any exception occured during refresh 59 future.result() 60 61 def create_test_user(self): 62 User.objects.create_user( # hardcoded password is delibarate # nosec B106 63 username="user1", 64 password="tournesol", 65 email="[email protected]" 66 ) 67 68 def handle(self, *args, **options): 69 random.seed(RANDOM_SEED) 70 71 public_dataset = pd.read_csv(PUBLIC_DATASET_URL) 72 nb_comparisons = 0 73 74 with transaction.atomic(): 75 poll = Poll.default_poll() 76 poll.algorithm = ALGORITHM_MEHESTAN 77 poll.save() 78 79 usernames = public_dataset.public_username.unique() 80 if options["user_sampling"]: 81 usernames = set( 82 pd.Series(usernames) 83 .sample(frac=options["user_sampling"], random_state=RANDOM_SEED) 84 .values 85 ).union(SEED_USERS) 86 public_dataset = public_dataset[public_dataset.public_username.isin(usernames)] 87 88 EmailDomain.objects.create( 89 domain="@trusted.example", 90 status=EmailDomain.STATUS_ACCEPTED 91 ) 92 93 users = {username: self.create_user(username) for username in usernames} 94 print(f"Created {len(users)} users") 95 96 videos = self.create_videos(set(public_dataset.video_a) | set(public_dataset.video_b)) 97 print(f"Created {len(videos)} video entities") 98 99 for ((username, video_a, video_b), rows) in public_dataset.groupby( 100 ["public_username", "video_a", "video_b"] 101 ): 102 comparison = Comparison.objects.create( 103 user=users[username], 104 poll=poll, 
105 entity_1=videos[video_a], 106 entity_2=videos[video_b], 107 ) 108 for _, values in rows.iterrows(): 109 ComparisonCriteriaScore.objects.create( 110 comparison=comparison, 111 criteria=values["criteria"], 112 score=values["score"], 113 weight=values["weight"], 114 ) 115 nb_comparisons += 1 116 print(f"Created {nb_comparisons} comparisons") 117 118 for entity in Entity.objects.iterator(): 119 entity.update_n_ratings() 120 121 self.create_test_user() 122 ContributorRating.objects.update(is_public=True) 123 124 if settings.YOUTUBE_API_KEY: 125 print("Fetching video metadata from Youtube...") 126 self.fetch_video_metadata(videos) 127 print("Done.") 128 129 print("Running ml-train...") 130 call_command("ml_train") 131 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/backend/tournesol/management/commands/load_public_dataset.py b/backend/tournesol/management/commands/load_public_dataset.py --- a/backend/tournesol/management/commands/load_public_dataset.py +++ b/backend/tournesol/management/commands/load_public_dataset.py @@ -26,6 +26,7 @@ def add_arguments(self, parser): parser.add_argument("--user-sampling", type=float, default=None) + parser.add_argument("--comparisons-url", type=str, default=PUBLIC_DATASET_URL) def create_user(self, username): is_pretrusted = ( @@ -55,11 +56,13 @@ futures = (thread_pool.submit(refresh_metadata, video) for video in videos.values()) for future in concurrent.futures.as_completed(futures): - # .result() will reraise any exception occured during refresh + # .result() will reraise any exception occurred during refresh future.result() + thread_pool.shutdown() + def create_test_user(self): - User.objects.create_user( # hardcoded password is delibarate # nosec B106 + User.objects.create_user( # hardcoded password is deliberate # nosec B106 username="user1", password="tournesol", email="[email protected]" @@ -68,7 +71,7 @@ def handle(self, *args, **options): random.seed(RANDOM_SEED) - public_dataset = pd.read_csv(PUBLIC_DATASET_URL) + public_dataset = pd.read_csv(options["comparisons_url"]) nb_comparisons = 0 with transaction.atomic(): @@ -110,7 +113,6 @@ comparison=comparison, criteria=values["criteria"], score=values["score"], - weight=values["weight"], ) nb_comparisons += 1 print(f"Created {nb_comparisons} comparisons")
{"golden_diff": "diff --git a/backend/tournesol/management/commands/load_public_dataset.py b/backend/tournesol/management/commands/load_public_dataset.py\n--- a/backend/tournesol/management/commands/load_public_dataset.py\n+++ b/backend/tournesol/management/commands/load_public_dataset.py\n@@ -26,6 +26,7 @@\n \n def add_arguments(self, parser):\n parser.add_argument(\"--user-sampling\", type=float, default=None)\n+ parser.add_argument(\"--comparisons-url\", type=str, default=PUBLIC_DATASET_URL)\n \n def create_user(self, username):\n is_pretrusted = (\n@@ -55,11 +56,13 @@\n \n futures = (thread_pool.submit(refresh_metadata, video) for video in videos.values())\n for future in concurrent.futures.as_completed(futures):\n- # .result() will reraise any exception occured during refresh\n+ # .result() will reraise any exception occurred during refresh\n future.result()\n \n+ thread_pool.shutdown()\n+\n def create_test_user(self):\n- User.objects.create_user( # hardcoded password is delibarate # nosec B106\n+ User.objects.create_user( # hardcoded password is deliberate # nosec B106\n username=\"user1\",\n password=\"tournesol\",\n email=\"[email protected]\"\n@@ -68,7 +71,7 @@\n def handle(self, *args, **options):\n random.seed(RANDOM_SEED)\n \n- public_dataset = pd.read_csv(PUBLIC_DATASET_URL)\n+ public_dataset = pd.read_csv(options[\"comparisons_url\"])\n nb_comparisons = 0\n \n with transaction.atomic():\n@@ -110,7 +113,6 @@\n comparison=comparison,\n criteria=values[\"criteria\"],\n score=values[\"score\"],\n- weight=values[\"weight\"],\n )\n nb_comparisons += 1\n print(f\"Created {nb_comparisons} comparisons\")\n", "issue": "[dev-end] load_public_dataset is broken\nHi everyone,\r\n\r\n`load_public_dataset` seems to be broken.\r\n\r\nCurrent branch: `main` commit `c9c3b9dab`\r\nHow to reproduce:\r\n\r\n```\r\n$ ./run-docker-compose.sh download --user-sampling 0.1 \r\n \u283f Container tournesol-dev-api Started 25.9s\r\nWaiting for api to be ready...\r\nWaiting for api to be ready...\r\nWaiting for api to be ready...\r\nWaiting for api to be ready...\r\nWaiting for api to be ready...\r\nWaiting for api to be ready...\r\n\r\n Created 534 users\r\nCreated 5113 video entities\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/pandas/core/indexes/base.py\", line 3621, in get_loc\r\n return self._engine.get_loc(casted_key)\r\n File \"pandas/_libs/index.pyx\", line 136, in pandas._libs.index.IndexEngine.get_loc\r\n File \"pandas/_libs/index.pyx\", line 163, in pandas._libs.index.IndexEngine.get_loc\r\n File \"pandas/_libs/hashtable_class_helper.pxi\", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item\r\n File \"pandas/_libs/hashtable_class_helper.pxi\", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item\r\nKeyError: 'weight'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/backend/manage.py\", line 22, in <module>\r\n main()\r\n File \"/backend/manage.py\", line 18, in main\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 446, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 440, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 414, in run_from_argv\r\n self.execute(*args, 
**cmd_options)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 460, in execute\r\n output = self.handle(*args, **options)\r\n File \"/backend/tournesol/management/commands/load_public_dataset.py\", line 113, in handle\r\n weight=values[\"weight\"],\r\n File \"/usr/local/lib/python3.9/site-packages/pandas/core/series.py\", line 958, in __getitem__\r\n return self._get_value(key)\r\n File \"/usr/local/lib/python3.9/site-packages/pandas/core/series.py\", line 1069, in _get_value\r\n loc = self.index.get_loc(label)\r\n File \"/usr/local/lib/python3.9/site-packages/pandas/core/indexes/base.py\", line 3623, in get_loc\r\n raise KeyError(key) from err\r\n```\r\n\r\nI will investigate but any help is welcome!\n", "before_files": [{"content": "import concurrent\nimport random\nfrom concurrent.futures import ThreadPoolExecutor\n\nimport pandas as pd\nfrom django.conf import settings\nfrom django.core.management import call_command\nfrom django.core.management.base import BaseCommand\nfrom django.db import transaction\n\nfrom core.models import User\nfrom core.models.user import EmailDomain\nfrom tournesol.models import Comparison, ComparisonCriteriaScore, ContributorRating, Entity, Poll\nfrom tournesol.models.poll import ALGORITHM_MEHESTAN\n\nPUBLIC_DATASET_URL = \"https://api.tournesol.app/exports/comparisons/\"\nRANDOM_SEED = 0\nSEED_USERS = [\"aidjango\", \"le_science4all\", \"lpfaucon\", \"biscuissec\", \"amatissart\"]\nPRETRUSTED_PROBABILITY = 0.1\n\nthread_pool = ThreadPoolExecutor(max_workers=10)\n\n\nclass Command(BaseCommand):\n help = \"Generate a new database for dev purposes, derived from the public dataset\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"--user-sampling\", type=float, default=None)\n\n def create_user(self, username):\n is_pretrusted = (\n username in SEED_USERS\n ) or random.random() < PRETRUSTED_PROBABILITY # nosec B311\n email = f\"{username}@trusted.example\" if is_pretrusted else f\"{username}@example.com\"\n user = User.objects.create_user(\n username=username,\n email=email,\n is_staff=username in SEED_USERS\n )\n if user.is_staff:\n # Set a default password for staff accounts (used in e2e tests, etc.)\n user.set_password(\"tournesol\")\n user.save()\n return user\n\n def create_videos(self, video_ids):\n videos = {}\n for video_id in video_ids:\n videos[video_id] = Entity.create_from_video_id(video_id, fetch_metadata=False)\n return videos\n\n def fetch_video_metadata(self, videos):\n def refresh_metadata(video):\n video.inner.refresh_metadata(force=True, compute_language=True)\n\n futures = (thread_pool.submit(refresh_metadata, video) for video in videos.values())\n for future in concurrent.futures.as_completed(futures):\n # .result() will reraise any exception occured during refresh\n future.result()\n\n def create_test_user(self):\n User.objects.create_user( # hardcoded password is delibarate # nosec B106\n username=\"user1\",\n password=\"tournesol\",\n email=\"[email protected]\"\n )\n\n def handle(self, *args, **options):\n random.seed(RANDOM_SEED)\n\n public_dataset = pd.read_csv(PUBLIC_DATASET_URL)\n nb_comparisons = 0\n\n with transaction.atomic():\n poll = Poll.default_poll()\n poll.algorithm = ALGORITHM_MEHESTAN\n poll.save()\n\n usernames = public_dataset.public_username.unique()\n if options[\"user_sampling\"]:\n usernames = set(\n pd.Series(usernames)\n .sample(frac=options[\"user_sampling\"], random_state=RANDOM_SEED)\n .values\n ).union(SEED_USERS)\n public_dataset = 
public_dataset[public_dataset.public_username.isin(usernames)]\n\n EmailDomain.objects.create(\n domain=\"@trusted.example\",\n status=EmailDomain.STATUS_ACCEPTED\n )\n\n users = {username: self.create_user(username) for username in usernames}\n print(f\"Created {len(users)} users\")\n\n videos = self.create_videos(set(public_dataset.video_a) | set(public_dataset.video_b))\n print(f\"Created {len(videos)} video entities\")\n\n for ((username, video_a, video_b), rows) in public_dataset.groupby(\n [\"public_username\", \"video_a\", \"video_b\"]\n ):\n comparison = Comparison.objects.create(\n user=users[username],\n poll=poll,\n entity_1=videos[video_a],\n entity_2=videos[video_b],\n )\n for _, values in rows.iterrows():\n ComparisonCriteriaScore.objects.create(\n comparison=comparison,\n criteria=values[\"criteria\"],\n score=values[\"score\"],\n weight=values[\"weight\"],\n )\n nb_comparisons += 1\n print(f\"Created {nb_comparisons} comparisons\")\n\n for entity in Entity.objects.iterator():\n entity.update_n_ratings()\n\n self.create_test_user()\n ContributorRating.objects.update(is_public=True)\n\n if settings.YOUTUBE_API_KEY:\n print(\"Fetching video metadata from Youtube...\")\n self.fetch_video_metadata(videos)\n print(\"Done.\")\n\n print(\"Running ml-train...\")\n call_command(\"ml_train\")\n", "path": "backend/tournesol/management/commands/load_public_dataset.py"}], "after_files": [{"content": "import concurrent\nimport random\nfrom concurrent.futures import ThreadPoolExecutor\n\nimport pandas as pd\nfrom django.conf import settings\nfrom django.core.management import call_command\nfrom django.core.management.base import BaseCommand\nfrom django.db import transaction\n\nfrom core.models import User\nfrom core.models.user import EmailDomain\nfrom tournesol.models import Comparison, ComparisonCriteriaScore, ContributorRating, Entity, Poll\nfrom tournesol.models.poll import ALGORITHM_MEHESTAN\n\nPUBLIC_DATASET_URL = \"https://api.tournesol.app/exports/comparisons/\"\nRANDOM_SEED = 0\nSEED_USERS = [\"aidjango\", \"le_science4all\", \"lpfaucon\", \"biscuissec\", \"amatissart\"]\nPRETRUSTED_PROBABILITY = 0.1\n\nthread_pool = ThreadPoolExecutor(max_workers=10)\n\n\nclass Command(BaseCommand):\n help = \"Generate a new database for dev purposes, derived from the public dataset\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"--user-sampling\", type=float, default=None)\n parser.add_argument(\"--comparisons-url\", type=str, default=PUBLIC_DATASET_URL)\n\n def create_user(self, username):\n is_pretrusted = (\n username in SEED_USERS\n ) or random.random() < PRETRUSTED_PROBABILITY # nosec B311\n email = f\"{username}@trusted.example\" if is_pretrusted else f\"{username}@example.com\"\n user = User.objects.create_user(\n username=username,\n email=email,\n is_staff=username in SEED_USERS\n )\n if user.is_staff:\n # Set a default password for staff accounts (used in e2e tests, etc.)\n user.set_password(\"tournesol\")\n user.save()\n return user\n\n def create_videos(self, video_ids):\n videos = {}\n for video_id in video_ids:\n videos[video_id] = Entity.create_from_video_id(video_id, fetch_metadata=False)\n return videos\n\n def fetch_video_metadata(self, videos):\n def refresh_metadata(video):\n video.inner.refresh_metadata(force=True, compute_language=True)\n\n futures = (thread_pool.submit(refresh_metadata, video) for video in videos.values())\n for future in concurrent.futures.as_completed(futures):\n # .result() will reraise any exception occurred during refresh\n 
future.result()\n\n thread_pool.shutdown()\n\n def create_test_user(self):\n User.objects.create_user( # hardcoded password is deliberate # nosec B106\n username=\"user1\",\n password=\"tournesol\",\n email=\"[email protected]\"\n )\n\n def handle(self, *args, **options):\n random.seed(RANDOM_SEED)\n\n public_dataset = pd.read_csv(options[\"comparisons_url\"])\n nb_comparisons = 0\n\n with transaction.atomic():\n poll = Poll.default_poll()\n poll.algorithm = ALGORITHM_MEHESTAN\n poll.save()\n\n usernames = public_dataset.public_username.unique()\n if options[\"user_sampling\"]:\n usernames = set(\n pd.Series(usernames)\n .sample(frac=options[\"user_sampling\"], random_state=RANDOM_SEED)\n .values\n ).union(SEED_USERS)\n public_dataset = public_dataset[public_dataset.public_username.isin(usernames)]\n\n EmailDomain.objects.create(\n domain=\"@trusted.example\",\n status=EmailDomain.STATUS_ACCEPTED\n )\n\n users = {username: self.create_user(username) for username in usernames}\n print(f\"Created {len(users)} users\")\n\n videos = self.create_videos(set(public_dataset.video_a) | set(public_dataset.video_b))\n print(f\"Created {len(videos)} video entities\")\n\n for ((username, video_a, video_b), rows) in public_dataset.groupby(\n [\"public_username\", \"video_a\", \"video_b\"]\n ):\n comparison = Comparison.objects.create(\n user=users[username],\n poll=poll,\n entity_1=videos[video_a],\n entity_2=videos[video_b],\n )\n for _, values in rows.iterrows():\n ComparisonCriteriaScore.objects.create(\n comparison=comparison,\n criteria=values[\"criteria\"],\n score=values[\"score\"],\n )\n nb_comparisons += 1\n print(f\"Created {nb_comparisons} comparisons\")\n\n for entity in Entity.objects.iterator():\n entity.update_n_ratings()\n\n self.create_test_user()\n ContributorRating.objects.update(is_public=True)\n\n if settings.YOUTUBE_API_KEY:\n print(\"Fetching video metadata from Youtube...\")\n self.fetch_video_metadata(videos)\n print(\"Done.\")\n\n print(\"Running ml-train...\")\n call_command(\"ml_train\")\n", "path": "backend/tournesol/management/commands/load_public_dataset.py"}]}
2,259
444
gh_patches_debug_18925
rasdani/github-patches
git_diff
mozmeao__snippets-service-1214
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Default to `default` distribution if client is not part of an experiment. We want all Firefox clients to get the default bundle for the locale / channel combination, unless they are part of an experiment when using pregen bundles in https://github.com/mozmeao/snippets-service/blob/master/snippets/base/views.py#L63 This is because non-Mozilla distributors of Firefox (e.g. Linux Distributions) override the distribution field with their identification. We 'll only use client's distribution if it starts with `experiment-` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `snippets/base/views.py` Content: ``` 1 import json 2 from urllib.parse import urljoin, urlparse 3 4 from distutils.util import strtobool 5 from django.conf import settings 6 from django.contrib.auth.decorators import permission_required 7 from django.core.exceptions import ValidationError 8 from django.core.files.storage import default_storage 9 from django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseRedirect 10 from django.shortcuts import get_object_or_404, render 11 from django.utils.functional import lazy 12 from django.views.decorators.cache import cache_control 13 from django.views.decorators.csrf import csrf_exempt 14 from django.views.decorators.http import require_POST 15 from django.views.generic import TemplateView 16 17 import sentry_sdk 18 from django_filters.views import FilterView 19 from django_statsd.clients import statsd 20 21 from snippets.base import util 22 from snippets.base.bundles import ASRSnippetBundle, SnippetBundle 23 from snippets.base.decorators import access_control 24 from snippets.base.filters import JobFilter 25 from snippets.base.models import CHANNELS, ASRSnippet, Client, Snippet, SnippetTemplate 26 from snippets.base.util import get_object_or_none 27 28 29 def _bundle_timeout(): 30 return getattr(settings, 'SNIPPET_BUNDLE_TIMEOUT') 31 SNIPPET_BUNDLE_TIMEOUT = lazy(_bundle_timeout, int)() # noqa 32 33 34 class HomeView(TemplateView): 35 template_name = 'base/home.jinja' 36 37 38 class JobListView(FilterView): 39 filterset_class = JobFilter 40 41 @property 42 def template_name(self): 43 if self.request.GET.get('calendar', 'false') == 'true': 44 return 'base/jobs_list_calendar.jinja' 45 46 return 'base/jobs_list_table.jinja' 47 48 49 def fetch_snippets(request, **kwargs): 50 if settings.USE_PREGEN_BUNDLES and kwargs['startpage_version'] == 6: 51 return fetch_snippet_pregen_bundle(request, **kwargs) 52 return fetch_snippet_bundle(request, **kwargs) 53 54 55 @cache_control(public=True, max_age=settings.SNIPPET_BUNDLE_PREGEN_REDIRECT_TIMEOUT) 56 def fetch_snippet_pregen_bundle(request, **kwargs): 57 statsd.incr('serve.bundle_pregen') 58 client = Client(**kwargs) 59 product = 'Firefox' 60 channel = client.channel.lower() 61 channel = next((item for item in CHANNELS if channel.startswith(item)), None) or 'release' 62 locale = client.locale.lower() 63 distribution = client.distribution.lower() 64 filename = ( 65 f'{settings.MEDIA_BUNDLES_PREGEN_ROOT}/{product}/{channel}/' 66 f'{locale}/{distribution}.json' 67 ) 68 69 full_url = urljoin(settings.CDN_URL or settings.SITE_URL, 70 urlparse(default_storage.url(filename)).path) 71 # Remove AWS S3 parameters 72 full_url = full_url.split('?')[0] 73 74 return HttpResponseRedirect(full_url) 75 76 77 @cache_control(public=True, 
max_age=SNIPPET_BUNDLE_TIMEOUT) 78 @access_control(max_age=SNIPPET_BUNDLE_TIMEOUT) 79 def fetch_snippet_bundle(request, **kwargs): 80 """ 81 Return one of the following responses: 82 - 200 with empty body when the bundle is empty 83 - 302 to a bundle URL after generating it if not cached. 84 """ 85 statsd.incr('serve.snippets') 86 87 client = Client(**kwargs) 88 if client.startpage_version == 6: 89 bundle = ASRSnippetBundle(client) 90 else: 91 bundle = SnippetBundle(client) 92 if bundle.empty: 93 statsd.incr('bundle.empty') 94 95 if client.startpage_version == 6: 96 # Return valid JSON for Activity Stream Router 97 return HttpResponse(status=200, content='{}', content_type='application/json') 98 99 # This is not a 204 because Activity Stream expects content, even if 100 # it's empty. 101 return HttpResponse(status=200, content='') 102 elif bundle.cached: 103 statsd.incr('bundle.cached') 104 else: 105 statsd.incr('bundle.generate') 106 bundle.generate() 107 108 return HttpResponseRedirect(bundle.url) 109 110 111 def preview_asr_snippet(request, uuid): 112 try: 113 snippet = get_object_or_404(ASRSnippet, uuid=uuid) 114 except ValidationError: 115 # Raised when UUID is a badly formed hexadecimal UUID string 116 raise Http404() 117 118 bundle_content = json.dumps({ 119 'messages': [snippet.render(preview=True)], 120 }) 121 return HttpResponse(bundle_content, content_type='application/json') 122 123 124 @csrf_exempt 125 @permission_required('base.change_snippet') 126 def preview_snippet(request): 127 """ 128 Build a snippet using info from the POST parameters, and preview that 129 snippet on a mock about:home page. 130 """ 131 try: 132 template_id = int(request.POST.get('template_id', None)) 133 except (TypeError, ValueError): 134 return HttpResponseBadRequest() 135 136 template = get_object_or_none(SnippetTemplate, id=template_id) 137 data = request.POST.get('data', None) 138 139 # Validate that data is JSON. 140 try: 141 json.loads(data) 142 except (TypeError, ValueError): 143 data = None 144 145 # If your parameters are wrong, I have no sympathy for you. 146 if data is None or template is None: 147 return HttpResponseBadRequest() 148 149 # Build a snippet that isn't saved so we can render it. 
150 snippet = Snippet(template=template, data=data) 151 152 if strtobool(request.POST.get('activity_stream', 'false')): 153 template_name = 'base/preview_as.jinja' 154 preview_client = Client(5, 'Firefox', '57.0', 'default', 'default', 'en-US', 155 'release', 'default', 'default', 'default') 156 else: 157 template_name = 'base/preview.jinja' 158 preview_client = Client(4, 'Firefox', '24.0', 'default', 'default', 'en-US', 159 'release', 'default', 'default', 'default') 160 161 skip_boilerplate = request.POST.get('skip_boilerplate', 'false') 162 skip_boilerplate = strtobool(skip_boilerplate) 163 if skip_boilerplate: 164 template_name = 'base/preview_without_shell.jinja' 165 166 return render(request, template_name, { 167 'snippets_json': json.dumps([snippet.to_dict()]), 168 'client': preview_client, 169 'preview': True, 170 'current_firefox_major_version': util.current_firefox_major_version(), 171 }) 172 173 174 def show_snippet(request, snippet_id, uuid=False): 175 preview_client = Client(4, 'Firefox', '24.0', 'default', 'default', 'en-US', 176 'release', 'default', 'default', 'default') 177 178 if uuid: 179 snippet = get_object_or_404(Snippet, uuid=snippet_id) 180 else: 181 snippet = get_object_or_404(Snippet, pk=snippet_id) 182 if not snippet.published and not request.user.is_authenticated: 183 raise Http404() 184 185 template = 'base/preview.jinja' 186 if snippet.on_startpage_5: 187 template = 'base/preview_as.jinja' 188 return render(request, template, { 189 'snippets_json': json.dumps([snippet.to_dict()]), 190 'client': preview_client, 191 'preview': True, 192 'current_firefox_major_version': util.current_firefox_major_version(), 193 }) 194 195 196 @csrf_exempt 197 @require_POST 198 def csp_violation_capture(request): 199 try: 200 csp_data = json.loads(request.body) 201 except ValueError: 202 # Cannot decode CSP violation data, ignore 203 return HttpResponseBadRequest('Invalid CSP Report') 204 205 try: 206 blocked_uri = csp_data['csp-report']['blocked-uri'] 207 except KeyError: 208 # Incomplete CSP report 209 return HttpResponseBadRequest('Incomplete CSP Report') 210 211 with sentry_sdk.configure_scope() as scope: 212 scope.level = 'info' 213 scope.set_tag('logger', 'csp') 214 215 sentry_sdk.capture_message( 216 message='CSP Violation: {}'.format(blocked_uri)) 217 218 return HttpResponse('Captured CSP violation, thanks for reporting.') 219 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/snippets/base/views.py b/snippets/base/views.py --- a/snippets/base/views.py +++ b/snippets/base/views.py @@ -60,7 +60,20 @@ channel = client.channel.lower() channel = next((item for item in CHANNELS if channel.startswith(item)), None) or 'release' locale = client.locale.lower() + + # Distribution populated by client's distribution if it starts with + # `experiment-`. Otherwise default to `default`. + # + # This is because non-Mozilla distributors of Firefox (e.g. Linux + # Distributions) override the distribution field with their identification. + # We want all Firefox clients to get the default bundle for the locale / + # channel combination, unless they are part of an experiment. distribution = client.distribution.lower() + if distribution.startswith('experiment-'): + distribution = distribution[11:] + else: + distribution = 'default' + filename = ( f'{settings.MEDIA_BUNDLES_PREGEN_ROOT}/{product}/{channel}/' f'{locale}/{distribution}.json'
{"golden_diff": "diff --git a/snippets/base/views.py b/snippets/base/views.py\n--- a/snippets/base/views.py\n+++ b/snippets/base/views.py\n@@ -60,7 +60,20 @@\n channel = client.channel.lower()\n channel = next((item for item in CHANNELS if channel.startswith(item)), None) or 'release'\n locale = client.locale.lower()\n+\n+ # Distribution populated by client's distribution if it starts with\n+ # `experiment-`. Otherwise default to `default`.\n+ #\n+ # This is because non-Mozilla distributors of Firefox (e.g. Linux\n+ # Distributions) override the distribution field with their identification.\n+ # We want all Firefox clients to get the default bundle for the locale /\n+ # channel combination, unless they are part of an experiment.\n distribution = client.distribution.lower()\n+ if distribution.startswith('experiment-'):\n+ distribution = distribution[11:]\n+ else:\n+ distribution = 'default'\n+\n filename = (\n f'{settings.MEDIA_BUNDLES_PREGEN_ROOT}/{product}/{channel}/'\n f'{locale}/{distribution}.json'\n", "issue": "Default to `default` distribution if client is not part of an experiment.\nWe want all Firefox clients to get the default bundle for the locale / channel combination, unless they are part of an experiment when using pregen bundles in https://github.com/mozmeao/snippets-service/blob/master/snippets/base/views.py#L63\r\n\r\nThis is because non-Mozilla distributors of Firefox (e.g. Linux Distributions) override the distribution field with their identification. \r\n\r\nWe 'll only use client's distribution if it starts with `experiment-`\n", "before_files": [{"content": "import json\nfrom urllib.parse import urljoin, urlparse\n\nfrom distutils.util import strtobool\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import permission_required\nfrom django.core.exceptions import ValidationError\nfrom django.core.files.storage import default_storage\nfrom django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, render\nfrom django.utils.functional import lazy\nfrom django.views.decorators.cache import cache_control\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_POST\nfrom django.views.generic import TemplateView\n\nimport sentry_sdk\nfrom django_filters.views import FilterView\nfrom django_statsd.clients import statsd\n\nfrom snippets.base import util\nfrom snippets.base.bundles import ASRSnippetBundle, SnippetBundle\nfrom snippets.base.decorators import access_control\nfrom snippets.base.filters import JobFilter\nfrom snippets.base.models import CHANNELS, ASRSnippet, Client, Snippet, SnippetTemplate\nfrom snippets.base.util import get_object_or_none\n\n\ndef _bundle_timeout():\n return getattr(settings, 'SNIPPET_BUNDLE_TIMEOUT')\nSNIPPET_BUNDLE_TIMEOUT = lazy(_bundle_timeout, int)() # noqa\n\n\nclass HomeView(TemplateView):\n template_name = 'base/home.jinja'\n\n\nclass JobListView(FilterView):\n filterset_class = JobFilter\n\n @property\n def template_name(self):\n if self.request.GET.get('calendar', 'false') == 'true':\n return 'base/jobs_list_calendar.jinja'\n\n return 'base/jobs_list_table.jinja'\n\n\ndef fetch_snippets(request, **kwargs):\n if settings.USE_PREGEN_BUNDLES and kwargs['startpage_version'] == 6:\n return fetch_snippet_pregen_bundle(request, **kwargs)\n return fetch_snippet_bundle(request, **kwargs)\n\n\n@cache_control(public=True, max_age=settings.SNIPPET_BUNDLE_PREGEN_REDIRECT_TIMEOUT)\ndef 
fetch_snippet_pregen_bundle(request, **kwargs):\n statsd.incr('serve.bundle_pregen')\n client = Client(**kwargs)\n product = 'Firefox'\n channel = client.channel.lower()\n channel = next((item for item in CHANNELS if channel.startswith(item)), None) or 'release'\n locale = client.locale.lower()\n distribution = client.distribution.lower()\n filename = (\n f'{settings.MEDIA_BUNDLES_PREGEN_ROOT}/{product}/{channel}/'\n f'{locale}/{distribution}.json'\n )\n\n full_url = urljoin(settings.CDN_URL or settings.SITE_URL,\n urlparse(default_storage.url(filename)).path)\n # Remove AWS S3 parameters\n full_url = full_url.split('?')[0]\n\n return HttpResponseRedirect(full_url)\n\n\n@cache_control(public=True, max_age=SNIPPET_BUNDLE_TIMEOUT)\n@access_control(max_age=SNIPPET_BUNDLE_TIMEOUT)\ndef fetch_snippet_bundle(request, **kwargs):\n \"\"\"\n Return one of the following responses:\n - 200 with empty body when the bundle is empty\n - 302 to a bundle URL after generating it if not cached.\n \"\"\"\n statsd.incr('serve.snippets')\n\n client = Client(**kwargs)\n if client.startpage_version == 6:\n bundle = ASRSnippetBundle(client)\n else:\n bundle = SnippetBundle(client)\n if bundle.empty:\n statsd.incr('bundle.empty')\n\n if client.startpage_version == 6:\n # Return valid JSON for Activity Stream Router\n return HttpResponse(status=200, content='{}', content_type='application/json')\n\n # This is not a 204 because Activity Stream expects content, even if\n # it's empty.\n return HttpResponse(status=200, content='')\n elif bundle.cached:\n statsd.incr('bundle.cached')\n else:\n statsd.incr('bundle.generate')\n bundle.generate()\n\n return HttpResponseRedirect(bundle.url)\n\n\ndef preview_asr_snippet(request, uuid):\n try:\n snippet = get_object_or_404(ASRSnippet, uuid=uuid)\n except ValidationError:\n # Raised when UUID is a badly formed hexadecimal UUID string\n raise Http404()\n\n bundle_content = json.dumps({\n 'messages': [snippet.render(preview=True)],\n })\n return HttpResponse(bundle_content, content_type='application/json')\n\n\n@csrf_exempt\n@permission_required('base.change_snippet')\ndef preview_snippet(request):\n \"\"\"\n Build a snippet using info from the POST parameters, and preview that\n snippet on a mock about:home page.\n \"\"\"\n try:\n template_id = int(request.POST.get('template_id', None))\n except (TypeError, ValueError):\n return HttpResponseBadRequest()\n\n template = get_object_or_none(SnippetTemplate, id=template_id)\n data = request.POST.get('data', None)\n\n # Validate that data is JSON.\n try:\n json.loads(data)\n except (TypeError, ValueError):\n data = None\n\n # If your parameters are wrong, I have no sympathy for you.\n if data is None or template is None:\n return HttpResponseBadRequest()\n\n # Build a snippet that isn't saved so we can render it.\n snippet = Snippet(template=template, data=data)\n\n if strtobool(request.POST.get('activity_stream', 'false')):\n template_name = 'base/preview_as.jinja'\n preview_client = Client(5, 'Firefox', '57.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n else:\n template_name = 'base/preview.jinja'\n preview_client = Client(4, 'Firefox', '24.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n\n skip_boilerplate = request.POST.get('skip_boilerplate', 'false')\n skip_boilerplate = strtobool(skip_boilerplate)\n if skip_boilerplate:\n template_name = 'base/preview_without_shell.jinja'\n\n return render(request, template_name, {\n 'snippets_json': 
json.dumps([snippet.to_dict()]),\n 'client': preview_client,\n 'preview': True,\n 'current_firefox_major_version': util.current_firefox_major_version(),\n })\n\n\ndef show_snippet(request, snippet_id, uuid=False):\n preview_client = Client(4, 'Firefox', '24.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n\n if uuid:\n snippet = get_object_or_404(Snippet, uuid=snippet_id)\n else:\n snippet = get_object_or_404(Snippet, pk=snippet_id)\n if not snippet.published and not request.user.is_authenticated:\n raise Http404()\n\n template = 'base/preview.jinja'\n if snippet.on_startpage_5:\n template = 'base/preview_as.jinja'\n return render(request, template, {\n 'snippets_json': json.dumps([snippet.to_dict()]),\n 'client': preview_client,\n 'preview': True,\n 'current_firefox_major_version': util.current_firefox_major_version(),\n })\n\n\n@csrf_exempt\n@require_POST\ndef csp_violation_capture(request):\n try:\n csp_data = json.loads(request.body)\n except ValueError:\n # Cannot decode CSP violation data, ignore\n return HttpResponseBadRequest('Invalid CSP Report')\n\n try:\n blocked_uri = csp_data['csp-report']['blocked-uri']\n except KeyError:\n # Incomplete CSP report\n return HttpResponseBadRequest('Incomplete CSP Report')\n\n with sentry_sdk.configure_scope() as scope:\n scope.level = 'info'\n scope.set_tag('logger', 'csp')\n\n sentry_sdk.capture_message(\n message='CSP Violation: {}'.format(blocked_uri))\n\n return HttpResponse('Captured CSP violation, thanks for reporting.')\n", "path": "snippets/base/views.py"}], "after_files": [{"content": "import json\nfrom urllib.parse import urljoin, urlparse\n\nfrom distutils.util import strtobool\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import permission_required\nfrom django.core.exceptions import ValidationError\nfrom django.core.files.storage import default_storage\nfrom django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, render\nfrom django.utils.functional import lazy\nfrom django.views.decorators.cache import cache_control\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_POST\nfrom django.views.generic import TemplateView\n\nimport sentry_sdk\nfrom django_filters.views import FilterView\nfrom django_statsd.clients import statsd\n\nfrom snippets.base import util\nfrom snippets.base.bundles import ASRSnippetBundle, SnippetBundle\nfrom snippets.base.decorators import access_control\nfrom snippets.base.filters import JobFilter\nfrom snippets.base.models import CHANNELS, ASRSnippet, Client, Snippet, SnippetTemplate\nfrom snippets.base.util import get_object_or_none\n\n\ndef _bundle_timeout():\n return getattr(settings, 'SNIPPET_BUNDLE_TIMEOUT')\nSNIPPET_BUNDLE_TIMEOUT = lazy(_bundle_timeout, int)() # noqa\n\n\nclass HomeView(TemplateView):\n template_name = 'base/home.jinja'\n\n\nclass JobListView(FilterView):\n filterset_class = JobFilter\n\n @property\n def template_name(self):\n if self.request.GET.get('calendar', 'false') == 'true':\n return 'base/jobs_list_calendar.jinja'\n\n return 'base/jobs_list_table.jinja'\n\n\ndef fetch_snippets(request, **kwargs):\n if settings.USE_PREGEN_BUNDLES and kwargs['startpage_version'] == 6:\n return fetch_snippet_pregen_bundle(request, **kwargs)\n return fetch_snippet_bundle(request, **kwargs)\n\n\n@cache_control(public=True, max_age=settings.SNIPPET_BUNDLE_PREGEN_REDIRECT_TIMEOUT)\ndef 
fetch_snippet_pregen_bundle(request, **kwargs):\n statsd.incr('serve.bundle_pregen')\n client = Client(**kwargs)\n product = 'Firefox'\n channel = client.channel.lower()\n channel = next((item for item in CHANNELS if channel.startswith(item)), None) or 'release'\n locale = client.locale.lower()\n\n # Distribution populated by client's distribution if it starts with\n # `experiment-`. Otherwise default to `default`.\n #\n # This is because non-Mozilla distributors of Firefox (e.g. Linux\n # Distributions) override the distribution field with their identification.\n # We want all Firefox clients to get the default bundle for the locale /\n # channel combination, unless they are part of an experiment.\n distribution = client.distribution.lower()\n if distribution.startswith('experiment-'):\n distribution = distribution[11:]\n else:\n distribution = 'default'\n\n filename = (\n f'{settings.MEDIA_BUNDLES_PREGEN_ROOT}/{product}/{channel}/'\n f'{locale}/{distribution}.json'\n )\n\n full_url = urljoin(settings.CDN_URL or settings.SITE_URL,\n urlparse(default_storage.url(filename)).path)\n # Remove AWS S3 parameters\n full_url = full_url.split('?')[0]\n\n return HttpResponseRedirect(full_url)\n\n\n@cache_control(public=True, max_age=SNIPPET_BUNDLE_TIMEOUT)\n@access_control(max_age=SNIPPET_BUNDLE_TIMEOUT)\ndef fetch_snippet_bundle(request, **kwargs):\n \"\"\"\n Return one of the following responses:\n - 200 with empty body when the bundle is empty\n - 302 to a bundle URL after generating it if not cached.\n \"\"\"\n statsd.incr('serve.snippets')\n\n client = Client(**kwargs)\n if client.startpage_version == 6:\n bundle = ASRSnippetBundle(client)\n else:\n bundle = SnippetBundle(client)\n if bundle.empty:\n statsd.incr('bundle.empty')\n\n if client.startpage_version == 6:\n # Return valid JSON for Activity Stream Router\n return HttpResponse(status=200, content='{}', content_type='application/json')\n\n # This is not a 204 because Activity Stream expects content, even if\n # it's empty.\n return HttpResponse(status=200, content='')\n elif bundle.cached:\n statsd.incr('bundle.cached')\n else:\n statsd.incr('bundle.generate')\n bundle.generate()\n\n return HttpResponseRedirect(bundle.url)\n\n\ndef preview_asr_snippet(request, uuid):\n try:\n snippet = get_object_or_404(ASRSnippet, uuid=uuid)\n except ValidationError:\n # Raised when UUID is a badly formed hexadecimal UUID string\n raise Http404()\n\n bundle_content = json.dumps({\n 'messages': [snippet.render(preview=True)],\n })\n return HttpResponse(bundle_content, content_type='application/json')\n\n\n@csrf_exempt\n@permission_required('base.change_snippet')\ndef preview_snippet(request):\n \"\"\"\n Build a snippet using info from the POST parameters, and preview that\n snippet on a mock about:home page.\n \"\"\"\n try:\n template_id = int(request.POST.get('template_id', None))\n except (TypeError, ValueError):\n return HttpResponseBadRequest()\n\n template = get_object_or_none(SnippetTemplate, id=template_id)\n data = request.POST.get('data', None)\n\n # Validate that data is JSON.\n try:\n json.loads(data)\n except (TypeError, ValueError):\n data = None\n\n # If your parameters are wrong, I have no sympathy for you.\n if data is None or template is None:\n return HttpResponseBadRequest()\n\n # Build a snippet that isn't saved so we can render it.\n snippet = Snippet(template=template, data=data)\n\n if strtobool(request.POST.get('activity_stream', 'false')):\n template_name = 'base/preview_as.jinja'\n preview_client = Client(5, 'Firefox', '57.0', 
'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n else:\n template_name = 'base/preview.jinja'\n preview_client = Client(4, 'Firefox', '24.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n\n skip_boilerplate = request.POST.get('skip_boilerplate', 'false')\n skip_boilerplate = strtobool(skip_boilerplate)\n if skip_boilerplate:\n template_name = 'base/preview_without_shell.jinja'\n\n return render(request, template_name, {\n 'snippets_json': json.dumps([snippet.to_dict()]),\n 'client': preview_client,\n 'preview': True,\n 'current_firefox_major_version': util.current_firefox_major_version(),\n })\n\n\ndef show_snippet(request, snippet_id, uuid=False):\n preview_client = Client(4, 'Firefox', '24.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n\n if uuid:\n snippet = get_object_or_404(Snippet, uuid=snippet_id)\n else:\n snippet = get_object_or_404(Snippet, pk=snippet_id)\n if not snippet.published and not request.user.is_authenticated:\n raise Http404()\n\n template = 'base/preview.jinja'\n if snippet.on_startpage_5:\n template = 'base/preview_as.jinja'\n return render(request, template, {\n 'snippets_json': json.dumps([snippet.to_dict()]),\n 'client': preview_client,\n 'preview': True,\n 'current_firefox_major_version': util.current_firefox_major_version(),\n })\n\n\n@csrf_exempt\n@require_POST\ndef csp_violation_capture(request):\n try:\n csp_data = json.loads(request.body)\n except ValueError:\n # Cannot decode CSP violation data, ignore\n return HttpResponseBadRequest('Invalid CSP Report')\n\n try:\n blocked_uri = csp_data['csp-report']['blocked-uri']\n except KeyError:\n # Incomplete CSP report\n return HttpResponseBadRequest('Incomplete CSP Report')\n\n with sentry_sdk.configure_scope() as scope:\n scope.level = 'info'\n scope.set_tag('logger', 'csp')\n\n sentry_sdk.capture_message(\n message='CSP Violation: {}'.format(blocked_uri))\n\n return HttpResponse('Captured CSP violation, thanks for reporting.')\n", "path": "snippets/base/views.py"}]}
2,649
243
gh_patches_debug_5394
rasdani/github-patches
git_diff
cupy__cupy-1717
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- `cupy.nextafter` is wrong on a=0 `test_nextafter_combination` is failing. `{'dtype_b': <type 'numpy.float32'>, 'dtype_a': <type 'numpy.bool_'>}` ``` x = array([[1.1754944e-38, 1.0000001e+00, 1.1754944e-38], [1.0000001e+00, 1.1754944e-38, 1.0000000e+00]], dtype=float32) y = array([[1.4012985e-45, 1.0000001e+00, 1.4012985e-45], [1.0000001e+00, 1.4012985e-45, 1.0000000e+00]], dtype=float32) ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `cupy/math/floating.py` Content: ``` 1 from cupy import core 2 from cupy.math import ufunc 3 4 5 signbit = core.create_ufunc( 6 'cupy_signbit', 7 ('e->?', 'f->?', 'd->?'), 8 'out0 = signbit(in0)', 9 doc='''Tests elementwise if the sign bit is set (i.e. less than zero). 10 11 .. seealso:: :data:`numpy.signbit` 12 13 ''') 14 15 16 copysign = ufunc.create_math_ufunc( 17 'copysign', 2, 'cupy_copysign', 18 '''Returns the first argument with the sign bit of the second elementwise. 19 20 .. seealso:: :data:`numpy.copysign` 21 22 ''') 23 24 25 ldexp = core.create_ufunc( 26 'cupy_ldexp', 27 ('ei->e', 'fi->f', 'el->e', 'fl->f', 'di->d', 'dq->d'), 28 'out0 = ldexp(in0, in1)', 29 doc='''Computes ``x1 * 2 ** x2`` elementwise. 30 31 .. seealso:: :data:`numpy.ldexp` 32 33 ''') 34 35 36 frexp = core.create_ufunc( 37 'cupy_frexp', 38 ('e->ei', 'f->fi', 'd->di'), 39 'int nptr; out0 = frexp(in0, &nptr); out1 = nptr', 40 doc='''Decomposes each element to mantissa and two's exponent. 41 42 This ufunc outputs two arrays of the input dtype and the ``int`` dtype. 43 44 .. seealso:: :data:`numpy.frexp` 45 46 ''') 47 48 49 nextafter = ufunc.create_math_ufunc( 50 'nextafter', 2, 'cupy_nextafter', 51 '''Computes the nearest neighbor float values towards the second argument. 52 53 .. seealso:: :data:`numpy.nextafter` 54 55 ''') 56 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/cupy/math/floating.py b/cupy/math/floating.py --- a/cupy/math/floating.py +++ b/cupy/math/floating.py @@ -50,6 +50,11 @@ 'nextafter', 2, 'cupy_nextafter', '''Computes the nearest neighbor float values towards the second argument. + .. note:: + For values that are close to zero (or denormal numbers), + results of :func:`cupy.nextafter` may be different from those of + :func:`numpy.nextafter`, because CuPy sets ``-ftz=true``. + .. seealso:: :data:`numpy.nextafter` ''')
{"golden_diff": "diff --git a/cupy/math/floating.py b/cupy/math/floating.py\n--- a/cupy/math/floating.py\n+++ b/cupy/math/floating.py\n@@ -50,6 +50,11 @@\n 'nextafter', 2, 'cupy_nextafter',\n '''Computes the nearest neighbor float values towards the second argument.\n \n+ .. note::\n+ For values that are close to zero (or denormal numbers),\n+ results of :func:`cupy.nextafter` may be different from those of\n+ :func:`numpy.nextafter`, because CuPy sets ``-ftz=true``.\n+\n .. seealso:: :data:`numpy.nextafter`\n \n ''')\n", "issue": "`cupy.nextafter` is wrong on a=0\n`test_nextafter_combination` is failing.\r\n\r\n`{'dtype_b': <type 'numpy.float32'>, 'dtype_a': <type 'numpy.bool_'>}`\r\n```\r\nx = array([[1.1754944e-38, 1.0000001e+00, 1.1754944e-38],\r\n [1.0000001e+00, 1.1754944e-38, 1.0000000e+00]], dtype=float32)\r\ny = array([[1.4012985e-45, 1.0000001e+00, 1.4012985e-45],\r\n [1.0000001e+00, 1.4012985e-45, 1.0000000e+00]], dtype=float32)\r\n```\n", "before_files": [{"content": "from cupy import core\nfrom cupy.math import ufunc\n\n\nsignbit = core.create_ufunc(\n 'cupy_signbit',\n ('e->?', 'f->?', 'd->?'),\n 'out0 = signbit(in0)',\n doc='''Tests elementwise if the sign bit is set (i.e. less than zero).\n\n .. seealso:: :data:`numpy.signbit`\n\n ''')\n\n\ncopysign = ufunc.create_math_ufunc(\n 'copysign', 2, 'cupy_copysign',\n '''Returns the first argument with the sign bit of the second elementwise.\n\n .. seealso:: :data:`numpy.copysign`\n\n ''')\n\n\nldexp = core.create_ufunc(\n 'cupy_ldexp',\n ('ei->e', 'fi->f', 'el->e', 'fl->f', 'di->d', 'dq->d'),\n 'out0 = ldexp(in0, in1)',\n doc='''Computes ``x1 * 2 ** x2`` elementwise.\n\n .. seealso:: :data:`numpy.ldexp`\n\n ''')\n\n\nfrexp = core.create_ufunc(\n 'cupy_frexp',\n ('e->ei', 'f->fi', 'd->di'),\n 'int nptr; out0 = frexp(in0, &nptr); out1 = nptr',\n doc='''Decomposes each element to mantissa and two's exponent.\n\n This ufunc outputs two arrays of the input dtype and the ``int`` dtype.\n\n .. seealso:: :data:`numpy.frexp`\n\n ''')\n\n\nnextafter = ufunc.create_math_ufunc(\n 'nextafter', 2, 'cupy_nextafter',\n '''Computes the nearest neighbor float values towards the second argument.\n\n .. seealso:: :data:`numpy.nextafter`\n\n ''')\n", "path": "cupy/math/floating.py"}], "after_files": [{"content": "from cupy import core\nfrom cupy.math import ufunc\n\n\nsignbit = core.create_ufunc(\n 'cupy_signbit',\n ('e->?', 'f->?', 'd->?'),\n 'out0 = signbit(in0)',\n doc='''Tests elementwise if the sign bit is set (i.e. less than zero).\n\n .. seealso:: :data:`numpy.signbit`\n\n ''')\n\n\ncopysign = ufunc.create_math_ufunc(\n 'copysign', 2, 'cupy_copysign',\n '''Returns the first argument with the sign bit of the second elementwise.\n\n .. seealso:: :data:`numpy.copysign`\n\n ''')\n\n\nldexp = core.create_ufunc(\n 'cupy_ldexp',\n ('ei->e', 'fi->f', 'el->e', 'fl->f', 'di->d', 'dq->d'),\n 'out0 = ldexp(in0, in1)',\n doc='''Computes ``x1 * 2 ** x2`` elementwise.\n\n .. seealso:: :data:`numpy.ldexp`\n\n ''')\n\n\nfrexp = core.create_ufunc(\n 'cupy_frexp',\n ('e->ei', 'f->fi', 'd->di'),\n 'int nptr; out0 = frexp(in0, &nptr); out1 = nptr',\n doc='''Decomposes each element to mantissa and two's exponent.\n\n This ufunc outputs two arrays of the input dtype and the ``int`` dtype.\n\n .. seealso:: :data:`numpy.frexp`\n\n ''')\n\n\nnextafter = ufunc.create_math_ufunc(\n 'nextafter', 2, 'cupy_nextafter',\n '''Computes the nearest neighbor float values towards the second argument.\n\n .. 
note::\n For values that are close to zero (or denormal numbers),\n results of :func:`cupy.nextafter` may be different from those of\n :func:`numpy.nextafter`, because CuPy sets ``-ftz=true``.\n\n .. seealso:: :data:`numpy.nextafter`\n\n ''')\n", "path": "cupy/math/floating.py"}]}
1,044
154
gh_patches_debug_34472
rasdani/github-patches
git_diff
alltheplaces__alltheplaces-3951
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Spider dominos_pizza_jp is broken During the global build at 2021-06-30-14-42-26, spider **dominos_pizza_jp** failed with **0 features** and **0 errors**. Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/logs/dominos_pizza_jp.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/dominos_pizza_jp.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/dominos_pizza_jp.geojson)) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `locations/spiders/dominos_pizza_jp.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 import re 3 4 import scrapy 5 6 from locations.items import GeojsonPointItem 7 8 9 class DominosPizzaJPSpider(scrapy.Spider): 10 name = "dominos_pizza_jp" 11 item_attributes = {"brand": "Domino's", "brand_wikidata": "Q839466"} 12 allowed_domains = ["dominos.jp"] 13 start_urls = [ 14 "https://www.dominos.jp/sitemap.aspx", 15 ] 16 download_delay = 0.3 17 18 def parse(self, response): 19 response.selector.remove_namespaces() 20 store_urls = response.xpath('//url/loc/text()[contains(.,"/store/")]').extract() 21 for url in store_urls: 22 yield scrapy.Request(url, callback=self.parse_store) 23 24 def parse_store(self, response): 25 ref = re.search(r".+/(.+?)/?(?:\.html|$)", response.url).group(1) 26 27 properties = { 28 "ref": ref, 29 "name": response.xpath( 30 'normalize-space(//div[@class="storetitle"][1]/text())' 31 ).extract_first(), 32 "addr_full": response.xpath( 33 'normalize-space(//span[@id="store-address-info"]/p/a/text())' 34 ).extract_first(), 35 "postcode": re.search( 36 r"([\d-]*)$", 37 response.xpath( 38 'normalize-space(//div[@class="store-details-text"][1]/p/text())' 39 ).extract_first(), 40 ).group(1), 41 "country": "JP", 42 "lat": response.xpath( 43 'normalize-space(//input[@id="store-lat"]/@value)' 44 ).extract_first(), 45 "lon": response.xpath( 46 'normalize-space(//input[@id="store-lon"]/@value)' 47 ).extract_first(), 48 "phone": re.search( 49 r"\s([\d-]*)$", 50 response.xpath('//div[@id="store-tel"]/a/text()').extract_first(), 51 ).group(1), 52 "website": response.url, 53 } 54 55 yield GeojsonPointItem(**properties) 56 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/locations/spiders/dominos_pizza_jp.py b/locations/spiders/dominos_pizza_jp.py --- a/locations/spiders/dominos_pizza_jp.py +++ b/locations/spiders/dominos_pizza_jp.py @@ -8,7 +8,11 @@ class DominosPizzaJPSpider(scrapy.Spider): name = "dominos_pizza_jp" - item_attributes = {"brand": "Domino's", "brand_wikidata": "Q839466"} + item_attributes = { + "brand": "Domino's", + "brand_wikidata": "Q839466", + "country": "JP", + } allowed_domains = ["dominos.jp"] start_urls = [ "https://www.dominos.jp/sitemap.aspx", @@ -27,28 +31,24 @@ properties = { "ref": ref, "name": response.xpath( - 'normalize-space(//div[@class="storetitle"][1]/text())' + 'normalize-space(//h1[@class="storetitle"][1]/text())' ).extract_first(), "addr_full": response.xpath( - 'normalize-space(//span[@id="store-address-info"]/p/a/text())' + 'normalize-space(//span[@id="store-address-info"]/p/text()[4])' ).extract_first(), "postcode": re.search( r"([\d-]*)$", response.xpath( - 'normalize-space(//div[@class="store-details-text"][1]/p/text())' + 'normalize-space(//div[@class="store-details-text"]/span/p/text()[2])' ).extract_first(), ).group(1), - "country": "JP", "lat": response.xpath( 'normalize-space(//input[@id="store-lat"]/@value)' ).extract_first(), "lon": response.xpath( 'normalize-space(//input[@id="store-lon"]/@value)' ).extract_first(), - "phone": re.search( - r"\s([\d-]*)$", - response.xpath('//div[@id="store-tel"]/a/text()').extract_first(), - ).group(1), + "phone": response.xpath('//div[@id="store-tel"]/a/text()').extract_first(), "website": response.url, }
{"golden_diff": "diff --git a/locations/spiders/dominos_pizza_jp.py b/locations/spiders/dominos_pizza_jp.py\n--- a/locations/spiders/dominos_pizza_jp.py\n+++ b/locations/spiders/dominos_pizza_jp.py\n@@ -8,7 +8,11 @@\n \n class DominosPizzaJPSpider(scrapy.Spider):\n name = \"dominos_pizza_jp\"\n- item_attributes = {\"brand\": \"Domino's\", \"brand_wikidata\": \"Q839466\"}\n+ item_attributes = {\n+ \"brand\": \"Domino's\",\n+ \"brand_wikidata\": \"Q839466\",\n+ \"country\": \"JP\",\n+ }\n allowed_domains = [\"dominos.jp\"]\n start_urls = [\n \"https://www.dominos.jp/sitemap.aspx\",\n@@ -27,28 +31,24 @@\n properties = {\n \"ref\": ref,\n \"name\": response.xpath(\n- 'normalize-space(//div[@class=\"storetitle\"][1]/text())'\n+ 'normalize-space(//h1[@class=\"storetitle\"][1]/text())'\n ).extract_first(),\n \"addr_full\": response.xpath(\n- 'normalize-space(//span[@id=\"store-address-info\"]/p/a/text())'\n+ 'normalize-space(//span[@id=\"store-address-info\"]/p/text()[4])'\n ).extract_first(),\n \"postcode\": re.search(\n r\"([\\d-]*)$\",\n response.xpath(\n- 'normalize-space(//div[@class=\"store-details-text\"][1]/p/text())'\n+ 'normalize-space(//div[@class=\"store-details-text\"]/span/p/text()[2])'\n ).extract_first(),\n ).group(1),\n- \"country\": \"JP\",\n \"lat\": response.xpath(\n 'normalize-space(//input[@id=\"store-lat\"]/@value)'\n ).extract_first(),\n \"lon\": response.xpath(\n 'normalize-space(//input[@id=\"store-lon\"]/@value)'\n ).extract_first(),\n- \"phone\": re.search(\n- r\"\\s([\\d-]*)$\",\n- response.xpath('//div[@id=\"store-tel\"]/a/text()').extract_first(),\n- ).group(1),\n+ \"phone\": response.xpath('//div[@id=\"store-tel\"]/a/text()').extract_first(),\n \"website\": response.url,\n }\n", "issue": "Spider dominos_pizza_jp is broken\nDuring the global build at 2021-06-30-14-42-26, spider **dominos_pizza_jp** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/logs/dominos_pizza_jp.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/dominos_pizza_jp.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/dominos_pizza_jp.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport re\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass DominosPizzaJPSpider(scrapy.Spider):\n name = \"dominos_pizza_jp\"\n item_attributes = {\"brand\": \"Domino's\", \"brand_wikidata\": \"Q839466\"}\n allowed_domains = [\"dominos.jp\"]\n start_urls = [\n \"https://www.dominos.jp/sitemap.aspx\",\n ]\n download_delay = 0.3\n\n def parse(self, response):\n response.selector.remove_namespaces()\n store_urls = response.xpath('//url/loc/text()[contains(.,\"/store/\")]').extract()\n for url in store_urls:\n yield scrapy.Request(url, callback=self.parse_store)\n\n def parse_store(self, response):\n ref = re.search(r\".+/(.+?)/?(?:\\.html|$)\", response.url).group(1)\n\n properties = {\n \"ref\": ref,\n \"name\": response.xpath(\n 'normalize-space(//div[@class=\"storetitle\"][1]/text())'\n ).extract_first(),\n \"addr_full\": response.xpath(\n 'normalize-space(//span[@id=\"store-address-info\"]/p/a/text())'\n ).extract_first(),\n \"postcode\": re.search(\n r\"([\\d-]*)$\",\n response.xpath(\n 'normalize-space(//div[@class=\"store-details-text\"][1]/p/text())'\n ).extract_first(),\n ).group(1),\n \"country\": \"JP\",\n \"lat\": response.xpath(\n 'normalize-space(//input[@id=\"store-lat\"]/@value)'\n 
).extract_first(),\n \"lon\": response.xpath(\n 'normalize-space(//input[@id=\"store-lon\"]/@value)'\n ).extract_first(),\n \"phone\": re.search(\n r\"\\s([\\d-]*)$\",\n response.xpath('//div[@id=\"store-tel\"]/a/text()').extract_first(),\n ).group(1),\n \"website\": response.url,\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/dominos_pizza_jp.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport re\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass DominosPizzaJPSpider(scrapy.Spider):\n name = \"dominos_pizza_jp\"\n item_attributes = {\n \"brand\": \"Domino's\",\n \"brand_wikidata\": \"Q839466\",\n \"country\": \"JP\",\n }\n allowed_domains = [\"dominos.jp\"]\n start_urls = [\n \"https://www.dominos.jp/sitemap.aspx\",\n ]\n download_delay = 0.3\n\n def parse(self, response):\n response.selector.remove_namespaces()\n store_urls = response.xpath('//url/loc/text()[contains(.,\"/store/\")]').extract()\n for url in store_urls:\n yield scrapy.Request(url, callback=self.parse_store)\n\n def parse_store(self, response):\n ref = re.search(r\".+/(.+?)/?(?:\\.html|$)\", response.url).group(1)\n\n properties = {\n \"ref\": ref,\n \"name\": response.xpath(\n 'normalize-space(//h1[@class=\"storetitle\"][1]/text())'\n ).extract_first(),\n \"addr_full\": response.xpath(\n 'normalize-space(//span[@id=\"store-address-info\"]/p/text()[4])'\n ).extract_first(),\n \"postcode\": re.search(\n r\"([\\d-]*)$\",\n response.xpath(\n 'normalize-space(//div[@class=\"store-details-text\"]/span/p/text()[2])'\n ).extract_first(),\n ).group(1),\n \"lat\": response.xpath(\n 'normalize-space(//input[@id=\"store-lat\"]/@value)'\n ).extract_first(),\n \"lon\": response.xpath(\n 'normalize-space(//input[@id=\"store-lon\"]/@value)'\n ).extract_first(),\n \"phone\": response.xpath('//div[@id=\"store-tel\"]/a/text()').extract_first(),\n \"website\": response.url,\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/dominos_pizza_jp.py"}]}
1,025
541
gh_patches_debug_11441
rasdani/github-patches
git_diff
fossasia__open-event-server-5559
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Allow GET access to ticket attendees for an order, for non logged in users parent issue #5546 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `app/api/attendees.py` Content: ``` 1 from datetime import datetime 2 3 from flask import Blueprint, request, jsonify, abort, make_response 4 from flask_jwt import current_identity, jwt_required 5 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship 6 from flask_rest_jsonapi.exceptions import ObjectNotFound 7 from sqlalchemy.orm.exc import NoResultFound 8 9 from app.api.bootstrap import api 10 from app.api.helpers.db import safe_query, get_count 11 from app.api.helpers.exceptions import ( 12 ConflictException, 13 ForbiddenException, 14 UnprocessableEntity, 15 ) 16 from app.api.helpers.mail import send_email_to_attendees 17 from app.api.helpers.permission_manager import has_access 18 from app.api.helpers.permissions import jwt_required 19 from app.api.helpers.query import event_query 20 from app.api.helpers.utilities import require_relationship 21 from app.api.schema.attendees import AttendeeSchema 22 from app.models import db 23 from app.models.order import Order 24 from app.models.ticket import Ticket 25 from app.models.ticket_holder import TicketHolder 26 from app.models.user import User 27 28 attendee_misc_routes = Blueprint('attendee_misc', __name__, url_prefix='/v1') 29 30 31 class AttendeeListPost(ResourceList): 32 """ 33 List and create Attendees through direct URL 34 """ 35 36 def before_post(self, args, kwargs, data): 37 """ 38 Before post method to check for required relationship and proper permissions 39 :param args: 40 :param kwargs: 41 :param data: 42 :return: 43 """ 44 require_relationship(['ticket', 'event'], data) 45 46 ticket = db.session.query(Ticket).filter_by( 47 id=int(data['ticket']), deleted_at=None 48 ).first() 49 if ticket is None: 50 raise UnprocessableEntity( 51 {'pointer': '/data/relationships/ticket'}, "Invalid Ticket" 52 ) 53 if ticket.event_id != int(data['event']): 54 raise UnprocessableEntity( 55 {'pointer': '/data/relationships/ticket'}, 56 "Ticket belongs to a different Event" 57 ) 58 # Check if the ticket is already sold out or not. 59 if get_count(db.session.query(TicketHolder.id). 
60 filter_by(ticket_id=int(data['ticket']), deleted_at=None)) >= ticket.quantity: 61 raise ConflictException( 62 {'pointer': '/data/attributes/ticket_id'}, 63 "Ticket already sold out" 64 ) 65 66 if 'device_name_checkin' in data and data['device_name_checkin'] is not None: 67 if 'is_checked_in' not in data or not data['is_checked_in']: 68 raise UnprocessableEntity( 69 {'pointer': '/data/attributes/device_name_checkin'}, 70 "Attendee needs to be checked in first" 71 ) 72 elif 'checkin_times' not in data or data['checkin_times'] is None: 73 raise UnprocessableEntity( 74 {'pointer': '/data/attributes/device_name_checkin'}, 75 "Check in Times missing" 76 ) 77 elif len(data['checkin_times'].split(",")) != len(data['device_name_checkin'].split(",")): 78 raise UnprocessableEntity( 79 {'pointer': '/data/attributes/device_name_checkin'}, 80 "Check in Times missing for the corresponding device name" 81 ) 82 83 if 'checkin_times' in data: 84 if 'device_name_checkin' not in data or data['device_name_checkin'] is None: 85 data['device_name_checkin'] = '-' 86 87 decorators = (jwt_required,) 88 methods = ['POST'] 89 schema = AttendeeSchema 90 data_layer = {'session': db.session, 91 'model': TicketHolder} 92 93 94 class AttendeeList(ResourceList): 95 """ 96 List Attendees 97 """ 98 def query(self, view_kwargs): 99 """ 100 query method for Attendees List 101 :param view_kwargs: 102 :return: 103 """ 104 query_ = self.session.query(TicketHolder) 105 106 if view_kwargs.get('order_identifier'): 107 order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier') 108 if not has_access('is_registrar', event_id=order.event_id) and not has_access('is_user_itself', 109 user_id=order.user_id): 110 raise ForbiddenException({'source': ''}, 'Access Forbidden') 111 query_ = query_.join(Order).filter(Order.id == order.id) 112 113 if view_kwargs.get('ticket_id'): 114 ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id') 115 if not has_access('is_registrar', event_id=ticket.event_id): 116 raise ForbiddenException({'source': ''}, 'Access Forbidden') 117 query_ = query_.join(Ticket).filter(Ticket.id == ticket.id) 118 119 if view_kwargs.get('user_id'): 120 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id') 121 if not has_access('is_user_itself', user_id=user.id): 122 raise ForbiddenException({'source': ''}, 'Access Forbidden') 123 query_ = query_.join(User, User.email == TicketHolder.email).filter(User.id == user.id) 124 125 query_ = event_query(self, query_, view_kwargs, permission='is_registrar') 126 return query_ 127 128 view_kwargs = True 129 methods = ['GET', ] 130 schema = AttendeeSchema 131 data_layer = {'session': db.session, 132 'model': TicketHolder, 133 'methods': { 134 'query': query 135 }} 136 137 138 class AttendeeDetail(ResourceDetail): 139 """ 140 Attendee detail by id 141 """ 142 def before_get_object(self, view_kwargs): 143 """ 144 before get object method for attendee detail 145 :param view_kwargs: 146 :return: 147 """ 148 attendee = safe_query(self, TicketHolder, 'id', view_kwargs['id'], 'attendee_id') 149 if not has_access('is_registrar_or_user_itself', user_id=current_identity.id, event_id=attendee.event_id): 150 raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.') 151 152 def before_delete_object(self, obj, kwargs): 153 """ 154 before delete object method for attendee detail 155 :param obj: 156 :param kwargs: 157 :return: 158 """ 159 if not has_access('is_registrar', event_id=obj.event_id): 
160 raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.') 161 162 def before_update_object(self, obj, data, kwargs): 163 """ 164 before update object method for attendee detail 165 :param obj: 166 :param data: 167 :param kwargs: 168 :return: 169 """ 170 # if not has_access('is_registrar', event_id=obj.event_id): 171 # raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.') 172 173 if 'device_name_checkin' in data: 174 if 'checkin_times' not in data or data['checkin_times'] is None: 175 raise UnprocessableEntity( 176 {'pointer': '/data/attributes/device_name_checkin'}, 177 "Check in Times missing" 178 ) 179 180 if 'is_checked_in' in data and data['is_checked_in']: 181 if 'checkin_times' not in data or data['checkin_times'] is None: 182 raise UnprocessableEntity({'pointer': '/data/attributes/checkin_times'}, 183 "Check in time missing while trying to check in attendee") 184 else: 185 if obj.checkin_times and data['checkin_times'] not in obj.checkin_times.split(","): 186 data['checkin_times'] = '{},{}'.format(obj.checkin_times, data['checkin_times']) 187 elif obj.checkin_times and data['checkin_times'] in obj.checkin_times.split(","): 188 raise UnprocessableEntity( 189 {'pointer': '/data/attributes/checkin_times'}, 190 "Check in time already present" 191 ) 192 193 if 'device_name_checkin' in data and data['device_name_checkin'] is not None: 194 if obj.device_name_checkin is not None: 195 data['device_name_checkin'] = '{},{}'.format(obj.device_name_checkin, 196 data['device_name_checkin']) 197 198 if len(data['checkin_times'].split(",")) != len(data['device_name_checkin'].split(",")): 199 raise UnprocessableEntity( 200 {'pointer': '/data/attributes/device_name_checkin'}, 201 "Check in Time missing for the corresponding device name" 202 ) 203 else: 204 if obj.device_name_checkin is not None: 205 data['device_name_checkin'] = '{},{}'.format(obj.device_name_checkin, '-') 206 else: 207 data['device_name_checkin'] = '-' 208 209 if 'is_checked_out' in data and data['is_checked_out']: 210 attendee = safe_query(db, TicketHolder, 'id', kwargs['id'], 'attendee_id') 211 if not attendee.is_checked_out: 212 checkout_times = obj.checkout_times.split(',') if obj.checkout_times else [] 213 checkout_times.append(str(datetime.utcnow())) 214 data['checkout_times'] = ','.join(checkout_times) 215 216 if 'attendee_notes' in data: 217 if obj.attendee_notes and data['attendee_notes'] not in obj.attendee_notes.split(","): 218 data['attendee_notes'] = '{},{}'.format(obj.attendee_notes, data['attendee_notes']) 219 220 decorators = (jwt_required,) 221 schema = AttendeeSchema 222 data_layer = {'session': db.session, 223 'model': TicketHolder, 224 'methods': { 225 'before_get_object': before_get_object, 226 'before_update_object': before_update_object, 227 'before_delete_object': before_delete_object 228 }} 229 230 231 class AttendeeRelationshipRequired(ResourceRelationship): 232 """ 233 Attendee Relationship (Required) 234 """ 235 decorators = (jwt_required,) 236 methods = ['GET', 'PATCH'] 237 schema = AttendeeSchema 238 data_layer = {'session': db.session, 239 'model': TicketHolder} 240 241 242 class AttendeeRelationshipOptional(ResourceRelationship): 243 """ 244 Attendee Relationship(Optional) 245 """ 246 decorators = (api.has_permission('is_user_itself', fetch="user_id", fetch_as="id", model=TicketHolder),) 247 schema = AttendeeSchema 248 data_layer = {'session': db.session, 249 'model': TicketHolder} 250 251 252 
@attendee_misc_routes.route('/attendees/send-receipt', methods=['POST']) 253 @jwt_required 254 def send_receipt(): 255 """ 256 Send receipts to attendees related to the provided order. 257 :return: 258 """ 259 order_identifier = request.json.get('order-identifier') 260 if order_identifier: 261 try: 262 order = db.session.query(Order).filter_by(identifier=order_identifier).one() 263 except NoResultFound: 264 raise ObjectNotFound({'parameter': '{identifier}'}, "Order not found") 265 266 if (order.user_id != current_identity.id) and (not has_access('is_registrar', event_id=order.event_id)): 267 abort( 268 make_response(jsonify(error="You need to be the event organizer or order buyer to send receipts."), 403) 269 ) 270 elif order.status != 'completed': 271 abort( 272 make_response(jsonify(error="Cannot send receipt for an incomplete order"), 409) 273 ) 274 else: 275 send_email_to_attendees(order, current_identity.id) 276 return jsonify(message="receipt sent to attendees") 277 else: 278 abort( 279 make_response(jsonify(error="Order identifier missing"), 422) 280 ) 281 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/app/api/attendees.py b/app/api/attendees.py --- a/app/api/attendees.py +++ b/app/api/attendees.py @@ -112,8 +112,8 @@ if view_kwargs.get('ticket_id'): ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id') - if not has_access('is_registrar', event_id=ticket.event_id): - raise ForbiddenException({'source': ''}, 'Access Forbidden') + # if not has_access('is_registrar', event_id=ticket.event_id): + # raise ForbiddenException({'source': ''}, 'Access Forbidden') query_ = query_.join(Ticket).filter(Ticket.id == ticket.id) if view_kwargs.get('user_id'):
{"golden_diff": "diff --git a/app/api/attendees.py b/app/api/attendees.py\n--- a/app/api/attendees.py\n+++ b/app/api/attendees.py\n@@ -112,8 +112,8 @@\n \n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n- if not has_access('is_registrar', event_id=ticket.event_id):\n- raise ForbiddenException({'source': ''}, 'Access Forbidden')\n+ # if not has_access('is_registrar', event_id=ticket.event_id):\n+ # raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Ticket).filter(Ticket.id == ticket.id)\n \n if view_kwargs.get('user_id'):\n", "issue": "Allow GET access to ticket attendees for an order, for non logged in users\nparent issue #5546 \n", "before_files": [{"content": "from datetime import datetime\n\nfrom flask import Blueprint, request, jsonify, abort, make_response\nfrom flask_jwt import current_identity, jwt_required\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count\nfrom app.api.helpers.exceptions import (\n ConflictException,\n ForbiddenException,\n UnprocessableEntity,\n)\nfrom app.api.helpers.mail import send_email_to_attendees\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.attendees import AttendeeSchema\nfrom app.models import db\nfrom app.models.order import Order\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.user import User\n\nattendee_misc_routes = Blueprint('attendee_misc', __name__, url_prefix='/v1')\n\n\nclass AttendeeListPost(ResourceList):\n \"\"\"\n List and create Attendees through direct URL\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n Before post method to check for required relationship and proper permissions\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['ticket', 'event'], data)\n\n ticket = db.session.query(Ticket).filter_by(\n id=int(data['ticket']), deleted_at=None\n ).first()\n if ticket is None:\n raise UnprocessableEntity(\n {'pointer': '/data/relationships/ticket'}, \"Invalid Ticket\"\n )\n if ticket.event_id != int(data['event']):\n raise UnprocessableEntity(\n {'pointer': '/data/relationships/ticket'},\n \"Ticket belongs to a different Event\"\n )\n # Check if the ticket is already sold out or not.\n if get_count(db.session.query(TicketHolder.id).\n filter_by(ticket_id=int(data['ticket']), deleted_at=None)) >= ticket.quantity:\n raise ConflictException(\n {'pointer': '/data/attributes/ticket_id'},\n \"Ticket already sold out\"\n )\n\n if 'device_name_checkin' in data and data['device_name_checkin'] is not None:\n if 'is_checked_in' not in data or not data['is_checked_in']:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/device_name_checkin'},\n \"Attendee needs to be checked in first\"\n )\n elif 'checkin_times' not in data or data['checkin_times'] is None:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/device_name_checkin'},\n \"Check in Times missing\"\n )\n elif len(data['checkin_times'].split(\",\")) != len(data['device_name_checkin'].split(\",\")):\n raise UnprocessableEntity(\n {'pointer': 
'/data/attributes/device_name_checkin'},\n \"Check in Times missing for the corresponding device name\"\n )\n\n if 'checkin_times' in data:\n if 'device_name_checkin' not in data or data['device_name_checkin'] is None:\n data['device_name_checkin'] = '-'\n\n decorators = (jwt_required,)\n methods = ['POST']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeList(ResourceList):\n \"\"\"\n List Attendees\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for Attendees List\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(TicketHolder)\n\n if view_kwargs.get('order_identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')\n if not has_access('is_registrar', event_id=order.event_id) and not has_access('is_user_itself',\n user_id=order.user_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Order).filter(Order.id == order.id)\n\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n if not has_access('is_registrar', event_id=ticket.event_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Ticket).filter(Ticket.id == ticket.id)\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n if not has_access('is_user_itself', user_id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(User, User.email == TicketHolder.email).filter(User.id == user.id)\n\n query_ = event_query(self, query_, view_kwargs, permission='is_registrar')\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'query': query\n }}\n\n\nclass AttendeeDetail(ResourceDetail):\n \"\"\"\n Attendee detail by id\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get object method for attendee detail\n :param view_kwargs:\n :return:\n \"\"\"\n attendee = safe_query(self, TicketHolder, 'id', view_kwargs['id'], 'attendee_id')\n if not has_access('is_registrar_or_user_itself', user_id=current_identity.id, event_id=attendee.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_delete_object(self, obj, kwargs):\n \"\"\"\n before delete object method for attendee detail\n :param obj:\n :param kwargs:\n :return:\n \"\"\"\n if not has_access('is_registrar', event_id=obj.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_update_object(self, obj, data, kwargs):\n \"\"\"\n before update object method for attendee detail\n :param obj:\n :param data:\n :param kwargs:\n :return:\n \"\"\"\n# if not has_access('is_registrar', event_id=obj.event_id):\n# raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n if 'device_name_checkin' in data:\n if 'checkin_times' not in data or data['checkin_times'] is None:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/device_name_checkin'},\n \"Check in Times missing\"\n )\n\n if 'is_checked_in' in data and data['is_checked_in']:\n if 'checkin_times' not in data or data['checkin_times'] is None:\n raise UnprocessableEntity({'pointer': '/data/attributes/checkin_times'},\n \"Check in time missing while trying to check in attendee\")\n 
else:\n if obj.checkin_times and data['checkin_times'] not in obj.checkin_times.split(\",\"):\n data['checkin_times'] = '{},{}'.format(obj.checkin_times, data['checkin_times'])\n elif obj.checkin_times and data['checkin_times'] in obj.checkin_times.split(\",\"):\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/checkin_times'},\n \"Check in time already present\"\n )\n\n if 'device_name_checkin' in data and data['device_name_checkin'] is not None:\n if obj.device_name_checkin is not None:\n data['device_name_checkin'] = '{},{}'.format(obj.device_name_checkin,\n data['device_name_checkin'])\n\n if len(data['checkin_times'].split(\",\")) != len(data['device_name_checkin'].split(\",\")):\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/device_name_checkin'},\n \"Check in Time missing for the corresponding device name\"\n )\n else:\n if obj.device_name_checkin is not None:\n data['device_name_checkin'] = '{},{}'.format(obj.device_name_checkin, '-')\n else:\n data['device_name_checkin'] = '-'\n\n if 'is_checked_out' in data and data['is_checked_out']:\n attendee = safe_query(db, TicketHolder, 'id', kwargs['id'], 'attendee_id')\n if not attendee.is_checked_out:\n checkout_times = obj.checkout_times.split(',') if obj.checkout_times else []\n checkout_times.append(str(datetime.utcnow()))\n data['checkout_times'] = ','.join(checkout_times)\n\n if 'attendee_notes' in data:\n if obj.attendee_notes and data['attendee_notes'] not in obj.attendee_notes.split(\",\"):\n data['attendee_notes'] = '{},{}'.format(obj.attendee_notes, data['attendee_notes'])\n\n decorators = (jwt_required,)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'before_get_object': before_get_object,\n 'before_update_object': before_update_object,\n 'before_delete_object': before_delete_object\n }}\n\n\nclass AttendeeRelationshipRequired(ResourceRelationship):\n \"\"\"\n Attendee Relationship (Required)\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeRelationshipOptional(ResourceRelationship):\n \"\"\"\n Attendee Relationship(Optional)\n \"\"\"\n decorators = (api.has_permission('is_user_itself', fetch=\"user_id\", fetch_as=\"id\", model=TicketHolder),)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\n@attendee_misc_routes.route('/attendees/send-receipt', methods=['POST'])\n@jwt_required\ndef send_receipt():\n \"\"\"\n Send receipts to attendees related to the provided order.\n :return:\n \"\"\"\n order_identifier = request.json.get('order-identifier')\n if order_identifier:\n try:\n order = db.session.query(Order).filter_by(identifier=order_identifier).one()\n except NoResultFound:\n raise ObjectNotFound({'parameter': '{identifier}'}, \"Order not found\")\n\n if (order.user_id != current_identity.id) and (not has_access('is_registrar', event_id=order.event_id)):\n abort(\n make_response(jsonify(error=\"You need to be the event organizer or order buyer to send receipts.\"), 403)\n )\n elif order.status != 'completed':\n abort(\n make_response(jsonify(error=\"Cannot send receipt for an incomplete order\"), 409)\n )\n else:\n send_email_to_attendees(order, current_identity.id)\n return jsonify(message=\"receipt sent to attendees\")\n else:\n abort(\n make_response(jsonify(error=\"Order identifier missing\"), 422)\n )\n", "path": "app/api/attendees.py"}], "after_files": [{"content": 
"from datetime import datetime\n\nfrom flask import Blueprint, request, jsonify, abort, make_response\nfrom flask_jwt import current_identity, jwt_required\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count\nfrom app.api.helpers.exceptions import (\n ConflictException,\n ForbiddenException,\n UnprocessableEntity,\n)\nfrom app.api.helpers.mail import send_email_to_attendees\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.attendees import AttendeeSchema\nfrom app.models import db\nfrom app.models.order import Order\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.user import User\n\nattendee_misc_routes = Blueprint('attendee_misc', __name__, url_prefix='/v1')\n\n\nclass AttendeeListPost(ResourceList):\n \"\"\"\n List and create Attendees through direct URL\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n Before post method to check for required relationship and proper permissions\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['ticket', 'event'], data)\n\n ticket = db.session.query(Ticket).filter_by(\n id=int(data['ticket']), deleted_at=None\n ).first()\n if ticket is None:\n raise UnprocessableEntity(\n {'pointer': '/data/relationships/ticket'}, \"Invalid Ticket\"\n )\n if ticket.event_id != int(data['event']):\n raise UnprocessableEntity(\n {'pointer': '/data/relationships/ticket'},\n \"Ticket belongs to a different Event\"\n )\n # Check if the ticket is already sold out or not.\n if get_count(db.session.query(TicketHolder.id).\n filter_by(ticket_id=int(data['ticket']), deleted_at=None)) >= ticket.quantity:\n raise ConflictException(\n {'pointer': '/data/attributes/ticket_id'},\n \"Ticket already sold out\"\n )\n\n if 'device_name_checkin' in data and data['device_name_checkin'] is not None:\n if 'is_checked_in' not in data or not data['is_checked_in']:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/device_name_checkin'},\n \"Attendee needs to be checked in first\"\n )\n elif 'checkin_times' not in data or data['checkin_times'] is None:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/device_name_checkin'},\n \"Check in Times missing\"\n )\n elif len(data['checkin_times'].split(\",\")) != len(data['device_name_checkin'].split(\",\")):\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/device_name_checkin'},\n \"Check in Times missing for the corresponding device name\"\n )\n\n if 'checkin_times' in data:\n if 'device_name_checkin' not in data or data['device_name_checkin'] is None:\n data['device_name_checkin'] = '-'\n\n decorators = (jwt_required,)\n methods = ['POST']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeList(ResourceList):\n \"\"\"\n List Attendees\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for Attendees List\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(TicketHolder)\n\n if view_kwargs.get('order_identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 
'order_identifier')\n if not has_access('is_registrar', event_id=order.event_id) and not has_access('is_user_itself',\n user_id=order.user_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Order).filter(Order.id == order.id)\n\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n # if not has_access('is_registrar', event_id=ticket.event_id):\n # raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Ticket).filter(Ticket.id == ticket.id)\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n if not has_access('is_user_itself', user_id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(User, User.email == TicketHolder.email).filter(User.id == user.id)\n\n query_ = event_query(self, query_, view_kwargs, permission='is_registrar')\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'query': query\n }}\n\n\nclass AttendeeDetail(ResourceDetail):\n \"\"\"\n Attendee detail by id\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get object method for attendee detail\n :param view_kwargs:\n :return:\n \"\"\"\n attendee = safe_query(self, TicketHolder, 'id', view_kwargs['id'], 'attendee_id')\n if not has_access('is_registrar_or_user_itself', user_id=current_identity.id, event_id=attendee.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_delete_object(self, obj, kwargs):\n \"\"\"\n before delete object method for attendee detail\n :param obj:\n :param kwargs:\n :return:\n \"\"\"\n if not has_access('is_registrar', event_id=obj.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_update_object(self, obj, data, kwargs):\n \"\"\"\n before update object method for attendee detail\n :param obj:\n :param data:\n :param kwargs:\n :return:\n \"\"\"\n# if not has_access('is_registrar', event_id=obj.event_id):\n# raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n if 'device_name_checkin' in data:\n if 'checkin_times' not in data or data['checkin_times'] is None:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/device_name_checkin'},\n \"Check in Times missing\"\n )\n\n if 'is_checked_in' in data and data['is_checked_in']:\n if 'checkin_times' not in data or data['checkin_times'] is None:\n raise UnprocessableEntity({'pointer': '/data/attributes/checkin_times'},\n \"Check in time missing while trying to check in attendee\")\n else:\n if obj.checkin_times and data['checkin_times'] not in obj.checkin_times.split(\",\"):\n data['checkin_times'] = '{},{}'.format(obj.checkin_times, data['checkin_times'])\n elif obj.checkin_times and data['checkin_times'] in obj.checkin_times.split(\",\"):\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/checkin_times'},\n \"Check in time already present\"\n )\n\n if 'device_name_checkin' in data and data['device_name_checkin'] is not None:\n if obj.device_name_checkin is not None:\n data['device_name_checkin'] = '{},{}'.format(obj.device_name_checkin,\n data['device_name_checkin'])\n\n if len(data['checkin_times'].split(\",\")) != len(data['device_name_checkin'].split(\",\")):\n raise UnprocessableEntity(\n {'pointer': 
'/data/attributes/device_name_checkin'},\n \"Check in Time missing for the corresponding device name\"\n )\n else:\n if obj.device_name_checkin is not None:\n data['device_name_checkin'] = '{},{}'.format(obj.device_name_checkin, '-')\n else:\n data['device_name_checkin'] = '-'\n\n if 'is_checked_out' in data and data['is_checked_out']:\n attendee = safe_query(db, TicketHolder, 'id', kwargs['id'], 'attendee_id')\n if not attendee.is_checked_out:\n checkout_times = obj.checkout_times.split(',') if obj.checkout_times else []\n checkout_times.append(str(datetime.utcnow()))\n data['checkout_times'] = ','.join(checkout_times)\n\n if 'attendee_notes' in data:\n if obj.attendee_notes and data['attendee_notes'] not in obj.attendee_notes.split(\",\"):\n data['attendee_notes'] = '{},{}'.format(obj.attendee_notes, data['attendee_notes'])\n\n decorators = (jwt_required,)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'before_get_object': before_get_object,\n 'before_update_object': before_update_object,\n 'before_delete_object': before_delete_object\n }}\n\n\nclass AttendeeRelationshipRequired(ResourceRelationship):\n \"\"\"\n Attendee Relationship (Required)\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeRelationshipOptional(ResourceRelationship):\n \"\"\"\n Attendee Relationship(Optional)\n \"\"\"\n decorators = (api.has_permission('is_user_itself', fetch=\"user_id\", fetch_as=\"id\", model=TicketHolder),)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\n@attendee_misc_routes.route('/attendees/send-receipt', methods=['POST'])\n@jwt_required\ndef send_receipt():\n \"\"\"\n Send receipts to attendees related to the provided order.\n :return:\n \"\"\"\n order_identifier = request.json.get('order-identifier')\n if order_identifier:\n try:\n order = db.session.query(Order).filter_by(identifier=order_identifier).one()\n except NoResultFound:\n raise ObjectNotFound({'parameter': '{identifier}'}, \"Order not found\")\n\n if (order.user_id != current_identity.id) and (not has_access('is_registrar', event_id=order.event_id)):\n abort(\n make_response(jsonify(error=\"You need to be the event organizer or order buyer to send receipts.\"), 403)\n )\n elif order.status != 'completed':\n abort(\n make_response(jsonify(error=\"Cannot send receipt for an incomplete order\"), 409)\n )\n else:\n send_email_to_attendees(order, current_identity.id)\n return jsonify(message=\"receipt sent to attendees\")\n else:\n abort(\n make_response(jsonify(error=\"Order identifier missing\"), 422)\n )\n", "path": "app/api/attendees.py"}]}
3,411
177
gh_patches_debug_8208
rasdani/github-patches
git_diff
learningequality__kolibri-1529
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- <Enhancement> A pop up on the screen when a learner masters an exercise would be great. ## Summary <Enhancement> A pop up on the screen when a learner masters an exercise would be great. Presently the learner is not aware if he/she has mastered the topic, since the point addition happens very subtly. Hence a Pop up on screen that would read "You have Mastered the Exercise and earned 500 points, Keep Going" would be great. Anything even shorter would be great. Its just an on screen pop up would be great. ## System information - Version: 0.4.0Beta1 - Operating system: Linus LTS 14.04 - Browser: Chrome ## How to reproduce 1. Master an exercise on a tablet. 2. You not know if you have mastered the topic clearly . ## Screenshots ![img-20170511-wa0002](https://cloud.githubusercontent.com/assets/13453618/25945726/a11480a4-3665-11e7-9621-52a26a8be9c2.jpg) ![img-20170511-wa0001](https://cloud.githubusercontent.com/assets/13453618/25945727/a2af45b6-3665-11e7-87db-d9d6003a41f0.jpg) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `kolibri/content/api.py` Content: ``` 1 from functools import reduce 2 from random import sample 3 4 from django.core.cache import cache 5 from django.db.models import Q 6 from django.db.models.aggregates import Count 7 from kolibri.content import models, serializers 8 from kolibri.content.content_db_router import get_active_content_database 9 from kolibri.logger.models import ContentSessionLog, ContentSummaryLog 10 from le_utils.constants import content_kinds 11 from rest_framework import filters, pagination, viewsets 12 from rest_framework.decorators import detail_route 13 from rest_framework.response import Response 14 15 from .utils.search import fuzz 16 17 def _join_with_logical_operator(lst, operator): 18 op = ") {operator} (".format(operator=operator) 19 return "(({items}))".format(items=op.join(lst)) 20 21 class ChannelMetadataCacheViewSet(viewsets.ModelViewSet): 22 serializer_class = serializers.ChannelMetadataCacheSerializer 23 24 def get_queryset(self): 25 return models.ChannelMetadataCache.objects.all() 26 27 28 class ContentNodeFilter(filters.FilterSet): 29 search = filters.django_filters.MethodFilter(action='title_description_filter') 30 recommendations_for = filters.django_filters.MethodFilter() 31 next_steps = filters.django_filters.MethodFilter() 32 popular = filters.django_filters.MethodFilter() 33 resume = filters.django_filters.MethodFilter() 34 kind = filters.django_filters.MethodFilter() 35 ids = filters.django_filters.MethodFilter() 36 37 class Meta: 38 model = models.ContentNode 39 fields = ['parent', 'search', 'prerequisite_for', 'has_prerequisite', 'related', 'recommendations_for', 'ids', 'content_id'] 40 41 def title_description_filter(self, queryset, value): 42 """ 43 search for title or description that contains the keywords that are not necessary in adjacent 44 """ 45 exact_match = queryset.filter(Q(parent__isnull=False), Q(title__icontains=value) | Q(description__icontains=value)) 46 if exact_match: 47 return exact_match 48 # if no exact match, fuzzy search using the stemmed_metaphone field in ContentNode that covers the title and description 49 fuzzed_tokens = [fuzz(word) for word in value.split()] 50 if not fuzzed_tokens[0]: 51 return [] 52 token_queries = [reduce(lambda x, y: x | y, [Q(stemmed_metaphone__contains=token) for token in 
tokens]) for tokens in fuzzed_tokens] 53 return queryset.filter( 54 Q(parent__isnull=False), 55 reduce(lambda x, y: x & y, token_queries)) 56 57 def filter_recommendations_for(self, queryset, value): 58 """ 59 Recommend items that are similar to this piece of content. 60 """ 61 recc_node = queryset.get(pk=value) 62 descendants = recc_node.get_descendants(include_self=False).exclude(kind__in=['topic', '']) 63 siblings = recc_node.get_siblings(include_self=False).exclude(kind__in=['topic', '']) 64 data = descendants | siblings # concatenates different querysets 65 return data 66 67 def filter_next_steps(self, queryset, value): 68 """ 69 Recommend uncompleted content, content that has user completed content as a prerequisite. 70 71 :param queryset: all content nodes for this channel 72 :param value: id of currently logged in user, or none if user is anonymous 73 :return: uncompleted content nodes, or empty queryset if user is anonymous 74 """ 75 76 # if user is anonymous, don't return any nodes 77 if not value: 78 return queryset.none() 79 80 tables = [ 81 '"{summarylog_table}" AS "complete_log"', 82 '"{summarylog_table}" AS "incomplete_log"', 83 '"{content_table}" AS "complete_node"', 84 '"{content_table}" AS "incomplete_node"', 85 ] 86 table_names = { 87 "summarylog_table": ContentSummaryLog._meta.db_table, 88 "content_table": models.ContentNode._meta.db_table, 89 } 90 # aliases for sql table names 91 sql_tables_and_aliases = [table.format(**table_names) for table in tables] 92 # where conditions joined by ANDs 93 where_statements = ["NOT (incomplete_log.progress < 1 AND incomplete_log.content_id = incomplete_node.content_id)", 94 "complete_log.user_id = '{user_id}'".format(user_id=value), 95 "incomplete_log.user_id = '{user_id}'".format(user_id=value), 96 "complete_log.progress = 1", 97 "complete_node.rght = incomplete_node.lft - 1", 98 "complete_log.content_id = complete_node.content_id"] 99 # custom SQL query to get uncompleted content based on mptt algorithm 100 next_steps_recommendations = "SELECT incomplete_node.* FROM {tables} WHERE {where}".format( 101 tables=", ".join(sql_tables_and_aliases), 102 where=_join_with_logical_operator(where_statements, "AND") 103 ) 104 return models.ContentNode.objects.raw(next_steps_recommendations) 105 106 def filter_popular(self, queryset, value): 107 """ 108 Recommend content that is popular with all users. 
109 110 :param queryset: all content nodes for this channel 111 :param value: id of currently logged in user, or none if user is anonymous 112 :return: 10 most popular content nodes 113 """ 114 if ContentSessionLog.objects.count() < 50: 115 # return 25 random content nodes if not enough session logs 116 pks = queryset.values_list('pk', flat=True).exclude(kind__in=['topic', '']) 117 count = min(pks.count(), 25) 118 return queryset.filter(pk__in=sample(list(pks), count)) 119 120 cache_key = 'popular_for_{}'.format(get_active_content_database()) 121 if cache.get(cache_key): 122 return cache.get(cache_key) 123 124 # get the most accessed content nodes 125 content_counts_sorted = ContentSessionLog.objects \ 126 .filter(channel_id=get_active_content_database()) \ 127 .values_list('content_id', flat=True) \ 128 .annotate(Count('content_id')) \ 129 .order_by('-content_id__count') 130 131 most_popular = queryset.filter(content_id__in=list(content_counts_sorted[:10])) 132 133 # cache the popular results queryset for 10 minutes, for efficiency 134 cache.set(cache_key, most_popular, 60 * 10) 135 return most_popular 136 137 def filter_resume(self, queryset, value): 138 """ 139 Recommend content that the user has recently engaged with, but not finished. 140 141 :param queryset: all content nodes for this channel 142 :param value: id of currently logged in user, or none if user is anonymous 143 :return: 10 most recently viewed content nodes 144 """ 145 146 # if user is anonymous, return no nodes 147 if not value: 148 return queryset.none() 149 150 # get the most recently viewed, but not finished, content nodes 151 content_ids = ContentSummaryLog.objects \ 152 .filter(user=value, channel_id=get_active_content_database()) \ 153 .exclude(progress=1) \ 154 .order_by('end_timestamp') \ 155 .values_list('content_id', flat=True) \ 156 .distinct() 157 158 resume = queryset.filter(content_id__in=list(content_ids[:10])) 159 160 return resume 161 162 def filter_kind(self, queryset, value): 163 """ 164 Show only content of a given kind. 165 166 :param queryset: all content nodes for this channel 167 :param value: 'content' for everything except topics, or one of the content kind constants 168 :return: content nodes of the given kind 169 """ 170 if value == 'content': 171 return queryset.exclude(kind=content_kinds.TOPIC).order_by("lft") 172 return queryset.filter(kind=value).order_by("lft") 173 174 def filter_ids(self, queryset, value): 175 return queryset.filter(pk__in=value.split(',')) 176 177 178 class OptionalPageNumberPagination(pagination.PageNumberPagination): 179 """ 180 Pagination class that allows for page number-style pagination, when requested. 181 To activate, the `page_size` argument must be set. 
For example, to request the first 20 records: 182 `?page_size=20&page=1` 183 """ 184 page_size = None 185 page_size_query_param = "page_size" 186 187 188 class ContentNodeViewset(viewsets.ModelViewSet): 189 serializer_class = serializers.ContentNodeSerializer 190 filter_backends = (filters.DjangoFilterBackend,) 191 filter_class = ContentNodeFilter 192 pagination_class = OptionalPageNumberPagination 193 194 def get_queryset(self): 195 return models.ContentNode.objects.all().select_related( 196 'parent', 197 'license', 198 ).prefetch_related( 199 'assessmentmetadata', 200 'files', 201 ) 202 203 @detail_route(methods=['get']) 204 def descendants(self, request, **kwargs): 205 node = self.get_object() 206 kind = self.request.query_params.get('descendant_kind', None) 207 descendants = node.get_descendants() 208 if kind: 209 descendants = descendants.filter(kind=kind) 210 211 serializer = self.get_serializer(descendants, many=True) 212 return Response(serializer.data) 213 214 @detail_route(methods=['get']) 215 def ancestors(self, request, **kwargs): 216 return Response(self.get_object().get_ancestors().values('pk', 'title')) 217 218 @detail_route(methods=['get']) 219 def next_content(self, request, **kwargs): 220 # retrieve the "next" content node, according to depth-first tree traversal 221 this_item = self.get_object() 222 next_item = models.ContentNode.objects.filter(tree_id=this_item.tree_id, lft__gt=this_item.rght).order_by("lft").first() 223 if not next_item: 224 next_item = this_item.get_root() 225 return Response({'kind': next_item.kind, 'id': next_item.id}) 226 227 228 class FileViewset(viewsets.ModelViewSet): 229 serializer_class = serializers.FileSerializer 230 pagination_class = OptionalPageNumberPagination 231 232 def get_queryset(self): 233 return models.File.objects.all() 234 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/kolibri/content/api.py b/kolibri/content/api.py --- a/kolibri/content/api.py +++ b/kolibri/content/api.py @@ -222,7 +222,7 @@ next_item = models.ContentNode.objects.filter(tree_id=this_item.tree_id, lft__gt=this_item.rght).order_by("lft").first() if not next_item: next_item = this_item.get_root() - return Response({'kind': next_item.kind, 'id': next_item.id}) + return Response({'kind': next_item.kind, 'id': next_item.id, 'title': next_item.title}) class FileViewset(viewsets.ModelViewSet):
{"golden_diff": "diff --git a/kolibri/content/api.py b/kolibri/content/api.py\n--- a/kolibri/content/api.py\n+++ b/kolibri/content/api.py\n@@ -222,7 +222,7 @@\n next_item = models.ContentNode.objects.filter(tree_id=this_item.tree_id, lft__gt=this_item.rght).order_by(\"lft\").first()\n if not next_item:\n next_item = this_item.get_root()\n- return Response({'kind': next_item.kind, 'id': next_item.id})\n+ return Response({'kind': next_item.kind, 'id': next_item.id, 'title': next_item.title})\n \n \n class FileViewset(viewsets.ModelViewSet):\n", "issue": "<Enhancement> A pop up on the screen when a learner masters an exercise would be great.\n## Summary\r\n\r\n<Enhancement> A pop up on the screen when a learner masters an exercise would be great.\r\n\r\nPresently the learner is not aware if he/she has mastered the topic, since the point addition happens very subtly.\r\nHence a Pop up on screen that would read \"You have Mastered the Exercise and earned 500 points, Keep Going\" would be great. Anything even shorter would be great. \r\n\r\nIts just an on screen pop up would be great.\r\n\r\n## System information\r\n\r\n - Version: 0.4.0Beta1\r\n - Operating system: Linus LTS 14.04\r\n - Browser: Chrome\r\n\r\n## How to reproduce\r\n\r\n1. Master an exercise on a tablet.\r\n2. You not know if you have mastered the topic clearly .\r\n\r\n## Screenshots\r\n\r\n![img-20170511-wa0002](https://cloud.githubusercontent.com/assets/13453618/25945726/a11480a4-3665-11e7-9621-52a26a8be9c2.jpg)\r\n\r\n![img-20170511-wa0001](https://cloud.githubusercontent.com/assets/13453618/25945727/a2af45b6-3665-11e7-87db-d9d6003a41f0.jpg)\r\n\n", "before_files": [{"content": "from functools import reduce\nfrom random import sample\n\nfrom django.core.cache import cache\nfrom django.db.models import Q\nfrom django.db.models.aggregates import Count\nfrom kolibri.content import models, serializers\nfrom kolibri.content.content_db_router import get_active_content_database\nfrom kolibri.logger.models import ContentSessionLog, ContentSummaryLog\nfrom le_utils.constants import content_kinds\nfrom rest_framework import filters, pagination, viewsets\nfrom rest_framework.decorators import detail_route\nfrom rest_framework.response import Response\n\nfrom .utils.search import fuzz\n\ndef _join_with_logical_operator(lst, operator):\n op = \") {operator} (\".format(operator=operator)\n return \"(({items}))\".format(items=op.join(lst))\n\nclass ChannelMetadataCacheViewSet(viewsets.ModelViewSet):\n serializer_class = serializers.ChannelMetadataCacheSerializer\n\n def get_queryset(self):\n return models.ChannelMetadataCache.objects.all()\n\n\nclass ContentNodeFilter(filters.FilterSet):\n search = filters.django_filters.MethodFilter(action='title_description_filter')\n recommendations_for = filters.django_filters.MethodFilter()\n next_steps = filters.django_filters.MethodFilter()\n popular = filters.django_filters.MethodFilter()\n resume = filters.django_filters.MethodFilter()\n kind = filters.django_filters.MethodFilter()\n ids = filters.django_filters.MethodFilter()\n\n class Meta:\n model = models.ContentNode\n fields = ['parent', 'search', 'prerequisite_for', 'has_prerequisite', 'related', 'recommendations_for', 'ids', 'content_id']\n\n def title_description_filter(self, queryset, value):\n \"\"\"\n search for title or description that contains the keywords that are not necessary in adjacent\n \"\"\"\n exact_match = queryset.filter(Q(parent__isnull=False), Q(title__icontains=value) | Q(description__icontains=value))\n if 
exact_match:\n return exact_match\n # if no exact match, fuzzy search using the stemmed_metaphone field in ContentNode that covers the title and description\n fuzzed_tokens = [fuzz(word) for word in value.split()]\n if not fuzzed_tokens[0]:\n return []\n token_queries = [reduce(lambda x, y: x | y, [Q(stemmed_metaphone__contains=token) for token in tokens]) for tokens in fuzzed_tokens]\n return queryset.filter(\n Q(parent__isnull=False),\n reduce(lambda x, y: x & y, token_queries))\n\n def filter_recommendations_for(self, queryset, value):\n \"\"\"\n Recommend items that are similar to this piece of content.\n \"\"\"\n recc_node = queryset.get(pk=value)\n descendants = recc_node.get_descendants(include_self=False).exclude(kind__in=['topic', ''])\n siblings = recc_node.get_siblings(include_self=False).exclude(kind__in=['topic', ''])\n data = descendants | siblings # concatenates different querysets\n return data\n\n def filter_next_steps(self, queryset, value):\n \"\"\"\n Recommend uncompleted content, content that has user completed content as a prerequisite.\n\n :param queryset: all content nodes for this channel\n :param value: id of currently logged in user, or none if user is anonymous\n :return: uncompleted content nodes, or empty queryset if user is anonymous\n \"\"\"\n\n # if user is anonymous, don't return any nodes\n if not value:\n return queryset.none()\n\n tables = [\n '\"{summarylog_table}\" AS \"complete_log\"',\n '\"{summarylog_table}\" AS \"incomplete_log\"',\n '\"{content_table}\" AS \"complete_node\"',\n '\"{content_table}\" AS \"incomplete_node\"',\n ]\n table_names = {\n \"summarylog_table\": ContentSummaryLog._meta.db_table,\n \"content_table\": models.ContentNode._meta.db_table,\n }\n # aliases for sql table names\n sql_tables_and_aliases = [table.format(**table_names) for table in tables]\n # where conditions joined by ANDs\n where_statements = [\"NOT (incomplete_log.progress < 1 AND incomplete_log.content_id = incomplete_node.content_id)\",\n \"complete_log.user_id = '{user_id}'\".format(user_id=value),\n \"incomplete_log.user_id = '{user_id}'\".format(user_id=value),\n \"complete_log.progress = 1\",\n \"complete_node.rght = incomplete_node.lft - 1\",\n \"complete_log.content_id = complete_node.content_id\"]\n # custom SQL query to get uncompleted content based on mptt algorithm\n next_steps_recommendations = \"SELECT incomplete_node.* FROM {tables} WHERE {where}\".format(\n tables=\", \".join(sql_tables_and_aliases),\n where=_join_with_logical_operator(where_statements, \"AND\")\n )\n return models.ContentNode.objects.raw(next_steps_recommendations)\n\n def filter_popular(self, queryset, value):\n \"\"\"\n Recommend content that is popular with all users.\n\n :param queryset: all content nodes for this channel\n :param value: id of currently logged in user, or none if user is anonymous\n :return: 10 most popular content nodes\n \"\"\"\n if ContentSessionLog.objects.count() < 50:\n # return 25 random content nodes if not enough session logs\n pks = queryset.values_list('pk', flat=True).exclude(kind__in=['topic', ''])\n count = min(pks.count(), 25)\n return queryset.filter(pk__in=sample(list(pks), count))\n\n cache_key = 'popular_for_{}'.format(get_active_content_database())\n if cache.get(cache_key):\n return cache.get(cache_key)\n\n # get the most accessed content nodes\n content_counts_sorted = ContentSessionLog.objects \\\n .filter(channel_id=get_active_content_database()) \\\n .values_list('content_id', flat=True) \\\n .annotate(Count('content_id')) \\\n 
.order_by('-content_id__count')\n\n most_popular = queryset.filter(content_id__in=list(content_counts_sorted[:10]))\n\n # cache the popular results queryset for 10 minutes, for efficiency\n cache.set(cache_key, most_popular, 60 * 10)\n return most_popular\n\n def filter_resume(self, queryset, value):\n \"\"\"\n Recommend content that the user has recently engaged with, but not finished.\n\n :param queryset: all content nodes for this channel\n :param value: id of currently logged in user, or none if user is anonymous\n :return: 10 most recently viewed content nodes\n \"\"\"\n\n # if user is anonymous, return no nodes\n if not value:\n return queryset.none()\n\n # get the most recently viewed, but not finished, content nodes\n content_ids = ContentSummaryLog.objects \\\n .filter(user=value, channel_id=get_active_content_database()) \\\n .exclude(progress=1) \\\n .order_by('end_timestamp') \\\n .values_list('content_id', flat=True) \\\n .distinct()\n\n resume = queryset.filter(content_id__in=list(content_ids[:10]))\n\n return resume\n\n def filter_kind(self, queryset, value):\n \"\"\"\n Show only content of a given kind.\n\n :param queryset: all content nodes for this channel\n :param value: 'content' for everything except topics, or one of the content kind constants\n :return: content nodes of the given kind\n \"\"\"\n if value == 'content':\n return queryset.exclude(kind=content_kinds.TOPIC).order_by(\"lft\")\n return queryset.filter(kind=value).order_by(\"lft\")\n\n def filter_ids(self, queryset, value):\n return queryset.filter(pk__in=value.split(','))\n\n\nclass OptionalPageNumberPagination(pagination.PageNumberPagination):\n \"\"\"\n Pagination class that allows for page number-style pagination, when requested.\n To activate, the `page_size` argument must be set. 
For example, to request the first 20 records:\n `?page_size=20&page=1`\n \"\"\"\n page_size = None\n page_size_query_param = \"page_size\"\n\n\nclass ContentNodeViewset(viewsets.ModelViewSet):\n serializer_class = serializers.ContentNodeSerializer\n filter_backends = (filters.DjangoFilterBackend,)\n filter_class = ContentNodeFilter\n pagination_class = OptionalPageNumberPagination\n\n def get_queryset(self):\n return models.ContentNode.objects.all().select_related(\n 'parent',\n 'license',\n ).prefetch_related(\n 'assessmentmetadata',\n 'files',\n )\n\n @detail_route(methods=['get'])\n def descendants(self, request, **kwargs):\n node = self.get_object()\n kind = self.request.query_params.get('descendant_kind', None)\n descendants = node.get_descendants()\n if kind:\n descendants = descendants.filter(kind=kind)\n\n serializer = self.get_serializer(descendants, many=True)\n return Response(serializer.data)\n\n @detail_route(methods=['get'])\n def ancestors(self, request, **kwargs):\n return Response(self.get_object().get_ancestors().values('pk', 'title'))\n\n @detail_route(methods=['get'])\n def next_content(self, request, **kwargs):\n # retrieve the \"next\" content node, according to depth-first tree traversal\n this_item = self.get_object()\n next_item = models.ContentNode.objects.filter(tree_id=this_item.tree_id, lft__gt=this_item.rght).order_by(\"lft\").first()\n if not next_item:\n next_item = this_item.get_root()\n return Response({'kind': next_item.kind, 'id': next_item.id})\n\n\nclass FileViewset(viewsets.ModelViewSet):\n serializer_class = serializers.FileSerializer\n pagination_class = OptionalPageNumberPagination\n\n def get_queryset(self):\n return models.File.objects.all()\n", "path": "kolibri/content/api.py"}], "after_files": [{"content": "from functools import reduce\nfrom random import sample\n\nfrom django.core.cache import cache\nfrom django.db.models import Q\nfrom django.db.models.aggregates import Count\nfrom kolibri.content import models, serializers\nfrom kolibri.content.content_db_router import get_active_content_database\nfrom kolibri.logger.models import ContentSessionLog, ContentSummaryLog\nfrom le_utils.constants import content_kinds\nfrom rest_framework import filters, pagination, viewsets\nfrom rest_framework.decorators import detail_route\nfrom rest_framework.response import Response\n\nfrom .utils.search import fuzz\n\ndef _join_with_logical_operator(lst, operator):\n op = \") {operator} (\".format(operator=operator)\n return \"(({items}))\".format(items=op.join(lst))\n\nclass ChannelMetadataCacheViewSet(viewsets.ModelViewSet):\n serializer_class = serializers.ChannelMetadataCacheSerializer\n\n def get_queryset(self):\n return models.ChannelMetadataCache.objects.all()\n\n\nclass ContentNodeFilter(filters.FilterSet):\n search = filters.django_filters.MethodFilter(action='title_description_filter')\n recommendations_for = filters.django_filters.MethodFilter()\n next_steps = filters.django_filters.MethodFilter()\n popular = filters.django_filters.MethodFilter()\n resume = filters.django_filters.MethodFilter()\n kind = filters.django_filters.MethodFilter()\n ids = filters.django_filters.MethodFilter()\n\n class Meta:\n model = models.ContentNode\n fields = ['parent', 'search', 'prerequisite_for', 'has_prerequisite', 'related', 'recommendations_for', 'ids', 'content_id']\n\n def title_description_filter(self, queryset, value):\n \"\"\"\n search for title or description that contains the keywords that are not necessary in adjacent\n \"\"\"\n exact_match = 
queryset.filter(Q(parent__isnull=False), Q(title__icontains=value) | Q(description__icontains=value))\n if exact_match:\n return exact_match\n # if no exact match, fuzzy search using the stemmed_metaphone field in ContentNode that covers the title and description\n fuzzed_tokens = [fuzz(word) for word in value.split()]\n if not fuzzed_tokens[0]:\n return []\n token_queries = [reduce(lambda x, y: x | y, [Q(stemmed_metaphone__contains=token) for token in tokens]) for tokens in fuzzed_tokens]\n return queryset.filter(\n Q(parent__isnull=False),\n reduce(lambda x, y: x & y, token_queries))\n\n def filter_recommendations_for(self, queryset, value):\n \"\"\"\n Recommend items that are similar to this piece of content.\n \"\"\"\n recc_node = queryset.get(pk=value)\n descendants = recc_node.get_descendants(include_self=False).exclude(kind__in=['topic', ''])\n siblings = recc_node.get_siblings(include_self=False).exclude(kind__in=['topic', ''])\n data = descendants | siblings # concatenates different querysets\n return data\n\n def filter_next_steps(self, queryset, value):\n \"\"\"\n Recommend uncompleted content, content that has user completed content as a prerequisite.\n\n :param queryset: all content nodes for this channel\n :param value: id of currently logged in user, or none if user is anonymous\n :return: uncompleted content nodes, or empty queryset if user is anonymous\n \"\"\"\n\n # if user is anonymous, don't return any nodes\n if not value:\n return queryset.none()\n\n tables = [\n '\"{summarylog_table}\" AS \"complete_log\"',\n '\"{summarylog_table}\" AS \"incomplete_log\"',\n '\"{content_table}\" AS \"complete_node\"',\n '\"{content_table}\" AS \"incomplete_node\"',\n ]\n table_names = {\n \"summarylog_table\": ContentSummaryLog._meta.db_table,\n \"content_table\": models.ContentNode._meta.db_table,\n }\n # aliases for sql table names\n sql_tables_and_aliases = [table.format(**table_names) for table in tables]\n # where conditions joined by ANDs\n where_statements = [\"NOT (incomplete_log.progress < 1 AND incomplete_log.content_id = incomplete_node.content_id)\",\n \"complete_log.user_id = '{user_id}'\".format(user_id=value),\n \"incomplete_log.user_id = '{user_id}'\".format(user_id=value),\n \"complete_log.progress = 1\",\n \"complete_node.rght = incomplete_node.lft - 1\",\n \"complete_log.content_id = complete_node.content_id\"]\n # custom SQL query to get uncompleted content based on mptt algorithm\n next_steps_recommendations = \"SELECT incomplete_node.* FROM {tables} WHERE {where}\".format(\n tables=\", \".join(sql_tables_and_aliases),\n where=_join_with_logical_operator(where_statements, \"AND\")\n )\n return models.ContentNode.objects.raw(next_steps_recommendations)\n\n def filter_popular(self, queryset, value):\n \"\"\"\n Recommend content that is popular with all users.\n\n :param queryset: all content nodes for this channel\n :param value: id of currently logged in user, or none if user is anonymous\n :return: 10 most popular content nodes\n \"\"\"\n if ContentSessionLog.objects.count() < 50:\n # return 25 random content nodes if not enough session logs\n pks = queryset.values_list('pk', flat=True).exclude(kind__in=['topic', ''])\n count = min(pks.count(), 25)\n return queryset.filter(pk__in=sample(list(pks), count))\n\n cache_key = 'popular_for_{}'.format(get_active_content_database())\n if cache.get(cache_key):\n return cache.get(cache_key)\n\n # get the most accessed content nodes\n content_counts_sorted = ContentSessionLog.objects \\\n 
.filter(channel_id=get_active_content_database()) \\\n .values_list('content_id', flat=True) \\\n .annotate(Count('content_id')) \\\n .order_by('-content_id__count')\n\n most_popular = queryset.filter(content_id__in=list(content_counts_sorted[:10]))\n\n # cache the popular results queryset for 10 minutes, for efficiency\n cache.set(cache_key, most_popular, 60 * 10)\n return most_popular\n\n def filter_resume(self, queryset, value):\n \"\"\"\n Recommend content that the user has recently engaged with, but not finished.\n\n :param queryset: all content nodes for this channel\n :param value: id of currently logged in user, or none if user is anonymous\n :return: 10 most recently viewed content nodes\n \"\"\"\n\n # if user is anonymous, return no nodes\n if not value:\n return queryset.none()\n\n # get the most recently viewed, but not finished, content nodes\n content_ids = ContentSummaryLog.objects \\\n .filter(user=value, channel_id=get_active_content_database()) \\\n .exclude(progress=1) \\\n .order_by('end_timestamp') \\\n .values_list('content_id', flat=True) \\\n .distinct()\n\n resume = queryset.filter(content_id__in=list(content_ids[:10]))\n\n return resume\n\n def filter_kind(self, queryset, value):\n \"\"\"\n Show only content of a given kind.\n\n :param queryset: all content nodes for this channel\n :param value: 'content' for everything except topics, or one of the content kind constants\n :return: content nodes of the given kind\n \"\"\"\n if value == 'content':\n return queryset.exclude(kind=content_kinds.TOPIC).order_by(\"lft\")\n return queryset.filter(kind=value).order_by(\"lft\")\n\n def filter_ids(self, queryset, value):\n return queryset.filter(pk__in=value.split(','))\n\n\nclass OptionalPageNumberPagination(pagination.PageNumberPagination):\n \"\"\"\n Pagination class that allows for page number-style pagination, when requested.\n To activate, the `page_size` argument must be set. 
For example, to request the first 20 records:\n `?page_size=20&page=1`\n \"\"\"\n page_size = None\n page_size_query_param = \"page_size\"\n\n\nclass ContentNodeViewset(viewsets.ModelViewSet):\n serializer_class = serializers.ContentNodeSerializer\n filter_backends = (filters.DjangoFilterBackend,)\n filter_class = ContentNodeFilter\n pagination_class = OptionalPageNumberPagination\n\n def get_queryset(self):\n return models.ContentNode.objects.all().select_related(\n 'parent',\n 'license',\n ).prefetch_related(\n 'assessmentmetadata',\n 'files',\n )\n\n @detail_route(methods=['get'])\n def descendants(self, request, **kwargs):\n node = self.get_object()\n kind = self.request.query_params.get('descendant_kind', None)\n descendants = node.get_descendants()\n if kind:\n descendants = descendants.filter(kind=kind)\n\n serializer = self.get_serializer(descendants, many=True)\n return Response(serializer.data)\n\n @detail_route(methods=['get'])\n def ancestors(self, request, **kwargs):\n return Response(self.get_object().get_ancestors().values('pk', 'title'))\n\n @detail_route(methods=['get'])\n def next_content(self, request, **kwargs):\n # retrieve the \"next\" content node, according to depth-first tree traversal\n this_item = self.get_object()\n next_item = models.ContentNode.objects.filter(tree_id=this_item.tree_id, lft__gt=this_item.rght).order_by(\"lft\").first()\n if not next_item:\n next_item = this_item.get_root()\n return Response({'kind': next_item.kind, 'id': next_item.id, 'title': next_item.title})\n\n\nclass FileViewset(viewsets.ModelViewSet):\n serializer_class = serializers.FileSerializer\n pagination_class = OptionalPageNumberPagination\n\n def get_queryset(self):\n return models.File.objects.all()\n", "path": "kolibri/content/api.py"}]}
3,286
151
gh_patches_debug_8575
rasdani/github-patches
git_diff
yt-dlp__yt-dlp-4447
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [HiDive]: Login fails when both cookies and netrc are given ### Checklist - [X] I'm reporting a broken site - [X] I've verified that I'm running yt-dlp version **2022.03.08.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit) - [X] I've checked that all provided URLs are alive and playable in a browser - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region United States ### Description When I try to download from HIDIVE without a `cookies.txt` it uses my netrc credentials and successfully downloads. Once I run the command again using the `cookies.txt` that `yt-dlp` dumps from the successful download, I get the error message shown in the logs. I get the same error message when I use a `cookies.txt` dumped from my browser. Funimation and Crunchyroll both work with the `cookies.txt` dumped from my browser. The issue persists when trying from different countries using a vpn. Also, I install and update `yt-dlp` with `pip3 install --force-reinstall -I https://github.com/yt-dlp/yt-dlp/archive/master.zip` ### Verbose log ```shell [debug] Command-line config: ['-vU', '-N', '32', '--embed-subs', '--remux-video', 'mkv', '-a', 'batch_file', '-o', 'S%(season_number)02dE%(episode_number)02d.%(ext)s'] [debug] User config "/home/{user}/.config/yt-dlp/config": ['--netrc', '--cookies', '~/.config/yt-dlp/cookies.txt'] [debug] Batch file urls: ['https://www.hidive.com/stream/made-in-abyss/s01e001'] [debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8 [debug] yt-dlp version 2022.03.08.1 [c0c2c57d3] [debug] Lazy loading extractors is disabled [debug] Python version 3.8.10 (CPython 64bit) - Linux-5.4.0-100-generic-x86_64-with-glibc2.29 [debug] Checking exe version: ffmpeg -bsfs [debug] Checking exe version: ffprobe -bsfs [debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4, phantomjs 2.1.1, rtmpdump 2.4 [debug] Optional libraries: brotli, certifi, Cryptodome, secretstorage, mutagen, sqlite, websockets [debug] Proxy map: {} Latest version: 2022.03.08.1, Current version: 2022.03.08.1 yt-dlp is up to date (2022.03.08.1) [HiDive] None: Downloading webpage ERROR: [HiDive] made-in-abyss/s01e001: Unable to extract login form; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U File "/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 639, in extract self.initialize() File "/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 544, in initialize self._perform_login(username, password) File "/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/hidive.py", line 40, in _perform_login form = self._search_regex( File "/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 1229, in _search_regex raise RegexNotFoundError('Unable to extract %s' % _name) ``` [HiDive]: Login fails when both cookies and netrc are given ### Checklist - [X] I'm reporting a broken site - [X] I've verified that I'm running yt-dlp version **2022.03.08.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit) - [X] I've checked that all provided URLs are alive and playable in a browser - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region United States ### Description When I try to download from HIDIVE without a `cookies.txt` it uses my netrc credentials and successfully downloads. Once I run the command again using the `cookies.txt` that `yt-dlp` dumps from the successful download, I get the error message shown in the logs. I get the same error message when I use a `cookies.txt` dumped from my browser. Funimation and Crunchyroll both work with the `cookies.txt` dumped from my browser. The issue persists when trying from different countries using a vpn. 
Also, I install and update `yt-dlp` with `pip3 install --force-reinstall -I https://github.com/yt-dlp/yt-dlp/archive/master.zip` ### Verbose log ```shell [debug] Command-line config: ['-vU', '-N', '32', '--embed-subs', '--remux-video', 'mkv', '-a', 'batch_file', '-o', 'S%(season_number)02dE%(episode_number)02d.%(ext)s'] [debug] User config "/home/{user}/.config/yt-dlp/config": ['--netrc', '--cookies', '~/.config/yt-dlp/cookies.txt'] [debug] Batch file urls: ['https://www.hidive.com/stream/made-in-abyss/s01e001'] [debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8 [debug] yt-dlp version 2022.03.08.1 [c0c2c57d3] [debug] Lazy loading extractors is disabled [debug] Python version 3.8.10 (CPython 64bit) - Linux-5.4.0-100-generic-x86_64-with-glibc2.29 [debug] Checking exe version: ffmpeg -bsfs [debug] Checking exe version: ffprobe -bsfs [debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4, phantomjs 2.1.1, rtmpdump 2.4 [debug] Optional libraries: brotli, certifi, Cryptodome, secretstorage, mutagen, sqlite, websockets [debug] Proxy map: {} Latest version: 2022.03.08.1, Current version: 2022.03.08.1 yt-dlp is up to date (2022.03.08.1) [HiDive] None: Downloading webpage ERROR: [HiDive] made-in-abyss/s01e001: Unable to extract login form; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U File "/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 639, in extract self.initialize() File "/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 544, in initialize self._perform_login(username, password) File "/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/hidive.py", line 40, in _perform_login form = self._search_regex( File "/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 1229, in _search_regex raise RegexNotFoundError('Unable to extract %s' % _name) ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `yt_dlp/extractor/hidive.py` Content: ``` 1 import re 2 3 from .common import InfoExtractor 4 from ..utils import ( 5 ExtractorError, 6 int_or_none, 7 try_get, 8 url_or_none, 9 urlencode_postdata, 10 ) 11 12 13 class HiDiveIE(InfoExtractor): 14 _VALID_URL = r'https?://(?:www\.)?hidive\.com/stream/(?P<id>(?P<title>[^/]+)/(?P<key>[^/?#&]+))' 15 # Using X-Forwarded-For results in 403 HTTP error for HLS fragments, 16 # so disabling geo bypass completely 17 _GEO_BYPASS = False 18 _NETRC_MACHINE = 'hidive' 19 _LOGIN_URL = 'https://www.hidive.com/account/login' 20 21 _TESTS = [{ 22 'url': 'https://www.hidive.com/stream/the-comic-artist-and-his-assistants/s01e001', 23 'info_dict': { 24 'id': 'the-comic-artist-and-his-assistants/s01e001', 25 'ext': 'mp4', 26 'title': 'the-comic-artist-and-his-assistants/s01e001', 27 'series': 'the-comic-artist-and-his-assistants', 28 'season_number': 1, 29 'episode_number': 1, 30 }, 31 'params': { 32 'skip_download': True, 33 }, 34 'skip': 'Requires Authentication', 35 }] 36 37 def _perform_login(self, username, password): 38 webpage = self._download_webpage(self._LOGIN_URL, None) 39 form = self._search_regex( 40 r'(?s)<form[^>]+action="/account/login"[^>]*>(.+?)</form>', 41 webpage, 'login form') 42 data = self._hidden_inputs(form) 43 data.update({ 44 'Email': username, 45 'Password': password, 46 }) 47 self._download_webpage( 48 self._LOGIN_URL, None, 'Logging in', data=urlencode_postdata(data)) 49 50 def _call_api(self, video_id, title, key, data={}, **kwargs): 51 data = { 52 **data, 53 'Title': title, 54 'Key': key, 55 'PlayerId': 'f4f895ce1ca713ba263b91caeb1daa2d08904783', 56 } 57 return self._download_json( 58 'https://www.hidive.com/play/settings', video_id, 59 data=urlencode_postdata(data), **kwargs) or {} 60 61 def _extract_subtitles_from_rendition(self, rendition, subtitles, parsed_urls): 62 for cc_file in rendition.get('ccFiles', []): 63 cc_url = url_or_none(try_get(cc_file, lambda x: x[2])) 64 # name is used since we cant distinguish subs with same language code 65 cc_lang = try_get(cc_file, (lambda x: x[1].replace(' ', '-').lower(), lambda x: x[0]), str) 66 if cc_url not in parsed_urls and cc_lang: 67 parsed_urls.add(cc_url) 68 subtitles.setdefault(cc_lang, []).append({'url': cc_url}) 69 70 def _get_subtitles(self, url, video_id, title, key, parsed_urls): 71 webpage = self._download_webpage(url, video_id, fatal=False) or '' 72 subtitles = {} 73 for caption in set(re.findall(r'data-captions=\"([^\"]+)\"', webpage)): 74 renditions = self._call_api( 75 video_id, title, key, {'Captions': caption}, fatal=False, 76 note=f'Downloading {caption} subtitle information').get('renditions') or {} 77 for rendition_id, rendition in renditions.items(): 78 self._extract_subtitles_from_rendition(rendition, subtitles, parsed_urls) 79 return subtitles 80 81 def _real_extract(self, url): 82 video_id, title, key = self._match_valid_url(url).group('id', 'title', 'key') 83 settings = self._call_api(video_id, title, key) 84 85 restriction = settings.get('restrictionReason') 86 if restriction == 'RegionRestricted': 87 self.raise_geo_restricted() 88 if restriction and restriction != 'None': 89 raise ExtractorError( 90 '%s said: %s' % (self.IE_NAME, restriction), expected=True) 91 92 formats, parsed_urls = [], {None} 93 for rendition_id, rendition in settings['renditions'].items(): 94 audio, version, extra = rendition_id.split('_') 95 m3u8_url = url_or_none(try_get(rendition, lambda x: x['bitrates']['hls'])) 96 if m3u8_url not in parsed_urls: 97 
parsed_urls.add(m3u8_url) 98 frmt = self._extract_m3u8_formats( 99 m3u8_url, video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id=rendition_id, fatal=False) 100 for f in frmt: 101 f['language'] = audio 102 f['format_note'] = f'{version}, {extra}' 103 formats.extend(frmt) 104 self._sort_formats(formats) 105 106 return { 107 'id': video_id, 108 'title': video_id, 109 'subtitles': self.extract_subtitles(url, video_id, title, key, parsed_urls), 110 'formats': formats, 111 'series': title, 112 'season_number': int_or_none( 113 self._search_regex(r's(\d+)', key, 'season number', default=None)), 114 'episode_number': int_or_none( 115 self._search_regex(r'e(\d+)', key, 'episode number', default=None)), 116 'http_headers': {'Referer': url} 117 } 118 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/yt_dlp/extractor/hidive.py b/yt_dlp/extractor/hidive.py --- a/yt_dlp/extractor/hidive.py +++ b/yt_dlp/extractor/hidive.py @@ -38,7 +38,9 @@ webpage = self._download_webpage(self._LOGIN_URL, None) form = self._search_regex( r'(?s)<form[^>]+action="/account/login"[^>]*>(.+?)</form>', - webpage, 'login form') + webpage, 'login form', default=None) + if not form: # logged in + return data = self._hidden_inputs(form) data.update({ 'Email': username,
{"golden_diff": "diff --git a/yt_dlp/extractor/hidive.py b/yt_dlp/extractor/hidive.py\n--- a/yt_dlp/extractor/hidive.py\n+++ b/yt_dlp/extractor/hidive.py\n@@ -38,7 +38,9 @@\n webpage = self._download_webpage(self._LOGIN_URL, None)\n form = self._search_regex(\n r'(?s)<form[^>]+action=\"/account/login\"[^>]*>(.+?)</form>',\n- webpage, 'login form')\n+ webpage, 'login form', default=None)\n+ if not form: # logged in\n+ return\n data = self._hidden_inputs(form)\n data.update({\n 'Email': username,\n", "issue": "[HiDive]: Login fails when both cookies and netrc are given\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nUnited States\n\n### Description\n\nWhen I try to download from HIDIVE without a `cookies.txt` it uses my netrc credentials and successfully downloads. Once I run the command again using the `cookies.txt` that `yt-dlp` dumps from the successful download, I get the error message shown in the logs. I get the same error message when I use a `cookies.txt` dumped from my browser. 
Funimation and Crunchyroll both work with the `cookies.txt` dumped from my browser.\r\n\r\nThe issue persists when trying from different countries using a vpn.\r\n\r\nAlso, I install and update `yt-dlp` with `pip3 install --force-reinstall -I https://github.com/yt-dlp/yt-dlp/archive/master.zip`\n\n### Verbose log\n\n```shell\n[debug] Command-line config: ['-vU', '-N', '32', '--embed-subs', '--remux-video', 'mkv', '-a', 'batch_file', '-o', 'S%(season_number)02dE%(episode_number)02d.%(ext)s']\r\n[debug] User config \"/home/{user}/.config/yt-dlp/config\": ['--netrc', '--cookies', '~/.config/yt-dlp/cookies.txt']\r\n[debug] Batch file urls: ['https://www.hidive.com/stream/made-in-abyss/s01e001']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2022.03.08.1 [c0c2c57d3]\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Python version 3.8.10 (CPython 64bit) - Linux-5.4.0-100-generic-x86_64-with-glibc2.29\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4, phantomjs 2.1.1, rtmpdump 2.4\r\n[debug] Optional libraries: brotli, certifi, Cryptodome, secretstorage, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\nLatest version: 2022.03.08.1, Current version: 2022.03.08.1\r\nyt-dlp is up to date (2022.03.08.1)\r\n[HiDive] None: Downloading webpage\r\nERROR: [HiDive] made-in-abyss/s01e001: Unable to extract login form; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U\r\n File \"/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 639, in extract\r\n self.initialize()\r\n File \"/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 544, in initialize\r\n self._perform_login(username, password)\r\n File \"/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/hidive.py\", line 40, in _perform_login\r\n form = self._search_regex(\r\n File \"/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 1229, in _search_regex\r\n raise RegexNotFoundError('Unable to extract %s' % _name)\n```\n\n[HiDive]: Login fails when both cookies and netrc are given\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nUnited States\n\n### Description\n\nWhen I try to download from HIDIVE without a `cookies.txt` it uses my netrc credentials and successfully downloads. Once I run the command again using the `cookies.txt` that `yt-dlp` dumps from the successful download, I get the error message shown in the logs. I get the same error message when I use a `cookies.txt` dumped from my browser. Funimation and Crunchyroll both work with the `cookies.txt` dumped from my browser.\r\n\r\nThe issue persists when trying from different countries using a vpn.\r\n\r\nAlso, I install and update `yt-dlp` with `pip3 install --force-reinstall -I https://github.com/yt-dlp/yt-dlp/archive/master.zip`\n\n### Verbose log\n\n```shell\n[debug] Command-line config: ['-vU', '-N', '32', '--embed-subs', '--remux-video', 'mkv', '-a', 'batch_file', '-o', 'S%(season_number)02dE%(episode_number)02d.%(ext)s']\r\n[debug] User config \"/home/{user}/.config/yt-dlp/config\": ['--netrc', '--cookies', '~/.config/yt-dlp/cookies.txt']\r\n[debug] Batch file urls: ['https://www.hidive.com/stream/made-in-abyss/s01e001']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2022.03.08.1 [c0c2c57d3]\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Python version 3.8.10 (CPython 64bit) - Linux-5.4.0-100-generic-x86_64-with-glibc2.29\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4, phantomjs 2.1.1, rtmpdump 2.4\r\n[debug] Optional libraries: brotli, certifi, Cryptodome, secretstorage, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\nLatest version: 2022.03.08.1, Current version: 2022.03.08.1\r\nyt-dlp is up to date (2022.03.08.1)\r\n[HiDive] None: Downloading webpage\r\nERROR: [HiDive] made-in-abyss/s01e001: Unable to extract login form; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\n File \"/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 639, in extract\r\n self.initialize()\r\n File \"/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 544, in initialize\r\n self._perform_login(username, password)\r\n File \"/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/hidive.py\", line 40, in _perform_login\r\n form = self._search_regex(\r\n File \"/home/{user}/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 1229, in _search_regex\r\n raise RegexNotFoundError('Unable to extract %s' % _name)\n```\n\n", "before_files": [{"content": "import re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n ExtractorError,\n int_or_none,\n try_get,\n url_or_none,\n urlencode_postdata,\n)\n\n\nclass HiDiveIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?hidive\\.com/stream/(?P<id>(?P<title>[^/]+)/(?P<key>[^/?#&]+))'\n # Using X-Forwarded-For results in 403 HTTP error for HLS fragments,\n # so disabling geo bypass completely\n _GEO_BYPASS = False\n _NETRC_MACHINE = 'hidive'\n _LOGIN_URL = 'https://www.hidive.com/account/login'\n\n _TESTS = [{\n 'url': 'https://www.hidive.com/stream/the-comic-artist-and-his-assistants/s01e001',\n 'info_dict': {\n 'id': 'the-comic-artist-and-his-assistants/s01e001',\n 'ext': 'mp4',\n 'title': 'the-comic-artist-and-his-assistants/s01e001',\n 'series': 'the-comic-artist-and-his-assistants',\n 'season_number': 1,\n 'episode_number': 1,\n },\n 'params': {\n 'skip_download': True,\n },\n 'skip': 'Requires Authentication',\n }]\n\n def _perform_login(self, username, password):\n webpage = self._download_webpage(self._LOGIN_URL, None)\n form = self._search_regex(\n r'(?s)<form[^>]+action=\"/account/login\"[^>]*>(.+?)</form>',\n webpage, 'login form')\n data = self._hidden_inputs(form)\n data.update({\n 'Email': username,\n 'Password': password,\n })\n self._download_webpage(\n self._LOGIN_URL, None, 'Logging in', data=urlencode_postdata(data))\n\n def _call_api(self, video_id, title, key, data={}, **kwargs):\n data = {\n **data,\n 'Title': title,\n 'Key': key,\n 'PlayerId': 'f4f895ce1ca713ba263b91caeb1daa2d08904783',\n }\n return self._download_json(\n 'https://www.hidive.com/play/settings', video_id,\n data=urlencode_postdata(data), **kwargs) or {}\n\n def _extract_subtitles_from_rendition(self, rendition, subtitles, parsed_urls):\n for cc_file in rendition.get('ccFiles', []):\n cc_url = url_or_none(try_get(cc_file, lambda x: x[2]))\n # name is used since we cant distinguish subs with same language code\n cc_lang = try_get(cc_file, (lambda x: x[1].replace(' ', '-').lower(), lambda x: x[0]), str)\n if cc_url not in parsed_urls and cc_lang:\n parsed_urls.add(cc_url)\n subtitles.setdefault(cc_lang, []).append({'url': cc_url})\n\n def _get_subtitles(self, url, video_id, title, key, parsed_urls):\n webpage = self._download_webpage(url, video_id, fatal=False) or ''\n subtitles = {}\n for caption in set(re.findall(r'data-captions=\\\"([^\\\"]+)\\\"', webpage)):\n renditions = self._call_api(\n video_id, title, key, {'Captions': caption}, fatal=False,\n note=f'Downloading {caption} subtitle information').get('renditions') or {}\n for rendition_id, rendition in renditions.items():\n self._extract_subtitles_from_rendition(rendition, subtitles, parsed_urls)\n return subtitles\n\n def _real_extract(self, url):\n video_id, title, key = self._match_valid_url(url).group('id', 'title', 'key')\n settings = 
self._call_api(video_id, title, key)\n\n restriction = settings.get('restrictionReason')\n if restriction == 'RegionRestricted':\n self.raise_geo_restricted()\n if restriction and restriction != 'None':\n raise ExtractorError(\n '%s said: %s' % (self.IE_NAME, restriction), expected=True)\n\n formats, parsed_urls = [], {None}\n for rendition_id, rendition in settings['renditions'].items():\n audio, version, extra = rendition_id.split('_')\n m3u8_url = url_or_none(try_get(rendition, lambda x: x['bitrates']['hls']))\n if m3u8_url not in parsed_urls:\n parsed_urls.add(m3u8_url)\n frmt = self._extract_m3u8_formats(\n m3u8_url, video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id=rendition_id, fatal=False)\n for f in frmt:\n f['language'] = audio\n f['format_note'] = f'{version}, {extra}'\n formats.extend(frmt)\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': video_id,\n 'subtitles': self.extract_subtitles(url, video_id, title, key, parsed_urls),\n 'formats': formats,\n 'series': title,\n 'season_number': int_or_none(\n self._search_regex(r's(\\d+)', key, 'season number', default=None)),\n 'episode_number': int_or_none(\n self._search_regex(r'e(\\d+)', key, 'episode number', default=None)),\n 'http_headers': {'Referer': url}\n }\n", "path": "yt_dlp/extractor/hidive.py"}], "after_files": [{"content": "import re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n ExtractorError,\n int_or_none,\n try_get,\n url_or_none,\n urlencode_postdata,\n)\n\n\nclass HiDiveIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?hidive\\.com/stream/(?P<id>(?P<title>[^/]+)/(?P<key>[^/?#&]+))'\n # Using X-Forwarded-For results in 403 HTTP error for HLS fragments,\n # so disabling geo bypass completely\n _GEO_BYPASS = False\n _NETRC_MACHINE = 'hidive'\n _LOGIN_URL = 'https://www.hidive.com/account/login'\n\n _TESTS = [{\n 'url': 'https://www.hidive.com/stream/the-comic-artist-and-his-assistants/s01e001',\n 'info_dict': {\n 'id': 'the-comic-artist-and-his-assistants/s01e001',\n 'ext': 'mp4',\n 'title': 'the-comic-artist-and-his-assistants/s01e001',\n 'series': 'the-comic-artist-and-his-assistants',\n 'season_number': 1,\n 'episode_number': 1,\n },\n 'params': {\n 'skip_download': True,\n },\n 'skip': 'Requires Authentication',\n }]\n\n def _perform_login(self, username, password):\n webpage = self._download_webpage(self._LOGIN_URL, None)\n form = self._search_regex(\n r'(?s)<form[^>]+action=\"/account/login\"[^>]*>(.+?)</form>',\n webpage, 'login form', default=None)\n if not form: # logged in\n return\n data = self._hidden_inputs(form)\n data.update({\n 'Email': username,\n 'Password': password,\n })\n self._download_webpage(\n self._LOGIN_URL, None, 'Logging in', data=urlencode_postdata(data))\n\n def _call_api(self, video_id, title, key, data={}, **kwargs):\n data = {\n **data,\n 'Title': title,\n 'Key': key,\n 'PlayerId': 'f4f895ce1ca713ba263b91caeb1daa2d08904783',\n }\n return self._download_json(\n 'https://www.hidive.com/play/settings', video_id,\n data=urlencode_postdata(data), **kwargs) or {}\n\n def _extract_subtitles_from_rendition(self, rendition, subtitles, parsed_urls):\n for cc_file in rendition.get('ccFiles', []):\n cc_url = url_or_none(try_get(cc_file, lambda x: x[2]))\n # name is used since we cant distinguish subs with same language code\n cc_lang = try_get(cc_file, (lambda x: x[1].replace(' ', '-').lower(), lambda x: x[0]), str)\n if cc_url not in parsed_urls and cc_lang:\n parsed_urls.add(cc_url)\n subtitles.setdefault(cc_lang, []).append({'url': cc_url})\n\n 
def _get_subtitles(self, url, video_id, title, key, parsed_urls):\n webpage = self._download_webpage(url, video_id, fatal=False) or ''\n subtitles = {}\n for caption in set(re.findall(r'data-captions=\\\"([^\\\"]+)\\\"', webpage)):\n renditions = self._call_api(\n video_id, title, key, {'Captions': caption}, fatal=False,\n note=f'Downloading {caption} subtitle information').get('renditions') or {}\n for rendition_id, rendition in renditions.items():\n self._extract_subtitles_from_rendition(rendition, subtitles, parsed_urls)\n return subtitles\n\n def _real_extract(self, url):\n video_id, title, key = self._match_valid_url(url).group('id', 'title', 'key')\n settings = self._call_api(video_id, title, key)\n\n restriction = settings.get('restrictionReason')\n if restriction == 'RegionRestricted':\n self.raise_geo_restricted()\n if restriction and restriction != 'None':\n raise ExtractorError(\n '%s said: %s' % (self.IE_NAME, restriction), expected=True)\n\n formats, parsed_urls = [], {None}\n for rendition_id, rendition in settings['renditions'].items():\n audio, version, extra = rendition_id.split('_')\n m3u8_url = url_or_none(try_get(rendition, lambda x: x['bitrates']['hls']))\n if m3u8_url not in parsed_urls:\n parsed_urls.add(m3u8_url)\n frmt = self._extract_m3u8_formats(\n m3u8_url, video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id=rendition_id, fatal=False)\n for f in frmt:\n f['language'] = audio\n f['format_note'] = f'{version}, {extra}'\n formats.extend(frmt)\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': video_id,\n 'subtitles': self.extract_subtitles(url, video_id, title, key, parsed_urls),\n 'formats': formats,\n 'series': title,\n 'season_number': int_or_none(\n self._search_regex(r's(\\d+)', key, 'season number', default=None)),\n 'episode_number': int_or_none(\n self._search_regex(r'e(\\d+)', key, 'episode number', default=None)),\n 'http_headers': {'Referer': url}\n }\n", "path": "yt_dlp/extractor/hidive.py"}]}
4,054
168
gh_patches_debug_18220
rasdani/github-patches
git_diff
lra__mackup-1359
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Critical error! Using .mackup as directory option deletes application configs I used the following config as `~/.mackup.cfg` ``` [storage] engine = file_system path = /Users/myuser directory = .mackup ``` And running `mackup backup` deleted all my application configs because [this step](https://github.com/lra/mackup/blob/master/mackup/application.py#L114-L116) doesn't check if `home_filepath` is equal to `mackup_filepath` therefore copying files to `.mackup` then deleting `.mackup`... Thanks a lot :( --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `mackup/config.py` Content: ``` 1 """Package used to manage the .mackup.cfg config file.""" 2 3 import os 4 import os.path 5 6 from .constants import (MACKUP_BACKUP_PATH, 7 MACKUP_CONFIG_FILE, 8 ENGINE_DROPBOX, 9 ENGINE_GDRIVE, 10 ENGINE_COPY, 11 ENGINE_ICLOUD, 12 ENGINE_BOX, 13 ENGINE_FS) 14 15 from .utils import (error, 16 get_dropbox_folder_location, 17 get_copy_folder_location, 18 get_google_drive_folder_location, 19 get_icloud_folder_location, 20 get_box_folder_location) 21 try: 22 import configparser 23 except ImportError: 24 import ConfigParser as configparser 25 26 27 class Config(object): 28 29 """The Mackup Config class.""" 30 31 def __init__(self, filename=None): 32 """ 33 Create a Config instance. 34 35 Args: 36 filename (str): Optional filename of the config file. If empty, 37 defaults to MACKUP_CONFIG_FILE 38 """ 39 assert isinstance(filename, str) or filename is None 40 41 # Initialize the parser 42 self._parser = self._setup_parser(filename) 43 44 # Do we have an old config file ? 45 self._warn_on_old_config() 46 47 # Get the storage engine 48 self._engine = self._parse_engine() 49 50 # Get the path where the Mackup folder is 51 self._path = self._parse_path() 52 53 # Get the directory replacing 'Mackup', if any 54 self._directory = self._parse_directory() 55 56 # Get the list of apps to ignore 57 self._apps_to_ignore = self._parse_apps_to_ignore() 58 59 # Get the list of apps to allow 60 self._apps_to_sync = self._parse_apps_to_sync() 61 62 @property 63 def engine(self): 64 """ 65 The engine used by the storage. 66 67 ENGINE_DROPBOX, ENGINE_GDRIVE, ENGINE_COPY, ENGINE_ICLOUD, ENGINE_BOX 68 or ENGINE_FS. 69 70 Returns: 71 str 72 """ 73 return str(self._engine) 74 75 @property 76 def path(self): 77 """ 78 Path to the Mackup configuration files. 79 80 The path to the directory where Mackup is gonna create and store his 81 directory. 82 83 Returns: 84 str 85 """ 86 return str(self._path) 87 88 @property 89 def directory(self): 90 """ 91 The name of the Mackup directory, named Mackup by default. 92 93 Returns: 94 str 95 """ 96 return str(self._directory) 97 98 @property 99 def fullpath(self): 100 """ 101 Full path to the Mackup configuration files. 102 103 The full path to the directory when Mackup is storing the configuration 104 files. 105 106 Returns: 107 str 108 """ 109 return str(os.path.join(self.path, self.directory)) 110 111 @property 112 def apps_to_ignore(self): 113 """ 114 Get the list of applications ignored in the config file. 115 116 Returns: 117 set. Set of application names to ignore, lowercase 118 """ 119 return set(self._apps_to_ignore) 120 121 @property 122 def apps_to_sync(self): 123 """ 124 Get the list of applications allowed in the config file. 125 126 Returns: 127 set. 
Set of application names to allow, lowercase 128 """ 129 return set(self._apps_to_sync) 130 131 def _setup_parser(self, filename=None): 132 """ 133 Configure the ConfigParser instance the way we want it. 134 135 Args: 136 filename (str) or None 137 138 Returns: 139 SafeConfigParser 140 """ 141 assert isinstance(filename, str) or filename is None 142 143 # If we are not overriding the config filename 144 if not filename: 145 filename = MACKUP_CONFIG_FILE 146 147 parser = configparser.SafeConfigParser(allow_no_value=True) 148 parser.read(os.path.join(os.path.join(os.environ['HOME'], filename))) 149 150 return parser 151 152 def _warn_on_old_config(self): 153 """Warn the user if an old config format is detected.""" 154 # Is an old setion is in the config file ? 155 old_sections = ['Allowed Applications', 'Ignored Applications'] 156 for old_section in old_sections: 157 if self._parser.has_section(old_section): 158 error("Old config file detected. Aborting.\n" 159 "\n" 160 "An old section (e.g. [Allowed Applications]" 161 " or [Ignored Applications] has been detected" 162 " in your {} file.\n" 163 "I'd rather do nothing than do something you" 164 " do not want me to do.\n" 165 "\n" 166 "Please read the up to date documentation on" 167 " <https://github.com/lra/mackup> and migrate" 168 " your configuration file." 169 .format(MACKUP_CONFIG_FILE)) 170 171 def _parse_engine(self): 172 """ 173 Parse the storage engine in the config. 174 175 Returns: 176 str 177 """ 178 if self._parser.has_option('storage', 'engine'): 179 engine = str(self._parser.get('storage', 'engine')) 180 else: 181 engine = ENGINE_DROPBOX 182 183 assert isinstance(engine, str) 184 185 if engine not in [ENGINE_DROPBOX, 186 ENGINE_GDRIVE, 187 ENGINE_COPY, 188 ENGINE_ICLOUD, 189 ENGINE_BOX, 190 ENGINE_FS]: 191 raise ConfigError('Unknown storage engine: {}'.format(engine)) 192 193 return str(engine) 194 195 def _parse_path(self): 196 """ 197 Parse the storage path in the config. 198 199 Returns: 200 str 201 """ 202 if self.engine == ENGINE_DROPBOX: 203 path = get_dropbox_folder_location() 204 elif self.engine == ENGINE_GDRIVE: 205 path = get_google_drive_folder_location() 206 elif self.engine == ENGINE_COPY: 207 path = get_copy_folder_location() 208 elif self.engine == ENGINE_ICLOUD: 209 path = get_icloud_folder_location() 210 elif self.engine == ENGINE_BOX: 211 path = get_box_folder_location() 212 elif self.engine == ENGINE_FS: 213 if self._parser.has_option('storage', 'path'): 214 cfg_path = self._parser.get('storage', 'path') 215 path = os.path.join(os.environ['HOME'], cfg_path) 216 else: 217 raise ConfigError("The required 'path' can't be found while" 218 " the 'file_system' engine is used.") 219 220 return str(path) 221 222 def _parse_directory(self): 223 """ 224 Parse the storage directory in the config. 225 226 Returns: 227 str 228 """ 229 if self._parser.has_option('storage', 'directory'): 230 directory = self._parser.get('storage', 'directory') 231 else: 232 directory = MACKUP_BACKUP_PATH 233 234 return str(directory) 235 236 def _parse_apps_to_ignore(self): 237 """ 238 Parse the applications to ignore in the config. 239 240 Returns: 241 set 242 """ 243 # We ignore nothing by default 244 apps_to_ignore = set() 245 246 # Is the "[applications_to_ignore]" in the cfg file ? 
247 section_title = 'applications_to_ignore' 248 if self._parser.has_section(section_title): 249 apps_to_ignore = set(self._parser.options(section_title)) 250 251 return apps_to_ignore 252 253 def _parse_apps_to_sync(self): 254 """ 255 Parse the applications to backup in the config. 256 257 Returns: 258 set 259 """ 260 # We allow nothing by default 261 apps_to_sync = set() 262 263 # Is the "[applications_to_sync]" section in the cfg file ? 264 section_title = 'applications_to_sync' 265 if self._parser.has_section(section_title): 266 apps_to_sync = set(self._parser.options(section_title)) 267 268 return apps_to_sync 269 270 271 class ConfigError(Exception): 272 273 """Exception used for handle errors in the configuration.""" 274 275 pass 276 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/mackup/config.py b/mackup/config.py --- a/mackup/config.py +++ b/mackup/config.py @@ -3,7 +3,8 @@ import os import os.path -from .constants import (MACKUP_BACKUP_PATH, +from .constants import (CUSTOM_APPS_DIR, + MACKUP_BACKUP_PATH, MACKUP_CONFIG_FILE, ENGINE_DROPBOX, ENGINE_GDRIVE, @@ -228,6 +229,10 @@ """ if self._parser.has_option('storage', 'directory'): directory = self._parser.get('storage', 'directory') + # Don't allow CUSTOM_APPS_DIR as a storage directory + if directory == CUSTOM_APPS_DIR: + raise ConfigError("{} cannot be used as a storage directory." + .format(CUSTOM_APPS_DIR)) else: directory = MACKUP_BACKUP_PATH
{"golden_diff": "diff --git a/mackup/config.py b/mackup/config.py\n--- a/mackup/config.py\n+++ b/mackup/config.py\n@@ -3,7 +3,8 @@\n import os\n import os.path\n \n-from .constants import (MACKUP_BACKUP_PATH,\n+from .constants import (CUSTOM_APPS_DIR,\n+ MACKUP_BACKUP_PATH,\n MACKUP_CONFIG_FILE,\n ENGINE_DROPBOX,\n ENGINE_GDRIVE,\n@@ -228,6 +229,10 @@\n \"\"\"\n if self._parser.has_option('storage', 'directory'):\n directory = self._parser.get('storage', 'directory')\n+ # Don't allow CUSTOM_APPS_DIR as a storage directory\n+ if directory == CUSTOM_APPS_DIR:\n+ raise ConfigError(\"{} cannot be used as a storage directory.\"\n+ .format(CUSTOM_APPS_DIR))\n else:\n directory = MACKUP_BACKUP_PATH\n", "issue": "Critical error! Using .mackup as directory option deletes application configs\nI used the following config as `~/.mackup.cfg` \r\n```\r\n[storage]\r\nengine = file_system\r\npath = /Users/myuser\r\ndirectory = .mackup\r\n```\r\n\r\nAnd running `mackup backup` deleted all my application configs because [this step](https://github.com/lra/mackup/blob/master/mackup/application.py#L114-L116) doesn't check if `home_filepath` is equal to `mackup_filepath` therefore copying files to `.mackup` then deleting `.mackup`...\r\n\r\nThanks a lot :(\n", "before_files": [{"content": "\"\"\"Package used to manage the .mackup.cfg config file.\"\"\"\n\nimport os\nimport os.path\n\nfrom .constants import (MACKUP_BACKUP_PATH,\n MACKUP_CONFIG_FILE,\n ENGINE_DROPBOX,\n ENGINE_GDRIVE,\n ENGINE_COPY,\n ENGINE_ICLOUD,\n ENGINE_BOX,\n ENGINE_FS)\n\nfrom .utils import (error,\n get_dropbox_folder_location,\n get_copy_folder_location,\n get_google_drive_folder_location,\n get_icloud_folder_location,\n get_box_folder_location)\ntry:\n import configparser\nexcept ImportError:\n import ConfigParser as configparser\n\n\nclass Config(object):\n\n \"\"\"The Mackup Config class.\"\"\"\n\n def __init__(self, filename=None):\n \"\"\"\n Create a Config instance.\n\n Args:\n filename (str): Optional filename of the config file. 
If empty,\n defaults to MACKUP_CONFIG_FILE\n \"\"\"\n assert isinstance(filename, str) or filename is None\n\n # Initialize the parser\n self._parser = self._setup_parser(filename)\n\n # Do we have an old config file ?\n self._warn_on_old_config()\n\n # Get the storage engine\n self._engine = self._parse_engine()\n\n # Get the path where the Mackup folder is\n self._path = self._parse_path()\n\n # Get the directory replacing 'Mackup', if any\n self._directory = self._parse_directory()\n\n # Get the list of apps to ignore\n self._apps_to_ignore = self._parse_apps_to_ignore()\n\n # Get the list of apps to allow\n self._apps_to_sync = self._parse_apps_to_sync()\n\n @property\n def engine(self):\n \"\"\"\n The engine used by the storage.\n\n ENGINE_DROPBOX, ENGINE_GDRIVE, ENGINE_COPY, ENGINE_ICLOUD, ENGINE_BOX\n or ENGINE_FS.\n\n Returns:\n str\n \"\"\"\n return str(self._engine)\n\n @property\n def path(self):\n \"\"\"\n Path to the Mackup configuration files.\n\n The path to the directory where Mackup is gonna create and store his\n directory.\n\n Returns:\n str\n \"\"\"\n return str(self._path)\n\n @property\n def directory(self):\n \"\"\"\n The name of the Mackup directory, named Mackup by default.\n\n Returns:\n str\n \"\"\"\n return str(self._directory)\n\n @property\n def fullpath(self):\n \"\"\"\n Full path to the Mackup configuration files.\n\n The full path to the directory when Mackup is storing the configuration\n files.\n\n Returns:\n str\n \"\"\"\n return str(os.path.join(self.path, self.directory))\n\n @property\n def apps_to_ignore(self):\n \"\"\"\n Get the list of applications ignored in the config file.\n\n Returns:\n set. Set of application names to ignore, lowercase\n \"\"\"\n return set(self._apps_to_ignore)\n\n @property\n def apps_to_sync(self):\n \"\"\"\n Get the list of applications allowed in the config file.\n\n Returns:\n set. Set of application names to allow, lowercase\n \"\"\"\n return set(self._apps_to_sync)\n\n def _setup_parser(self, filename=None):\n \"\"\"\n Configure the ConfigParser instance the way we want it.\n\n Args:\n filename (str) or None\n\n Returns:\n SafeConfigParser\n \"\"\"\n assert isinstance(filename, str) or filename is None\n\n # If we are not overriding the config filename\n if not filename:\n filename = MACKUP_CONFIG_FILE\n\n parser = configparser.SafeConfigParser(allow_no_value=True)\n parser.read(os.path.join(os.path.join(os.environ['HOME'], filename)))\n\n return parser\n\n def _warn_on_old_config(self):\n \"\"\"Warn the user if an old config format is detected.\"\"\"\n # Is an old setion is in the config file ?\n old_sections = ['Allowed Applications', 'Ignored Applications']\n for old_section in old_sections:\n if self._parser.has_section(old_section):\n error(\"Old config file detected. Aborting.\\n\"\n \"\\n\"\n \"An old section (e.g. 
[Allowed Applications]\"\n \" or [Ignored Applications] has been detected\"\n \" in your {} file.\\n\"\n \"I'd rather do nothing than do something you\"\n \" do not want me to do.\\n\"\n \"\\n\"\n \"Please read the up to date documentation on\"\n \" <https://github.com/lra/mackup> and migrate\"\n \" your configuration file.\"\n .format(MACKUP_CONFIG_FILE))\n\n def _parse_engine(self):\n \"\"\"\n Parse the storage engine in the config.\n\n Returns:\n str\n \"\"\"\n if self._parser.has_option('storage', 'engine'):\n engine = str(self._parser.get('storage', 'engine'))\n else:\n engine = ENGINE_DROPBOX\n\n assert isinstance(engine, str)\n\n if engine not in [ENGINE_DROPBOX,\n ENGINE_GDRIVE,\n ENGINE_COPY,\n ENGINE_ICLOUD,\n ENGINE_BOX,\n ENGINE_FS]:\n raise ConfigError('Unknown storage engine: {}'.format(engine))\n\n return str(engine)\n\n def _parse_path(self):\n \"\"\"\n Parse the storage path in the config.\n\n Returns:\n str\n \"\"\"\n if self.engine == ENGINE_DROPBOX:\n path = get_dropbox_folder_location()\n elif self.engine == ENGINE_GDRIVE:\n path = get_google_drive_folder_location()\n elif self.engine == ENGINE_COPY:\n path = get_copy_folder_location()\n elif self.engine == ENGINE_ICLOUD:\n path = get_icloud_folder_location()\n elif self.engine == ENGINE_BOX:\n path = get_box_folder_location()\n elif self.engine == ENGINE_FS:\n if self._parser.has_option('storage', 'path'):\n cfg_path = self._parser.get('storage', 'path')\n path = os.path.join(os.environ['HOME'], cfg_path)\n else:\n raise ConfigError(\"The required 'path' can't be found while\"\n \" the 'file_system' engine is used.\")\n\n return str(path)\n\n def _parse_directory(self):\n \"\"\"\n Parse the storage directory in the config.\n\n Returns:\n str\n \"\"\"\n if self._parser.has_option('storage', 'directory'):\n directory = self._parser.get('storage', 'directory')\n else:\n directory = MACKUP_BACKUP_PATH\n\n return str(directory)\n\n def _parse_apps_to_ignore(self):\n \"\"\"\n Parse the applications to ignore in the config.\n\n Returns:\n set\n \"\"\"\n # We ignore nothing by default\n apps_to_ignore = set()\n\n # Is the \"[applications_to_ignore]\" in the cfg file ?\n section_title = 'applications_to_ignore'\n if self._parser.has_section(section_title):\n apps_to_ignore = set(self._parser.options(section_title))\n\n return apps_to_ignore\n\n def _parse_apps_to_sync(self):\n \"\"\"\n Parse the applications to backup in the config.\n\n Returns:\n set\n \"\"\"\n # We allow nothing by default\n apps_to_sync = set()\n\n # Is the \"[applications_to_sync]\" section in the cfg file ?\n section_title = 'applications_to_sync'\n if self._parser.has_section(section_title):\n apps_to_sync = set(self._parser.options(section_title))\n\n return apps_to_sync\n\n\nclass ConfigError(Exception):\n\n \"\"\"Exception used for handle errors in the configuration.\"\"\"\n\n pass\n", "path": "mackup/config.py"}], "after_files": [{"content": "\"\"\"Package used to manage the .mackup.cfg config file.\"\"\"\n\nimport os\nimport os.path\n\nfrom .constants import (CUSTOM_APPS_DIR,\n MACKUP_BACKUP_PATH,\n MACKUP_CONFIG_FILE,\n ENGINE_DROPBOX,\n ENGINE_GDRIVE,\n ENGINE_COPY,\n ENGINE_ICLOUD,\n ENGINE_BOX,\n ENGINE_FS)\n\nfrom .utils import (error,\n get_dropbox_folder_location,\n get_copy_folder_location,\n get_google_drive_folder_location,\n get_icloud_folder_location,\n get_box_folder_location)\ntry:\n import configparser\nexcept ImportError:\n import ConfigParser as configparser\n\n\nclass Config(object):\n\n \"\"\"The Mackup Config class.\"\"\"\n\n def 
__init__(self, filename=None):\n \"\"\"\n Create a Config instance.\n\n Args:\n filename (str): Optional filename of the config file. If empty,\n defaults to MACKUP_CONFIG_FILE\n \"\"\"\n assert isinstance(filename, str) or filename is None\n\n # Initialize the parser\n self._parser = self._setup_parser(filename)\n\n # Do we have an old config file ?\n self._warn_on_old_config()\n\n # Get the storage engine\n self._engine = self._parse_engine()\n\n # Get the path where the Mackup folder is\n self._path = self._parse_path()\n\n # Get the directory replacing 'Mackup', if any\n self._directory = self._parse_directory()\n\n # Get the list of apps to ignore\n self._apps_to_ignore = self._parse_apps_to_ignore()\n\n # Get the list of apps to allow\n self._apps_to_sync = self._parse_apps_to_sync()\n\n @property\n def engine(self):\n \"\"\"\n The engine used by the storage.\n\n ENGINE_DROPBOX, ENGINE_GDRIVE, ENGINE_COPY, ENGINE_ICLOUD, ENGINE_BOX\n or ENGINE_FS.\n\n Returns:\n str\n \"\"\"\n return str(self._engine)\n\n @property\n def path(self):\n \"\"\"\n Path to the Mackup configuration files.\n\n The path to the directory where Mackup is gonna create and store his\n directory.\n\n Returns:\n str\n \"\"\"\n return str(self._path)\n\n @property\n def directory(self):\n \"\"\"\n The name of the Mackup directory, named Mackup by default.\n\n Returns:\n str\n \"\"\"\n return str(self._directory)\n\n @property\n def fullpath(self):\n \"\"\"\n Full path to the Mackup configuration files.\n\n The full path to the directory when Mackup is storing the configuration\n files.\n\n Returns:\n str\n \"\"\"\n return str(os.path.join(self.path, self.directory))\n\n @property\n def apps_to_ignore(self):\n \"\"\"\n Get the list of applications ignored in the config file.\n\n Returns:\n set. Set of application names to ignore, lowercase\n \"\"\"\n return set(self._apps_to_ignore)\n\n @property\n def apps_to_sync(self):\n \"\"\"\n Get the list of applications allowed in the config file.\n\n Returns:\n set. Set of application names to allow, lowercase\n \"\"\"\n return set(self._apps_to_sync)\n\n def _setup_parser(self, filename=None):\n \"\"\"\n Configure the ConfigParser instance the way we want it.\n\n Args:\n filename (str) or None\n\n Returns:\n SafeConfigParser\n \"\"\"\n assert isinstance(filename, str) or filename is None\n\n # If we are not overriding the config filename\n if not filename:\n filename = MACKUP_CONFIG_FILE\n\n parser = configparser.SafeConfigParser(allow_no_value=True)\n parser.read(os.path.join(os.path.join(os.environ['HOME'], filename)))\n\n return parser\n\n def _warn_on_old_config(self):\n \"\"\"Warn the user if an old config format is detected.\"\"\"\n # Is an old setion is in the config file ?\n old_sections = ['Allowed Applications', 'Ignored Applications']\n for old_section in old_sections:\n if self._parser.has_section(old_section):\n error(\"Old config file detected. Aborting.\\n\"\n \"\\n\"\n \"An old section (e.g. 
[Allowed Applications]\"\n \" or [Ignored Applications] has been detected\"\n \" in your {} file.\\n\"\n \"I'd rather do nothing than do something you\"\n \" do not want me to do.\\n\"\n \"\\n\"\n \"Please read the up to date documentation on\"\n \" <https://github.com/lra/mackup> and migrate\"\n \" your configuration file.\"\n .format(MACKUP_CONFIG_FILE))\n\n def _parse_engine(self):\n \"\"\"\n Parse the storage engine in the config.\n\n Returns:\n str\n \"\"\"\n if self._parser.has_option('storage', 'engine'):\n engine = str(self._parser.get('storage', 'engine'))\n else:\n engine = ENGINE_DROPBOX\n\n assert isinstance(engine, str)\n\n if engine not in [ENGINE_DROPBOX,\n ENGINE_GDRIVE,\n ENGINE_COPY,\n ENGINE_ICLOUD,\n ENGINE_BOX,\n ENGINE_FS]:\n raise ConfigError('Unknown storage engine: {}'.format(engine))\n\n return str(engine)\n\n def _parse_path(self):\n \"\"\"\n Parse the storage path in the config.\n\n Returns:\n str\n \"\"\"\n if self.engine == ENGINE_DROPBOX:\n path = get_dropbox_folder_location()\n elif self.engine == ENGINE_GDRIVE:\n path = get_google_drive_folder_location()\n elif self.engine == ENGINE_COPY:\n path = get_copy_folder_location()\n elif self.engine == ENGINE_ICLOUD:\n path = get_icloud_folder_location()\n elif self.engine == ENGINE_BOX:\n path = get_box_folder_location()\n elif self.engine == ENGINE_FS:\n if self._parser.has_option('storage', 'path'):\n cfg_path = self._parser.get('storage', 'path')\n path = os.path.join(os.environ['HOME'], cfg_path)\n else:\n raise ConfigError(\"The required 'path' can't be found while\"\n \" the 'file_system' engine is used.\")\n\n return str(path)\n\n def _parse_directory(self):\n \"\"\"\n Parse the storage directory in the config.\n\n Returns:\n str\n \"\"\"\n if self._parser.has_option('storage', 'directory'):\n directory = self._parser.get('storage', 'directory')\n # Don't allow CUSTOM_APPS_DIR as a storage directory\n if directory == CUSTOM_APPS_DIR:\n raise ConfigError(\"{} cannot be used as a storage directory.\"\n .format(CUSTOM_APPS_DIR))\n else:\n directory = MACKUP_BACKUP_PATH\n\n return str(directory)\n\n def _parse_apps_to_ignore(self):\n \"\"\"\n Parse the applications to ignore in the config.\n\n Returns:\n set\n \"\"\"\n # We ignore nothing by default\n apps_to_ignore = set()\n\n # Is the \"[applications_to_ignore]\" in the cfg file ?\n section_title = 'applications_to_ignore'\n if self._parser.has_section(section_title):\n apps_to_ignore = set(self._parser.options(section_title))\n\n return apps_to_ignore\n\n def _parse_apps_to_sync(self):\n \"\"\"\n Parse the applications to backup in the config.\n\n Returns:\n set\n \"\"\"\n # We allow nothing by default\n apps_to_sync = set()\n\n # Is the \"[applications_to_sync]\" section in the cfg file ?\n section_title = 'applications_to_sync'\n if self._parser.has_section(section_title):\n apps_to_sync = set(self._parser.options(section_title))\n\n return apps_to_sync\n\n\nclass ConfigError(Exception):\n\n \"\"\"Exception used for handle errors in the configuration.\"\"\"\n\n pass\n", "path": "mackup/config.py"}]}
2,737
200
gh_patches_debug_19839
rasdani/github-patches
git_diff
translate__pootle-3723
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Check 'delete' checkbox for a new entry renders error _Version: 2.5.1_ _Originally posted by Mike Massonnet:_ _Imported from attachment 976:_ ``` pytb Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 109, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.7/dist-packages/pootle/core/decorators.py", line 87, in wrapped return func(request, path_obj, *args, **kwargs) File "/usr/lib/python2.7/dist-packages/pootle/core/decorators.py", line 180, in _wrapped return func(request, *args, **kwargs) File "/usr/lib/python2.7/dist-packages/pootle/apps/pootle_project/views.py", line 427, in project_admin exclude=('description',)) File "/usr/lib/python2.7/dist-packages/pootle/apps/pootle_app/views/admin/util.py", line 191, in edit queryset=queryset, **kwargs) File "/usr/lib/python2.7/dist-packages/pootle/apps/pootle_app/views/admin/util.py", line 172, in process_modelformset if formset.is_valid(): File "/usr/lib/python2.7/dist-packages/django/forms/formsets.py", line 272, in is_valid err = self.errors File "/usr/lib/python2.7/dist-packages/django/forms/formsets.py", line 250, in _get_errors self.full_clean() File "/usr/lib/python2.7/dist-packages/django/forms/formsets.py", line 293, in full_clean self._errors.append(form.errors) File "/usr/lib/python2.7/dist-packages/django/forms/forms.py", line 115, in _get_errors self.full_clean() File "/usr/lib/python2.7/dist-packages/django/forms/forms.py", line 272, in full_clean self._post_clean() File "/usr/lib/python2.7/dist-packages/django/forms/models.py", line 309, in _post_clean self.instance = construct_instance(self, self.instance, opts.fields, opts.exclude) File "/usr/lib/python2.7/dist-packages/django/forms/models.py", line 51, in construct_instance f.save_form_data(instance, cleaned_data[f.name]) File "/usr/lib/python2.7/dist-packages/django/db/models/fields/__init__.py", line 454, in save_form_data setattr(instance, self.name, data) File "/usr/lib/python2.7/dist-packages/django/db/models/fields/related.py", line 362, in __set__ (instance._meta.object_name, self.field.name)) ValueError: Cannot assign None: "TranslationProject.language" does not allow null values. ``` _Created [attachment 976](http://bugs.locamotion.org/attachment.cgi?id=976)_ pootle-check-all-traceback.txt Hello, In the adminstration page of a project, checking the Delete checkbox for a new entry renders an error. This checkbox is also automatically selected when checking the Delete checkbox at the top of the form, to check all checkboxes. The following error is rendered: "Cannot assign None: "TranslationProject.language" does not allow null values." See attached screenshots and traceback. Regards, Mike --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pootle/apps/pootle_app/views/admin/util.py` Content: ``` 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 # 4 # Copyright (C) Pootle contributors. 5 # 6 # This file is a part of the Pootle project. It is distributed under the GPL3 7 # or later license. See the LICENSE file for a copy of the license and the 8 # AUTHORS file for copyright and authorship information. 
9 10 from django.forms.models import modelformset_factory 11 from django.forms.utils import ErrorList 12 from django.shortcuts import render 13 from django.utils.safestring import mark_safe 14 from django.utils.translation import ugettext as _ 15 16 from pootle.core.paginator import paginate 17 18 19 def form_set_as_table(formset, link=None, linkfield='code'): 20 """Create an HTML table from the formset. The first form in the 21 formset is used to obtain a list of the fields that need to be 22 displayed. 23 24 Errors, if there are any, appear in the row above the form which 25 triggered any errors. 26 27 If the forms are based on database models, the order of the 28 columns is determined by the order of the fields in the model 29 specification. 30 """ 31 32 def add_header(result, fields, form): 33 result.append('<tr>\n') 34 for field in fields: 35 widget = form.fields[field].widget 36 widget_name = widget.__class__.__name__ 37 38 if widget.is_hidden or \ 39 widget_name in ('CheckboxInput', 'SelectMultiple'): 40 result.append('<th class="sorttable_nosort">') 41 else: 42 result.append('<th>') 43 44 if widget_name in ('CheckboxInput',): 45 result.append(form[field].as_widget()) 46 result.append(form[field].label_tag()) 47 elif form.fields[field].label is not None and not widget.is_hidden: 48 result.append(unicode(form.fields[field].label)) 49 50 result.append('</th>\n') 51 result.append('</tr>\n') 52 53 def add_footer(result, fields, form): 54 result.append('<tr>\n') 55 for field in fields: 56 field_obj = form.fields[field] 57 result.append('<td>') 58 59 if field_obj.label is not None and not field_obj.widget.is_hidden: 60 result.append(unicode(field_obj.label)) 61 62 result.append('</td>\n') 63 result.append('</tr>\n') 64 65 def add_errors(result, fields, form): 66 # If the form has errors, then we'll add a table row with the 67 # errors. 68 if len(form.errors) > 0: 69 result.append('<tr>\n') 70 for field in fields: 71 result.append('<td>') 72 result.append(form.errors.get(field, ErrorList()).as_ul()) 73 result.append('</td>\n') 74 75 result.append('</tr>\n') 76 77 def add_widgets(result, fields, form, link): 78 result.append('<tr class="item">\n') 79 for i, field in enumerate(fields): 80 result.append('<td class="%s">' % field) 81 # Include a hidden element containing the form's id to the 82 # first column. 83 if i == 0: 84 result.append(form['id'].as_hidden()) 85 86 # `link` indicates whether we put the first field as a link or as 87 # widget 88 if field == linkfield and linkfield in form.initial and link: 89 if callable(link): 90 result.append(link(form.instance)) 91 result.append(form[field].as_hidden()) 92 else: 93 result.append(form[field].as_widget()) 94 95 result.append('</td>\n') 96 result.append('</tr>\n') 97 98 result = [] 99 try: 100 first_form = formset.forms[0] 101 # Get the fields of the form, but filter our the 'id' field, 102 # since we don't want to print a table column for it. 
103 fields = [field for field in first_form.fields if field != 'id'] 104 105 result.append('<thead>\n') 106 add_header(result, fields, first_form) 107 result.append('</thead>\n') 108 result.append('<tfoot>\n') 109 add_footer(result, fields, first_form) 110 result.append('</tfoot>\n') 111 112 result.append('<tbody>\n') 113 for i, form in enumerate(formset.forms): 114 add_errors(result, fields, form) 115 add_widgets(result, fields, form, link) 116 117 result.append('</tbody>\n') 118 except IndexError: 119 result.append('<tr>\n') 120 result.append('<td>\n') 121 result.append(_('No files in this project.')) 122 result.append('</td>\n') 123 result.append('</tr>\n') 124 125 return u''.join(result) 126 127 128 def process_modelformset(request, model_class, queryset, **kwargs): 129 """With the Django model class `model_class` and the given `queryset`, 130 construct a formset process its submission.""" 131 132 # Create a formset class for the model `model_class` (i.e. it will contain 133 # forms whose contents are based on the fields of `model_class`); 134 # parameters for the construction of the forms used in the formset should 135 # be in kwargs. 136 formset_class = modelformset_factory(model_class, **kwargs) 137 138 if queryset is None: 139 queryset = model_class.objects.all() 140 141 # If the request is a POST, we want to possibly update our data 142 if request.method == 'POST' and request.POST: 143 # Create a formset from all the 'model_class' instances whose values 144 # will be updated using the contents of request.POST 145 objects = paginate(request, queryset) 146 formset = formset_class(request.POST, queryset=objects.object_list) 147 148 # Validate all the forms in the formset 149 if formset.is_valid(): 150 # If all is well, Django can save all our data for us 151 formset.save() 152 else: 153 # Otherwise, complain to the user that something went wrong 154 return formset, _("There are errors in the form. Please review " 155 "the problems below."), objects 156 157 # Hack to force reevaluation of same query 158 queryset = queryset.filter() 159 160 objects = paginate(request, queryset) 161 162 return formset_class(queryset=objects.object_list), None, objects 163 164 165 def edit(request, template, model_class, ctx=None, 166 link=None, linkfield='code', queryset=None, **kwargs): 167 formset, msg, objects = process_modelformset(request, model_class, 168 queryset=queryset, **kwargs) 169 if ctx is None: 170 ctx = {} 171 172 ctx.update({ 173 'formset_text': mark_safe(form_set_as_table(formset, link, linkfield)), 174 'formset': formset, 175 'objects': objects, 176 'error_msg': msg, 177 }) 178 179 return render(request, template, ctx) 180 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pootle/apps/pootle_app/views/admin/util.py b/pootle/apps/pootle_app/views/admin/util.py --- a/pootle/apps/pootle_app/views/admin/util.py +++ b/pootle/apps/pootle_app/views/admin/util.py @@ -7,6 +7,7 @@ # or later license. See the LICENSE file for a copy of the license and the # AUTHORS file for copyright and authorship information. +from django import forms from django.forms.models import modelformset_factory from django.forms.utils import ErrorList from django.shortcuts import render @@ -110,6 +111,10 @@ result.append('</tfoot>\n') result.append('<tbody>\n') + + # Do not display the delete checkbox for the 'add a new entry' form. + formset.forms[-1].fields['DELETE'].widget = forms.HiddenInput() + for i, form in enumerate(formset.forms): add_errors(result, fields, form) add_widgets(result, fields, form, link)
{"golden_diff": "diff --git a/pootle/apps/pootle_app/views/admin/util.py b/pootle/apps/pootle_app/views/admin/util.py\n--- a/pootle/apps/pootle_app/views/admin/util.py\n+++ b/pootle/apps/pootle_app/views/admin/util.py\n@@ -7,6 +7,7 @@\n # or later license. See the LICENSE file for a copy of the license and the\n # AUTHORS file for copyright and authorship information.\n \n+from django import forms\n from django.forms.models import modelformset_factory\n from django.forms.utils import ErrorList\n from django.shortcuts import render\n@@ -110,6 +111,10 @@\n result.append('</tfoot>\\n')\n \n result.append('<tbody>\\n')\n+\n+ # Do not display the delete checkbox for the 'add a new entry' form.\n+ formset.forms[-1].fields['DELETE'].widget = forms.HiddenInput()\n+\n for i, form in enumerate(formset.forms):\n add_errors(result, fields, form)\n add_widgets(result, fields, form, link)\n", "issue": "Check 'delete' checkbox for a new entry renders error\n_Version: 2.5.1_\n\n_Originally posted by Mike Massonnet:_\n\n_Imported from attachment 976:_\n\n``` pytb\nTraceback (most recent call last):\n File \"/usr/lib/python2.7/dist-packages/django/core/handlers/base.py\", line 109, in get_response\n response = callback(request, *callback_args, **callback_kwargs)\n File \"/usr/lib/python2.7/dist-packages/pootle/core/decorators.py\", line 87, in wrapped\n return func(request, path_obj, *args, **kwargs)\n File \"/usr/lib/python2.7/dist-packages/pootle/core/decorators.py\", line 180, in _wrapped\n return func(request, *args, **kwargs)\n File \"/usr/lib/python2.7/dist-packages/pootle/apps/pootle_project/views.py\", line 427, in project_admin\n exclude=('description',))\n File \"/usr/lib/python2.7/dist-packages/pootle/apps/pootle_app/views/admin/util.py\", line 191, in edit\n queryset=queryset, **kwargs)\n File \"/usr/lib/python2.7/dist-packages/pootle/apps/pootle_app/views/admin/util.py\", line 172, in process_modelformset\n if formset.is_valid():\n File \"/usr/lib/python2.7/dist-packages/django/forms/formsets.py\", line 272, in is_valid\n err = self.errors\n File \"/usr/lib/python2.7/dist-packages/django/forms/formsets.py\", line 250, in _get_errors\n self.full_clean()\n File \"/usr/lib/python2.7/dist-packages/django/forms/formsets.py\", line 293, in full_clean\n self._errors.append(form.errors)\n File \"/usr/lib/python2.7/dist-packages/django/forms/forms.py\", line 115, in _get_errors\n self.full_clean()\n File \"/usr/lib/python2.7/dist-packages/django/forms/forms.py\", line 272, in full_clean\n self._post_clean()\n File \"/usr/lib/python2.7/dist-packages/django/forms/models.py\", line 309, in _post_clean\n self.instance = construct_instance(self, self.instance, opts.fields, opts.exclude)\n File \"/usr/lib/python2.7/dist-packages/django/forms/models.py\", line 51, in construct_instance\n f.save_form_data(instance, cleaned_data[f.name])\n File \"/usr/lib/python2.7/dist-packages/django/db/models/fields/__init__.py\", line 454, in save_form_data\n setattr(instance, self.name, data)\n File \"/usr/lib/python2.7/dist-packages/django/db/models/fields/related.py\", line 362, in __set__\n (instance._meta.object_name, self.field.name))\nValueError: Cannot assign None: \"TranslationProject.language\" does not allow null values.\n```\n\n_Created [attachment 976](http://bugs.locamotion.org/attachment.cgi?id=976)_\n\npootle-check-all-traceback.txt\n\nHello,\n\nIn the adminstration page of a project, checking the Delete checkbox for a new entry renders an error. 
This checkbox is also automatically selected when checking the Delete checkbox at the top of the form, to check all checkboxes.\n\nThe following error is rendered:\n\"Cannot assign None: \"TranslationProject.language\" does not allow null values.\"\n\nSee attached screenshots and traceback.\n\nRegards,\nMike\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.forms.models import modelformset_factory\nfrom django.forms.utils import ErrorList\nfrom django.shortcuts import render\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import ugettext as _\n\nfrom pootle.core.paginator import paginate\n\n\ndef form_set_as_table(formset, link=None, linkfield='code'):\n \"\"\"Create an HTML table from the formset. The first form in the\n formset is used to obtain a list of the fields that need to be\n displayed.\n\n Errors, if there are any, appear in the row above the form which\n triggered any errors.\n\n If the forms are based on database models, the order of the\n columns is determined by the order of the fields in the model\n specification.\n \"\"\"\n\n def add_header(result, fields, form):\n result.append('<tr>\\n')\n for field in fields:\n widget = form.fields[field].widget\n widget_name = widget.__class__.__name__\n\n if widget.is_hidden or \\\n widget_name in ('CheckboxInput', 'SelectMultiple'):\n result.append('<th class=\"sorttable_nosort\">')\n else:\n result.append('<th>')\n\n if widget_name in ('CheckboxInput',):\n result.append(form[field].as_widget())\n result.append(form[field].label_tag())\n elif form.fields[field].label is not None and not widget.is_hidden:\n result.append(unicode(form.fields[field].label))\n\n result.append('</th>\\n')\n result.append('</tr>\\n')\n\n def add_footer(result, fields, form):\n result.append('<tr>\\n')\n for field in fields:\n field_obj = form.fields[field]\n result.append('<td>')\n\n if field_obj.label is not None and not field_obj.widget.is_hidden:\n result.append(unicode(field_obj.label))\n\n result.append('</td>\\n')\n result.append('</tr>\\n')\n\n def add_errors(result, fields, form):\n # If the form has errors, then we'll add a table row with the\n # errors.\n if len(form.errors) > 0:\n result.append('<tr>\\n')\n for field in fields:\n result.append('<td>')\n result.append(form.errors.get(field, ErrorList()).as_ul())\n result.append('</td>\\n')\n\n result.append('</tr>\\n')\n\n def add_widgets(result, fields, form, link):\n result.append('<tr class=\"item\">\\n')\n for i, field in enumerate(fields):\n result.append('<td class=\"%s\">' % field)\n # Include a hidden element containing the form's id to the\n # first column.\n if i == 0:\n result.append(form['id'].as_hidden())\n\n # `link` indicates whether we put the first field as a link or as\n # widget\n if field == linkfield and linkfield in form.initial and link:\n if callable(link):\n result.append(link(form.instance))\n result.append(form[field].as_hidden())\n else:\n result.append(form[field].as_widget())\n\n result.append('</td>\\n')\n result.append('</tr>\\n')\n\n result = []\n try:\n first_form = formset.forms[0]\n # Get the fields of the form, but filter our the 'id' field,\n # since we don't want to print a table column for it.\n fields = [field for field in 
first_form.fields if field != 'id']\n\n result.append('<thead>\\n')\n add_header(result, fields, first_form)\n result.append('</thead>\\n')\n result.append('<tfoot>\\n')\n add_footer(result, fields, first_form)\n result.append('</tfoot>\\n')\n\n result.append('<tbody>\\n')\n for i, form in enumerate(formset.forms):\n add_errors(result, fields, form)\n add_widgets(result, fields, form, link)\n\n result.append('</tbody>\\n')\n except IndexError:\n result.append('<tr>\\n')\n result.append('<td>\\n')\n result.append(_('No files in this project.'))\n result.append('</td>\\n')\n result.append('</tr>\\n')\n\n return u''.join(result)\n\n\ndef process_modelformset(request, model_class, queryset, **kwargs):\n \"\"\"With the Django model class `model_class` and the given `queryset`,\n construct a formset process its submission.\"\"\"\n\n # Create a formset class for the model `model_class` (i.e. it will contain\n # forms whose contents are based on the fields of `model_class`);\n # parameters for the construction of the forms used in the formset should\n # be in kwargs.\n formset_class = modelformset_factory(model_class, **kwargs)\n\n if queryset is None:\n queryset = model_class.objects.all()\n\n # If the request is a POST, we want to possibly update our data\n if request.method == 'POST' and request.POST:\n # Create a formset from all the 'model_class' instances whose values\n # will be updated using the contents of request.POST\n objects = paginate(request, queryset)\n formset = formset_class(request.POST, queryset=objects.object_list)\n\n # Validate all the forms in the formset\n if formset.is_valid():\n # If all is well, Django can save all our data for us\n formset.save()\n else:\n # Otherwise, complain to the user that something went wrong\n return formset, _(\"There are errors in the form. Please review \"\n \"the problems below.\"), objects\n\n # Hack to force reevaluation of same query\n queryset = queryset.filter()\n\n objects = paginate(request, queryset)\n\n return formset_class(queryset=objects.object_list), None, objects\n\n\ndef edit(request, template, model_class, ctx=None,\n link=None, linkfield='code', queryset=None, **kwargs):\n formset, msg, objects = process_modelformset(request, model_class,\n queryset=queryset, **kwargs)\n if ctx is None:\n ctx = {}\n\n ctx.update({\n 'formset_text': mark_safe(form_set_as_table(formset, link, linkfield)),\n 'formset': formset,\n 'objects': objects,\n 'error_msg': msg,\n })\n\n return render(request, template, ctx)\n", "path": "pootle/apps/pootle_app/views/admin/util.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django import forms\nfrom django.forms.models import modelformset_factory\nfrom django.forms.utils import ErrorList\nfrom django.shortcuts import render\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import ugettext as _\n\nfrom pootle.core.paginator import paginate\n\n\ndef form_set_as_table(formset, link=None, linkfield='code'):\n \"\"\"Create an HTML table from the formset. 
The first form in the\n formset is used to obtain a list of the fields that need to be\n displayed.\n\n Errors, if there are any, appear in the row above the form which\n triggered any errors.\n\n If the forms are based on database models, the order of the\n columns is determined by the order of the fields in the model\n specification.\n \"\"\"\n\n def add_header(result, fields, form):\n result.append('<tr>\\n')\n for field in fields:\n widget = form.fields[field].widget\n widget_name = widget.__class__.__name__\n\n if widget.is_hidden or \\\n widget_name in ('CheckboxInput', 'SelectMultiple'):\n result.append('<th class=\"sorttable_nosort\">')\n else:\n result.append('<th>')\n\n if widget_name in ('CheckboxInput',):\n result.append(form[field].as_widget())\n result.append(form[field].label_tag())\n elif form.fields[field].label is not None and not widget.is_hidden:\n result.append(unicode(form.fields[field].label))\n\n result.append('</th>\\n')\n result.append('</tr>\\n')\n\n def add_footer(result, fields, form):\n result.append('<tr>\\n')\n for field in fields:\n field_obj = form.fields[field]\n result.append('<td>')\n\n if field_obj.label is not None and not field_obj.widget.is_hidden:\n result.append(unicode(field_obj.label))\n\n result.append('</td>\\n')\n result.append('</tr>\\n')\n\n def add_errors(result, fields, form):\n # If the form has errors, then we'll add a table row with the\n # errors.\n if len(form.errors) > 0:\n result.append('<tr>\\n')\n for field in fields:\n result.append('<td>')\n result.append(form.errors.get(field, ErrorList()).as_ul())\n result.append('</td>\\n')\n\n result.append('</tr>\\n')\n\n def add_widgets(result, fields, form, link):\n result.append('<tr class=\"item\">\\n')\n for i, field in enumerate(fields):\n result.append('<td class=\"%s\">' % field)\n # Include a hidden element containing the form's id to the\n # first column.\n if i == 0:\n result.append(form['id'].as_hidden())\n\n # `link` indicates whether we put the first field as a link or as\n # widget\n if field == linkfield and linkfield in form.initial and link:\n if callable(link):\n result.append(link(form.instance))\n result.append(form[field].as_hidden())\n else:\n result.append(form[field].as_widget())\n\n result.append('</td>\\n')\n result.append('</tr>\\n')\n\n result = []\n try:\n first_form = formset.forms[0]\n # Get the fields of the form, but filter our the 'id' field,\n # since we don't want to print a table column for it.\n fields = [field for field in first_form.fields if field != 'id']\n\n result.append('<thead>\\n')\n add_header(result, fields, first_form)\n result.append('</thead>\\n')\n result.append('<tfoot>\\n')\n add_footer(result, fields, first_form)\n result.append('</tfoot>\\n')\n\n result.append('<tbody>\\n')\n\n # Do not display the delete checkbox for the 'add a new entry' form.\n formset.forms[-1].fields['DELETE'].widget = forms.HiddenInput()\n\n for i, form in enumerate(formset.forms):\n add_errors(result, fields, form)\n add_widgets(result, fields, form, link)\n\n result.append('</tbody>\\n')\n except IndexError:\n result.append('<tr>\\n')\n result.append('<td>\\n')\n result.append(_('No files in this project.'))\n result.append('</td>\\n')\n result.append('</tr>\\n')\n\n return u''.join(result)\n\n\ndef process_modelformset(request, model_class, queryset, **kwargs):\n \"\"\"With the Django model class `model_class` and the given `queryset`,\n construct a formset process its submission.\"\"\"\n\n # Create a formset class for the model `model_class` (i.e. 
it will contain\n # forms whose contents are based on the fields of `model_class`);\n # parameters for the construction of the forms used in the formset should\n # be in kwargs.\n formset_class = modelformset_factory(model_class, **kwargs)\n\n if queryset is None:\n queryset = model_class.objects.all()\n\n # If the request is a POST, we want to possibly update our data\n if request.method == 'POST' and request.POST:\n # Create a formset from all the 'model_class' instances whose values\n # will be updated using the contents of request.POST\n objects = paginate(request, queryset)\n formset = formset_class(request.POST, queryset=objects.object_list)\n\n # Validate all the forms in the formset\n if formset.is_valid():\n # If all is well, Django can save all our data for us\n formset.save()\n else:\n # Otherwise, complain to the user that something went wrong\n return formset, _(\"There are errors in the form. Please review \"\n \"the problems below.\"), objects\n\n # Hack to force reevaluation of same query\n queryset = queryset.filter()\n\n objects = paginate(request, queryset)\n\n return formset_class(queryset=objects.object_list), None, objects\n\n\ndef edit(request, template, model_class, ctx=None,\n link=None, linkfield='code', queryset=None, **kwargs):\n formset, msg, objects = process_modelformset(request, model_class,\n queryset=queryset, **kwargs)\n if ctx is None:\n ctx = {}\n\n ctx.update({\n 'formset_text': mark_safe(form_set_as_table(formset, link, linkfield)),\n 'formset': formset,\n 'objects': objects,\n 'error_msg': msg,\n })\n\n return render(request, template, ctx)\n", "path": "pootle/apps/pootle_app/views/admin/util.py"}]}
2,914
232
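The patch in the record above works by hiding the `DELETE` checkbox that Django's model formsets add when `can_delete` is enabled. A minimal standalone sketch of that mechanism is shown below; it assumes a configured Django project and uses a hypothetical `MyModel`, which is not part of the record.

```python
from django import forms
from django.forms.models import modelformset_factory

# `MyModel` is a placeholder model class, used only for illustration.
FormSet = modelformset_factory(MyModel, fields="__all__", can_delete=True, extra=1)
formset = FormSet(queryset=MyModel.objects.all())

# `can_delete=True` adds a DELETE boolean field to every form. The extra
# "add a new entry" form comes last, so hide its checkbox to keep users from
# marking a not-yet-created row for deletion, as the patched view does.
formset.forms[-1].fields["DELETE"].widget = forms.HiddenInput()
```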
gh_patches_debug_17805
rasdani/github-patches
git_diff
scrapy__scrapy-3082
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Request serialization should fail for non-picklable objects The Pickle-based disk queues silently serialize requests that shouldn't be serialized in Python<=3.5. I found this problem when dumping a request with an `ItemLoader` object in its `meta` dict. Python 3.6 fails in [this line](https://github.com/scrapy/scrapy/blob/1.4/scrapy/squeues.py#L27) with `TypeError: can't pickle HtmlElement objects`, because the loader contains a `Selector`, which in turns contains an `HtmlElement` object. I tested this using the https://github.com/scrapinghub/scrapinghub-stack-scrapy repository, and found that `pickle.loads(pickle.dumps(selector))` doesn't fail, but generates a broken object. #### Python 2.7, Scrapy 1.3.3 (https://github.com/scrapinghub/scrapinghub-stack-scrapy/tree/branch-1.3) ``` root@04bfc6cf84cd:/# scrapy version -v Scrapy : 1.3.3 lxml : 3.7.2.0 libxml2 : 2.9.3 cssselect : 1.0.1 parsel : 1.1.0 w3lib : 1.17.0 Twisted : 16.6.0 Python : 2.7.14 (default, Dec 12 2017, 16:55:09) - [GCC 4.9.2] pyOpenSSL : 16.2.0 (OpenSSL 1.0.1t 3 May 2016) Platform : Linux-4.9.44-linuxkit-aufs-x86_64-with-debian-8.10 root@04bfc6cf84cd:/# scrapy shell "http://example.org" 2017-12-29 16:49:27 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapybot) (...) >>> from six.moves import cPickle as pickle >>> s2 = pickle.loads(pickle.dumps(response.selector, protocol=2)) >>> response.selector.css('a') [<Selector xpath=u'descendant-or-self::a' data=u'<a href="http://www.iana.org/domains/exa'>] >>> s2.css('a') Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/parsel/selector.py", line 227, in css return self.xpath(self._css2xpath(query)) File "/usr/local/lib/python2.7/site-packages/parsel/selector.py", line 203, in xpath **kwargs) File "src/lxml/lxml.etree.pyx", line 1584, in lxml.etree._Element.xpath (src/lxml/lxml.etree.c:59349) File "src/lxml/xpath.pxi", line 257, in lxml.etree.XPathElementEvaluator.__init__ (src/lxml/lxml.etree.c:170478) File "src/lxml/apihelpers.pxi", line 19, in lxml.etree._assertValidNode (src/lxml/lxml.etree.c:16482) AssertionError: invalid Element proxy at 140144569743064 ``` #### Python 3.5, Scrapy 1.3.3 (https://github.com/scrapinghub/scrapinghub-stack-scrapy/tree/branch-1.3-py3) ``` root@1945e2154919:/# scrapy version -v Scrapy : 1.3.3 lxml : 3.7.2.0 libxml2 : 2.9.3 cssselect : 1.0.1 parsel : 1.1.0 w3lib : 1.17.0 Twisted : 16.6.0 Python : 3.5.4 (default, Dec 12 2017, 16:43:39) - [GCC 4.9.2] pyOpenSSL : 16.2.0 (OpenSSL 1.0.1t 3 May 2016) Platform : Linux-4.9.44-linuxkit-aufs-x86_64-with-debian-8.10 root@1945e2154919:/# scrapy shell "http://example.org" 2017-12-29 16:52:37 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapybot) (...) 
>>> from six.moves import cPickle as pickle >>> s2 = pickle.loads(pickle.dumps(response.selector, protocol=2)) >>> response.selector.css('a') [<Selector xpath='descendant-or-self::a' data='<a href="http://www.iana.org/domains/exa'>] >>> s2.css('a') Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python3.5/site-packages/parsel/selector.py", line 227, in css return self.xpath(self._css2xpath(query)) File "/usr/local/lib/python3.5/site-packages/parsel/selector.py", line 203, in xpath **kwargs) File "src/lxml/lxml.etree.pyx", line 1584, in lxml.etree._Element.xpath (src/lxml/lxml.etree.c:59349) File "src/lxml/xpath.pxi", line 257, in lxml.etree.XPathElementEvaluator.__init__ (src/lxml/lxml.etree.c:170478) File "src/lxml/apihelpers.pxi", line 19, in lxml.etree._assertValidNode (src/lxml/lxml.etree.c:16482) AssertionError: invalid Element proxy at 139862544625976 ``` #### Python 3.6, Scrapy 1.3.3 (https://github.com/scrapinghub/scrapinghub-stack-scrapy/tree/branch-1.3-py3) ``` root@43e690443ca7:/# scrapy version -v Scrapy : 1.3.3 lxml : 3.7.2.0 libxml2 : 2.9.3 cssselect : 1.0.1 parsel : 1.1.0 w3lib : 1.17.0 Twisted : 16.6.0 Python : 3.6.4 (default, Dec 21 2017, 01:35:12) - [GCC 4.9.2] pyOpenSSL : 16.2.0 (OpenSSL 1.0.1t 3 May 2016) Platform : Linux-4.9.44-linuxkit-aufs-x86_64-with-debian-8.10 root@43e690443ca7:/# scrapy shell "http://example.org" 2017-12-29 16:54:49 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapybot) (...) >>> from six.moves import cPickle as pickle >>> s2 = pickle.loads(pickle.dumps(response.selector, protocol=2)) Traceback (most recent call last): File "<console>", line 1, in <module> TypeError: can't pickle HtmlElement objects ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `scrapy/squeues.py` Content: ``` 1 """ 2 Scheduler queues 3 """ 4 5 import marshal 6 from six.moves import cPickle as pickle 7 8 from queuelib import queue 9 10 def _serializable_queue(queue_class, serialize, deserialize): 11 12 class SerializableQueue(queue_class): 13 14 def push(self, obj): 15 s = serialize(obj) 16 super(SerializableQueue, self).push(s) 17 18 def pop(self): 19 s = super(SerializableQueue, self).pop() 20 if s: 21 return deserialize(s) 22 23 return SerializableQueue 24 25 def _pickle_serialize(obj): 26 try: 27 return pickle.dumps(obj, protocol=2) 28 # Python>=3.5 raises AttributeError here while 29 # Python<=3.4 raises pickle.PicklingError 30 except (pickle.PicklingError, AttributeError) as e: 31 raise ValueError(str(e)) 32 33 PickleFifoDiskQueue = _serializable_queue(queue.FifoDiskQueue, \ 34 _pickle_serialize, pickle.loads) 35 PickleLifoDiskQueue = _serializable_queue(queue.LifoDiskQueue, \ 36 _pickle_serialize, pickle.loads) 37 MarshalFifoDiskQueue = _serializable_queue(queue.FifoDiskQueue, \ 38 marshal.dumps, marshal.loads) 39 MarshalLifoDiskQueue = _serializable_queue(queue.LifoDiskQueue, \ 40 marshal.dumps, marshal.loads) 41 FifoMemoryQueue = queue.FifoMemoryQueue 42 LifoMemoryQueue = queue.LifoMemoryQueue 43 ``` Path: `setup.py` Content: ``` 1 from os.path import dirname, join 2 from pkg_resources import parse_version 3 from setuptools import setup, find_packages, __version__ as setuptools_version 4 5 6 with open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f: 7 version = f.read().decode('ascii').strip() 8 9 10 def has_environment_marker_platform_impl_support(): 11 """Code extracted from 'pytest/setup.py' 12 https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31 13 14 The first known release to support environment marker with range operators 15 it is 18.5, see: 16 https://setuptools.readthedocs.io/en/latest/history.html#id235 17 """ 18 return parse_version(setuptools_version) >= parse_version('18.5') 19 20 21 extras_require = {} 22 23 if has_environment_marker_platform_impl_support(): 24 extras_require[':platform_python_implementation == "PyPy"'] = [ 25 'PyPyDispatcher>=2.1.0', 26 ] 27 28 29 setup( 30 name='Scrapy', 31 version=version, 32 url='https://scrapy.org', 33 description='A high-level Web Crawling and Web Scraping framework', 34 long_description=open('README.rst').read(), 35 author='Scrapy developers', 36 maintainer='Pablo Hoffman', 37 maintainer_email='[email protected]', 38 license='BSD', 39 packages=find_packages(exclude=('tests', 'tests.*')), 40 include_package_data=True, 41 zip_safe=False, 42 entry_points={ 43 'console_scripts': ['scrapy = scrapy.cmdline:execute'] 44 }, 45 classifiers=[ 46 'Framework :: Scrapy', 47 'Development Status :: 5 - Production/Stable', 48 'Environment :: Console', 49 'Intended Audience :: Developers', 50 'License :: OSI Approved :: BSD License', 51 'Operating System :: OS Independent', 52 'Programming Language :: Python', 53 'Programming Language :: Python :: 2', 54 'Programming Language :: Python :: 2.7', 55 'Programming Language :: Python :: 3', 56 'Programming Language :: Python :: 3.4', 57 'Programming Language :: Python :: 3.5', 58 'Programming Language :: Python :: 3.6', 59 'Programming Language :: Python :: Implementation :: CPython', 60 'Programming Language :: Python :: Implementation :: PyPy', 61 'Topic :: Internet :: WWW/HTTP', 62 'Topic :: Software Development :: Libraries :: Application Frameworks', 63 'Topic :: Software Development :: Libraries :: Python Modules', 64 ], 65 
python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*', 66 install_requires=[ 67 'Twisted>=13.1.0', 68 'w3lib>=1.17.0', 69 'queuelib', 70 'lxml', 71 'pyOpenSSL', 72 'cssselect>=0.9', 73 'six>=1.5.2', 74 'parsel>=1.1', 75 'PyDispatcher>=2.0.5', 76 'service_identity', 77 ], 78 extras_require=extras_require, 79 ) 80 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/scrapy/squeues.py b/scrapy/squeues.py --- a/scrapy/squeues.py +++ b/scrapy/squeues.py @@ -25,9 +25,10 @@ def _pickle_serialize(obj): try: return pickle.dumps(obj, protocol=2) - # Python>=3.5 raises AttributeError here while - # Python<=3.4 raises pickle.PicklingError - except (pickle.PicklingError, AttributeError) as e: + # Python <= 3.4 raises pickle.PicklingError here while + # 3.5 <= Python < 3.6 raises AttributeError and + # Python >= 3.6 raises TypeError + except (pickle.PicklingError, AttributeError, TypeError) as e: raise ValueError(str(e)) PickleFifoDiskQueue = _serializable_queue(queue.FifoDiskQueue, \ diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -71,7 +71,7 @@ 'pyOpenSSL', 'cssselect>=0.9', 'six>=1.5.2', - 'parsel>=1.1', + 'parsel>=1.4', 'PyDispatcher>=2.0.5', 'service_identity', ],
{"golden_diff": "diff --git a/scrapy/squeues.py b/scrapy/squeues.py\n--- a/scrapy/squeues.py\n+++ b/scrapy/squeues.py\n@@ -25,9 +25,10 @@\n def _pickle_serialize(obj):\n try:\n return pickle.dumps(obj, protocol=2)\n- # Python>=3.5 raises AttributeError here while\n- # Python<=3.4 raises pickle.PicklingError\n- except (pickle.PicklingError, AttributeError) as e:\n+ # Python <= 3.4 raises pickle.PicklingError here while\n+ # 3.5 <= Python < 3.6 raises AttributeError and\n+ # Python >= 3.6 raises TypeError\n+ except (pickle.PicklingError, AttributeError, TypeError) as e:\n raise ValueError(str(e))\n \n PickleFifoDiskQueue = _serializable_queue(queue.FifoDiskQueue, \\\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -71,7 +71,7 @@\n 'pyOpenSSL',\n 'cssselect>=0.9',\n 'six>=1.5.2',\n- 'parsel>=1.1',\n+ 'parsel>=1.4',\n 'PyDispatcher>=2.0.5',\n 'service_identity',\n ],\n", "issue": "Request serialization should fail for non-picklable objects\nThe Pickle-based disk queues silently serialize requests that shouldn't be serialized in Python<=3.5. I found this problem when dumping a request with an `ItemLoader` object in its `meta` dict. Python 3.6 fails in [this line](https://github.com/scrapy/scrapy/blob/1.4/scrapy/squeues.py#L27) with `TypeError: can't pickle HtmlElement objects`, because the loader contains a `Selector`, which in turns contains an `HtmlElement` object.\r\n\r\nI tested this using the https://github.com/scrapinghub/scrapinghub-stack-scrapy repository, and found that `pickle.loads(pickle.dumps(selector))` doesn't fail, but generates a broken object.\r\n\r\n#### Python 2.7, Scrapy 1.3.3 (https://github.com/scrapinghub/scrapinghub-stack-scrapy/tree/branch-1.3)\r\n```\r\nroot@04bfc6cf84cd:/# scrapy version -v\r\nScrapy : 1.3.3\r\nlxml : 3.7.2.0\r\nlibxml2 : 2.9.3\r\ncssselect : 1.0.1\r\nparsel : 1.1.0\r\nw3lib : 1.17.0\r\nTwisted : 16.6.0\r\nPython : 2.7.14 (default, Dec 12 2017, 16:55:09) - [GCC 4.9.2]\r\npyOpenSSL : 16.2.0 (OpenSSL 1.0.1t 3 May 2016)\r\nPlatform : Linux-4.9.44-linuxkit-aufs-x86_64-with-debian-8.10\r\nroot@04bfc6cf84cd:/# scrapy shell \"http://example.org\"\r\n2017-12-29 16:49:27 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapybot)\r\n(...)\r\n>>> from six.moves import cPickle as pickle\r\n>>> s2 = pickle.loads(pickle.dumps(response.selector, protocol=2))\r\n>>> response.selector.css('a')\r\n[<Selector xpath=u'descendant-or-self::a' data=u'<a href=\"http://www.iana.org/domains/exa'>]\r\n>>> s2.css('a')\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/usr/local/lib/python2.7/site-packages/parsel/selector.py\", line 227, in css\r\n return self.xpath(self._css2xpath(query))\r\n File \"/usr/local/lib/python2.7/site-packages/parsel/selector.py\", line 203, in xpath\r\n **kwargs)\r\n File \"src/lxml/lxml.etree.pyx\", line 1584, in lxml.etree._Element.xpath (src/lxml/lxml.etree.c:59349)\r\n File \"src/lxml/xpath.pxi\", line 257, in lxml.etree.XPathElementEvaluator.__init__ (src/lxml/lxml.etree.c:170478)\r\n File \"src/lxml/apihelpers.pxi\", line 19, in lxml.etree._assertValidNode (src/lxml/lxml.etree.c:16482)\r\nAssertionError: invalid Element proxy at 140144569743064\r\n```\r\n\r\n\r\n#### Python 3.5, Scrapy 1.3.3 (https://github.com/scrapinghub/scrapinghub-stack-scrapy/tree/branch-1.3-py3)\r\n```\r\nroot@1945e2154919:/# scrapy version -v\r\nScrapy : 1.3.3\r\nlxml : 3.7.2.0\r\nlibxml2 : 2.9.3\r\ncssselect : 1.0.1\r\nparsel : 1.1.0\r\nw3lib : 1.17.0\r\nTwisted : 16.6.0\r\nPython : 3.5.4 (default, Dec 
12 2017, 16:43:39) - [GCC 4.9.2]\r\npyOpenSSL : 16.2.0 (OpenSSL 1.0.1t 3 May 2016)\r\nPlatform : Linux-4.9.44-linuxkit-aufs-x86_64-with-debian-8.10\r\nroot@1945e2154919:/# scrapy shell \"http://example.org\"\r\n2017-12-29 16:52:37 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapybot)\r\n(...)\r\n>>> from six.moves import cPickle as pickle\r\n>>> s2 = pickle.loads(pickle.dumps(response.selector, protocol=2))\r\n>>> response.selector.css('a')\r\n[<Selector xpath='descendant-or-self::a' data='<a href=\"http://www.iana.org/domains/exa'>]\r\n>>> s2.css('a')\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.5/site-packages/parsel/selector.py\", line 227, in css\r\n return self.xpath(self._css2xpath(query))\r\n File \"/usr/local/lib/python3.5/site-packages/parsel/selector.py\", line 203, in xpath\r\n **kwargs)\r\n File \"src/lxml/lxml.etree.pyx\", line 1584, in lxml.etree._Element.xpath (src/lxml/lxml.etree.c:59349)\r\n File \"src/lxml/xpath.pxi\", line 257, in lxml.etree.XPathElementEvaluator.__init__ (src/lxml/lxml.etree.c:170478)\r\n File \"src/lxml/apihelpers.pxi\", line 19, in lxml.etree._assertValidNode (src/lxml/lxml.etree.c:16482)\r\nAssertionError: invalid Element proxy at 139862544625976\r\n```\r\n\r\n\r\n#### Python 3.6, Scrapy 1.3.3 (https://github.com/scrapinghub/scrapinghub-stack-scrapy/tree/branch-1.3-py3)\r\n```\r\nroot@43e690443ca7:/# scrapy version -v\r\nScrapy : 1.3.3\r\nlxml : 3.7.2.0\r\nlibxml2 : 2.9.3\r\ncssselect : 1.0.1\r\nparsel : 1.1.0\r\nw3lib : 1.17.0\r\nTwisted : 16.6.0\r\nPython : 3.6.4 (default, Dec 21 2017, 01:35:12) - [GCC 4.9.2]\r\npyOpenSSL : 16.2.0 (OpenSSL 1.0.1t 3 May 2016)\r\nPlatform : Linux-4.9.44-linuxkit-aufs-x86_64-with-debian-8.10\r\nroot@43e690443ca7:/# scrapy shell \"http://example.org\"\r\n2017-12-29 16:54:49 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapybot)\r\n(...)\r\n>>> from six.moves import cPickle as pickle\r\n>>> s2 = pickle.loads(pickle.dumps(response.selector, protocol=2))\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\nTypeError: can't pickle HtmlElement objects\r\n```\n", "before_files": [{"content": "\"\"\"\nScheduler queues\n\"\"\"\n\nimport marshal\nfrom six.moves import cPickle as pickle\n\nfrom queuelib import queue\n\ndef _serializable_queue(queue_class, serialize, deserialize):\n\n class SerializableQueue(queue_class):\n\n def push(self, obj):\n s = serialize(obj)\n super(SerializableQueue, self).push(s)\n\n def pop(self):\n s = super(SerializableQueue, self).pop()\n if s:\n return deserialize(s)\n\n return SerializableQueue\n\ndef _pickle_serialize(obj):\n try:\n return pickle.dumps(obj, protocol=2)\n # Python>=3.5 raises AttributeError here while\n # Python<=3.4 raises pickle.PicklingError\n except (pickle.PicklingError, AttributeError) as e:\n raise ValueError(str(e))\n\nPickleFifoDiskQueue = _serializable_queue(queue.FifoDiskQueue, \\\n _pickle_serialize, pickle.loads)\nPickleLifoDiskQueue = _serializable_queue(queue.LifoDiskQueue, \\\n _pickle_serialize, pickle.loads)\nMarshalFifoDiskQueue = _serializable_queue(queue.FifoDiskQueue, \\\n marshal.dumps, marshal.loads)\nMarshalLifoDiskQueue = _serializable_queue(queue.LifoDiskQueue, \\\n marshal.dumps, marshal.loads)\nFifoMemoryQueue = queue.FifoMemoryQueue\nLifoMemoryQueue = queue.LifoMemoryQueue\n", "path": "scrapy/squeues.py"}, {"content": "from os.path import dirname, join\nfrom pkg_resources import parse_version\nfrom setuptools import setup, 
find_packages, __version__ as setuptools_version\n\n\nwith open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:\n version = f.read().decode('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\nextras_require = {}\n\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n 'PyPyDispatcher>=2.1.0',\n ]\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst').read(),\n author='Scrapy developers',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n install_requires=[\n 'Twisted>=13.1.0',\n 'w3lib>=1.17.0',\n 'queuelib',\n 'lxml',\n 'pyOpenSSL',\n 'cssselect>=0.9',\n 'six>=1.5.2',\n 'parsel>=1.1',\n 'PyDispatcher>=2.0.5',\n 'service_identity',\n ],\n extras_require=extras_require,\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"\nScheduler queues\n\"\"\"\n\nimport marshal\nfrom six.moves import cPickle as pickle\n\nfrom queuelib import queue\n\ndef _serializable_queue(queue_class, serialize, deserialize):\n\n class SerializableQueue(queue_class):\n\n def push(self, obj):\n s = serialize(obj)\n super(SerializableQueue, self).push(s)\n\n def pop(self):\n s = super(SerializableQueue, self).pop()\n if s:\n return deserialize(s)\n\n return SerializableQueue\n\ndef _pickle_serialize(obj):\n try:\n return pickle.dumps(obj, protocol=2)\n # Python <= 3.4 raises pickle.PicklingError here while\n # 3.5 <= Python < 3.6 raises AttributeError and\n # Python >= 3.6 raises TypeError\n except (pickle.PicklingError, AttributeError, TypeError) as e:\n raise ValueError(str(e))\n\nPickleFifoDiskQueue = _serializable_queue(queue.FifoDiskQueue, \\\n _pickle_serialize, pickle.loads)\nPickleLifoDiskQueue = _serializable_queue(queue.LifoDiskQueue, \\\n _pickle_serialize, pickle.loads)\nMarshalFifoDiskQueue = _serializable_queue(queue.FifoDiskQueue, \\\n marshal.dumps, marshal.loads)\nMarshalLifoDiskQueue = 
_serializable_queue(queue.LifoDiskQueue, \\\n marshal.dumps, marshal.loads)\nFifoMemoryQueue = queue.FifoMemoryQueue\nLifoMemoryQueue = queue.LifoMemoryQueue\n", "path": "scrapy/squeues.py"}, {"content": "from os.path import dirname, join\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nwith open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:\n version = f.read().decode('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\nextras_require = {}\n\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n 'PyPyDispatcher>=2.1.0',\n ]\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst').read(),\n author='Scrapy developers',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n install_requires=[\n 'Twisted>=13.1.0',\n 'w3lib>=1.17.0',\n 'queuelib',\n 'lxml',\n 'pyOpenSSL',\n 'cssselect>=0.9',\n 'six>=1.5.2',\n 'parsel>=1.4',\n 'PyDispatcher>=2.0.5',\n 'service_identity',\n ],\n extras_require=extras_require,\n)\n", "path": "setup.py"}]}
3,263
292
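The scrapy record above hinges on the fact that an unpicklable object raises a different exception type depending on the Python version, which is why the fix broadens the `except` clause. A small self-contained sketch of that conversion, using a lambda as a stand-in for an unpicklable object:

```python
import pickle


def pickle_or_value_error(obj):
    """Serialize obj, converting serialization failures into ValueError.

    Per the record: Python <= 3.4 raises pickle.PicklingError, 3.5 raises
    AttributeError and Python >= 3.6 raises TypeError for such objects.
    """
    try:
        return pickle.dumps(obj, protocol=2)
    except (pickle.PicklingError, AttributeError, TypeError) as exc:
        raise ValueError(str(exc))


try:
    pickle_or_value_error(lambda request: request)  # lambdas cannot be pickled
except ValueError as exc:
    print("rejected:", exc)
```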
gh_patches_debug_27302
rasdani/github-patches
git_diff
microsoft__ptvsd-913
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Use SIO_LOOPBACK_FAST_PATH Python 3.6 adds the SIO_LOOPBACK_FAST_PATH flag for sockets, which dramatically increases the bandwidth for loopback connections (I got >4x on my machine with the test code on the linked issue). We should enable this flag where possible in our debugger. The functionality has existed since Windows 8, but using it on earlier OSs may cause an exception. It is safe to use if the socket eventually gets a connection from another IP. Moved from https://github.com/Microsoft/PTVS/issues/1350 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ptvsd/socket.py` Content: ``` 1 # Copyright (c) Microsoft Corporation. All rights reserved. 2 # Licensed under the MIT License. See LICENSE in the project root 3 # for license information. 4 5 from __future__ import absolute_import 6 7 from collections import namedtuple 8 import contextlib 9 import errno 10 import platform 11 import socket 12 try: 13 from urllib.parse import urlparse 14 except ImportError: 15 from urlparse import urlparse 16 17 18 try: 19 ConnectionError # noqa 20 BrokenPipeError # noqa 21 ConnectionResetError # noqa 22 except NameError: 23 class BrokenPipeError(Exception): 24 # EPIPE and ESHUTDOWN 25 pass 26 27 class ConnectionResetError(Exception): 28 # ECONNRESET 29 pass 30 31 32 NOT_CONNECTED = ( 33 errno.ENOTCONN, 34 errno.EBADF, 35 ) 36 37 CLOSED = ( 38 errno.EPIPE, 39 errno.ESHUTDOWN, 40 errno.ECONNRESET, 41 # Windows 42 10038, # "An operation was attempted on something that is not a socket" 43 10058, 44 ) 45 46 EOF = NOT_CONNECTED + CLOSED 47 48 49 @contextlib.contextmanager 50 def convert_eof(): 51 """A context manager to convert some socket errors into EOFError.""" 52 try: 53 yield 54 except ConnectionResetError: 55 raise EOFError 56 except BrokenPipeError: 57 raise EOFError 58 except OSError as exc: 59 if exc.errno in EOF: 60 raise EOFError 61 raise 62 63 64 class TimeoutError(socket.timeout): 65 """A socket timeout happened.""" 66 67 68 def is_socket(sock): 69 """Return True if the object can be used as a socket.""" 70 return isinstance(sock, socket.socket) 71 72 73 def create_server(host, port, timeout=None): 74 """Return a local server socket listening on the given port.""" 75 if host is None: 76 host = 'localhost' 77 server = _new_sock() 78 server.bind((host, port)) 79 80 # _, listener_port = server.getsockname() 81 # print('Listening on', listener_port) 82 83 if timeout is not None: 84 server.settimeout(timeout) 85 server.listen(1) 86 return server 87 88 89 def create_client(): 90 """Return a client socket that may be connected to a remote address.""" 91 return _new_sock() 92 93 94 def _new_sock(): 95 sock = socket.socket(socket.AF_INET, 96 socket.SOCK_STREAM, 97 socket.IPPROTO_TCP) 98 if platform.system() == 'Windows': 99 sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) 100 else: 101 sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 102 return sock 103 104 105 @contextlib.contextmanager 106 def ignored_errno(*ignored): 107 """A context manager that ignores the given errnos.""" 108 try: 109 yield 110 except OSError as exc: 111 if exc.errno not in ignored: 112 raise 113 114 115 class KeepAlive(namedtuple('KeepAlive', 'interval idle maxfails')): 116 """TCP keep-alive settings.""" 117 118 INTERVAL = 3 # seconds 119 IDLE = 1 # seconds after idle 120 MAX_FAILS = 5 121 122 
@classmethod 123 def from_raw(cls, raw): 124 """Return the corresponding KeepAlive.""" 125 if raw is None: 126 return None 127 elif isinstance(raw, cls): 128 return raw 129 elif isinstance(raw, (str, int, float)): 130 return cls(raw) 131 else: 132 try: 133 raw = dict(raw) 134 except TypeError: 135 return cls(*raw) 136 else: 137 return cls(**raw) 138 139 def __new__(cls, interval=None, idle=None, maxfails=None): 140 self = super(KeepAlive, cls).__new__( 141 cls, 142 float(interval) if interval or interval == 0 else cls.INTERVAL, 143 float(idle) if idle or idle == 0 else cls.IDLE, 144 float(maxfails) if maxfails or maxfails == 0 else cls.MAX_FAILS, 145 ) 146 return self 147 148 def apply(self, sock): 149 """Set the keepalive values on the socket.""" 150 sock.setsockopt(socket.SOL_SOCKET, 151 socket.SO_KEEPALIVE, 152 1) 153 interval = self.interval 154 idle = self.idle 155 maxfails = self.maxfails 156 try: 157 if interval > 0: 158 sock.setsockopt(socket.IPPROTO_TCP, 159 socket.TCP_KEEPINTVL, 160 interval) 161 if idle > 0: 162 sock.setsockopt(socket.IPPROTO_TCP, 163 socket.TCP_KEEPIDLE, 164 idle) 165 if maxfails >= 0: 166 sock.setsockopt(socket.IPPROTO_TCP, 167 socket.TCP_KEEPCNT, 168 maxfails) 169 except AttributeError: 170 # mostly linux-only 171 pass 172 173 174 def connect(sock, addr, keepalive=None): 175 """Return the client socket for the next connection.""" 176 if addr is None: 177 if keepalive is None or keepalive is True: 178 keepalive = KeepAlive() 179 elif keepalive: 180 keepalive = KeepAlive.from_raw(keepalive) 181 client, _ = sock.accept() 182 if keepalive: 183 keepalive.apply(client) 184 return client 185 else: 186 if keepalive: 187 raise NotImplementedError 188 sock.connect(addr) 189 return sock 190 191 192 def shut_down(sock, how=socket.SHUT_RDWR, ignored=NOT_CONNECTED): 193 """Shut down the given socket.""" 194 with ignored_errno(*ignored or ()): 195 sock.shutdown(how) 196 197 198 def close_socket(sock): 199 """Shutdown and close the socket.""" 200 try: 201 shut_down(sock) 202 except Exception: 203 # TODO: Log errors? 
204 pass 205 sock.close() 206 207 208 class Address(namedtuple('Address', 'host port')): 209 """An IP address to use for sockets.""" 210 211 @classmethod 212 def from_raw(cls, raw, defaultport=None): 213 """Return an address corresponding to the given data.""" 214 if isinstance(raw, cls): 215 return raw 216 elif isinstance(raw, int): 217 return cls(None, raw) 218 elif isinstance(raw, str): 219 if raw == '': 220 return cls('', defaultport) 221 parsed = urlparse(raw) 222 if not parsed.netloc: 223 if parsed.scheme: 224 raise ValueError('invalid address {!r}'.format(raw)) 225 return cls.from_raw('x://' + raw, defaultport=defaultport) 226 return cls( 227 parsed.hostname or '', 228 parsed.port if parsed.port else defaultport, 229 ) 230 elif not raw: 231 return cls(None, defaultport) 232 else: 233 try: 234 kwargs = dict(**raw) 235 except TypeError: 236 return cls(*raw) 237 else: 238 kwargs.setdefault('host', None) 239 kwargs.setdefault('port', defaultport) 240 return cls(**kwargs) 241 242 @classmethod 243 def as_server(cls, host, port): 244 """Return an address to use as a server address.""" 245 return cls(host, port, isserver=True) 246 247 @classmethod 248 def as_client(cls, host, port): 249 """Return an address to use as a server address.""" 250 return cls(host, port, isserver=False) 251 252 def __new__(cls, host, port, **kwargs): 253 if host == '*': 254 host = '' 255 isserver = kwargs.pop('isserver', None) 256 if isserver is None: 257 isserver = (host is None or host == '') 258 else: 259 isserver = bool(isserver) 260 if host is None: 261 host = 'localhost' 262 self = super(Address, cls).__new__( 263 cls, 264 str(host), 265 int(port) if port is not None else None, 266 **kwargs 267 ) 268 self._isserver = isserver 269 return self 270 271 def __init__(self, *args, **kwargs): 272 if self.port is None: 273 raise TypeError('missing port') 274 if self.port < 0 or self.port > 65535: 275 raise ValueError('port must be non-negative int < 65535') 276 277 def __repr__(self): 278 orig = super(Address, self).__repr__() 279 return '{}, isserver={})'.format(orig[:-1], self._isserver) 280 281 def __eq__(self, other): 282 if not super(Address, self).__eq__(other): 283 return False 284 try: 285 other = self.from_raw(other) 286 except Exception: 287 return False 288 return self._isserver == other._isserver 289 290 @property 291 def isserver(self): 292 return self._isserver 293 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ptvsd/socket.py b/ptvsd/socket.py --- a/ptvsd/socket.py +++ b/ptvsd/socket.py @@ -75,14 +75,14 @@ if host is None: host = 'localhost' server = _new_sock() - server.bind((host, port)) - - # _, listener_port = server.getsockname() - # print('Listening on', listener_port) - - if timeout is not None: - server.settimeout(timeout) - server.listen(1) + try: + server.bind((host, port)) + if timeout is not None: + server.settimeout(timeout) + server.listen(1) + except Exception: + server.close() + raise return server @@ -96,6 +96,15 @@ socket.SOCK_STREAM, socket.IPPROTO_TCP) if platform.system() == 'Windows': + try: + sock.ioctl(socket.SIO_LOOPBACK_FAST_PATH, True) + except AttributeError: + pass # Not supported in python 2.* or <3.6 + except OSError as ose: + if ose.winerror == 10045: # Not supported by OS + pass + else: + raise sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) else: sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) @@ -200,7 +209,6 @@ try: shut_down(sock) except Exception: - # TODO: Log errors? pass sock.close()
{"golden_diff": "diff --git a/ptvsd/socket.py b/ptvsd/socket.py\n--- a/ptvsd/socket.py\n+++ b/ptvsd/socket.py\n@@ -75,14 +75,14 @@\n if host is None:\n host = 'localhost'\n server = _new_sock()\n- server.bind((host, port))\n-\n- # _, listener_port = server.getsockname()\n- # print('Listening on', listener_port)\n-\n- if timeout is not None:\n- server.settimeout(timeout)\n- server.listen(1)\n+ try:\n+ server.bind((host, port))\n+ if timeout is not None:\n+ server.settimeout(timeout)\n+ server.listen(1)\n+ except Exception:\n+ server.close()\n+ raise\n return server\n \n \n@@ -96,6 +96,15 @@\n socket.SOCK_STREAM,\n socket.IPPROTO_TCP)\n if platform.system() == 'Windows':\n+ try:\n+ sock.ioctl(socket.SIO_LOOPBACK_FAST_PATH, True)\n+ except AttributeError:\n+ pass # Not supported in python 2.* or <3.6\n+ except OSError as ose:\n+ if ose.winerror == 10045: # Not supported by OS\n+ pass\n+ else:\n+ raise\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)\n else:\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n@@ -200,7 +209,6 @@\n try:\n shut_down(sock)\n except Exception:\n- # TODO: Log errors?\n pass\n sock.close()\n", "issue": "Use SIO_LOOPBACK_FAST_PATH\nPython 3.6 adds the SIO_LOOPBACK_FAST_PATH flag for sockets, which dramatically increases the bandwidth for loopback connections (I got >4x on my machine with the test code on the linked issue).\r\n\r\nWe should enable this flag where possible in our debugger. The functionality has existed since Windows 8, but using it on earlier OSs may cause an exception. It is safe to use if the socket eventually gets a connection from another IP.\r\n\r\nMoved from https://github.com/Microsoft/PTVS/issues/1350\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See LICENSE in the project root\n# for license information.\n\nfrom __future__ import absolute_import\n\nfrom collections import namedtuple\nimport contextlib\nimport errno\nimport platform\nimport socket\ntry:\n from urllib.parse import urlparse\nexcept ImportError:\n from urlparse import urlparse\n\n\ntry:\n ConnectionError # noqa\n BrokenPipeError # noqa\n ConnectionResetError # noqa\nexcept NameError:\n class BrokenPipeError(Exception):\n # EPIPE and ESHUTDOWN\n pass\n\n class ConnectionResetError(Exception):\n # ECONNRESET\n pass\n\n\nNOT_CONNECTED = (\n errno.ENOTCONN,\n errno.EBADF,\n)\n\nCLOSED = (\n errno.EPIPE,\n errno.ESHUTDOWN,\n errno.ECONNRESET,\n # Windows\n 10038, # \"An operation was attempted on something that is not a socket\"\n 10058,\n)\n\nEOF = NOT_CONNECTED + CLOSED\n\n\[email protected]\ndef convert_eof():\n \"\"\"A context manager to convert some socket errors into EOFError.\"\"\"\n try:\n yield\n except ConnectionResetError:\n raise EOFError\n except BrokenPipeError:\n raise EOFError\n except OSError as exc:\n if exc.errno in EOF:\n raise EOFError\n raise\n\n\nclass TimeoutError(socket.timeout):\n \"\"\"A socket timeout happened.\"\"\"\n\n\ndef is_socket(sock):\n \"\"\"Return True if the object can be used as a socket.\"\"\"\n return isinstance(sock, socket.socket)\n\n\ndef create_server(host, port, timeout=None):\n \"\"\"Return a local server socket listening on the given port.\"\"\"\n if host is None:\n host = 'localhost'\n server = _new_sock()\n server.bind((host, port))\n\n # _, listener_port = server.getsockname()\n # print('Listening on', listener_port)\n\n if timeout is not None:\n server.settimeout(timeout)\n server.listen(1)\n return server\n\n\ndef create_client():\n \"\"\"Return a client socket that may be connected to a remote address.\"\"\"\n return _new_sock()\n\n\ndef _new_sock():\n sock = socket.socket(socket.AF_INET,\n socket.SOCK_STREAM,\n socket.IPPROTO_TCP)\n if platform.system() == 'Windows':\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)\n else:\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n return sock\n\n\[email protected]\ndef ignored_errno(*ignored):\n \"\"\"A context manager that ignores the given errnos.\"\"\"\n try:\n yield\n except OSError as exc:\n if exc.errno not in ignored:\n raise\n\n\nclass KeepAlive(namedtuple('KeepAlive', 'interval idle maxfails')):\n \"\"\"TCP keep-alive settings.\"\"\"\n\n INTERVAL = 3 # seconds\n IDLE = 1 # seconds after idle\n MAX_FAILS = 5\n\n @classmethod\n def from_raw(cls, raw):\n \"\"\"Return the corresponding KeepAlive.\"\"\"\n if raw is None:\n return None\n elif isinstance(raw, cls):\n return raw\n elif isinstance(raw, (str, int, float)):\n return cls(raw)\n else:\n try:\n raw = dict(raw)\n except TypeError:\n return cls(*raw)\n else:\n return cls(**raw)\n\n def __new__(cls, interval=None, idle=None, maxfails=None):\n self = super(KeepAlive, cls).__new__(\n cls,\n float(interval) if interval or interval == 0 else cls.INTERVAL,\n float(idle) if idle or idle == 0 else cls.IDLE,\n float(maxfails) if maxfails or maxfails == 0 else cls.MAX_FAILS,\n )\n return self\n\n def apply(self, sock):\n \"\"\"Set the keepalive values on the socket.\"\"\"\n sock.setsockopt(socket.SOL_SOCKET,\n socket.SO_KEEPALIVE,\n 1)\n interval = self.interval\n idle = self.idle\n maxfails = self.maxfails\n try:\n if interval > 0:\n sock.setsockopt(socket.IPPROTO_TCP,\n socket.TCP_KEEPINTVL,\n interval)\n if idle > 0:\n sock.setsockopt(socket.IPPROTO_TCP,\n socket.TCP_KEEPIDLE,\n idle)\n if 
maxfails >= 0:\n sock.setsockopt(socket.IPPROTO_TCP,\n socket.TCP_KEEPCNT,\n maxfails)\n except AttributeError:\n # mostly linux-only\n pass\n\n\ndef connect(sock, addr, keepalive=None):\n \"\"\"Return the client socket for the next connection.\"\"\"\n if addr is None:\n if keepalive is None or keepalive is True:\n keepalive = KeepAlive()\n elif keepalive:\n keepalive = KeepAlive.from_raw(keepalive)\n client, _ = sock.accept()\n if keepalive:\n keepalive.apply(client)\n return client\n else:\n if keepalive:\n raise NotImplementedError\n sock.connect(addr)\n return sock\n\n\ndef shut_down(sock, how=socket.SHUT_RDWR, ignored=NOT_CONNECTED):\n \"\"\"Shut down the given socket.\"\"\"\n with ignored_errno(*ignored or ()):\n sock.shutdown(how)\n\n\ndef close_socket(sock):\n \"\"\"Shutdown and close the socket.\"\"\"\n try:\n shut_down(sock)\n except Exception:\n # TODO: Log errors?\n pass\n sock.close()\n\n\nclass Address(namedtuple('Address', 'host port')):\n \"\"\"An IP address to use for sockets.\"\"\"\n\n @classmethod\n def from_raw(cls, raw, defaultport=None):\n \"\"\"Return an address corresponding to the given data.\"\"\"\n if isinstance(raw, cls):\n return raw\n elif isinstance(raw, int):\n return cls(None, raw)\n elif isinstance(raw, str):\n if raw == '':\n return cls('', defaultport)\n parsed = urlparse(raw)\n if not parsed.netloc:\n if parsed.scheme:\n raise ValueError('invalid address {!r}'.format(raw))\n return cls.from_raw('x://' + raw, defaultport=defaultport)\n return cls(\n parsed.hostname or '',\n parsed.port if parsed.port else defaultport,\n )\n elif not raw:\n return cls(None, defaultport)\n else:\n try:\n kwargs = dict(**raw)\n except TypeError:\n return cls(*raw)\n else:\n kwargs.setdefault('host', None)\n kwargs.setdefault('port', defaultport)\n return cls(**kwargs)\n\n @classmethod\n def as_server(cls, host, port):\n \"\"\"Return an address to use as a server address.\"\"\"\n return cls(host, port, isserver=True)\n\n @classmethod\n def as_client(cls, host, port):\n \"\"\"Return an address to use as a server address.\"\"\"\n return cls(host, port, isserver=False)\n\n def __new__(cls, host, port, **kwargs):\n if host == '*':\n host = ''\n isserver = kwargs.pop('isserver', None)\n if isserver is None:\n isserver = (host is None or host == '')\n else:\n isserver = bool(isserver)\n if host is None:\n host = 'localhost'\n self = super(Address, cls).__new__(\n cls,\n str(host),\n int(port) if port is not None else None,\n **kwargs\n )\n self._isserver = isserver\n return self\n\n def __init__(self, *args, **kwargs):\n if self.port is None:\n raise TypeError('missing port')\n if self.port < 0 or self.port > 65535:\n raise ValueError('port must be non-negative int < 65535')\n\n def __repr__(self):\n orig = super(Address, self).__repr__()\n return '{}, isserver={})'.format(orig[:-1], self._isserver)\n\n def __eq__(self, other):\n if not super(Address, self).__eq__(other):\n return False\n try:\n other = self.from_raw(other)\n except Exception:\n return False\n return self._isserver == other._isserver\n\n @property\n def isserver(self):\n return self._isserver\n", "path": "ptvsd/socket.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See LICENSE in the project root\n# for license information.\n\nfrom __future__ import absolute_import\n\nfrom collections import namedtuple\nimport contextlib\nimport errno\nimport platform\nimport socket\ntry:\n from urllib.parse import urlparse\nexcept ImportError:\n from urlparse import urlparse\n\n\ntry:\n ConnectionError # noqa\n BrokenPipeError # noqa\n ConnectionResetError # noqa\nexcept NameError:\n class BrokenPipeError(Exception):\n # EPIPE and ESHUTDOWN\n pass\n\n class ConnectionResetError(Exception):\n # ECONNRESET\n pass\n\n\nNOT_CONNECTED = (\n errno.ENOTCONN,\n errno.EBADF,\n)\n\nCLOSED = (\n errno.EPIPE,\n errno.ESHUTDOWN,\n errno.ECONNRESET,\n # Windows\n 10038, # \"An operation was attempted on something that is not a socket\"\n 10058,\n)\n\nEOF = NOT_CONNECTED + CLOSED\n\n\[email protected]\ndef convert_eof():\n \"\"\"A context manager to convert some socket errors into EOFError.\"\"\"\n try:\n yield\n except ConnectionResetError:\n raise EOFError\n except BrokenPipeError:\n raise EOFError\n except OSError as exc:\n if exc.errno in EOF:\n raise EOFError\n raise\n\n\nclass TimeoutError(socket.timeout):\n \"\"\"A socket timeout happened.\"\"\"\n\n\ndef is_socket(sock):\n \"\"\"Return True if the object can be used as a socket.\"\"\"\n return isinstance(sock, socket.socket)\n\n\ndef create_server(host, port, timeout=None):\n \"\"\"Return a local server socket listening on the given port.\"\"\"\n if host is None:\n host = 'localhost'\n server = _new_sock()\n try:\n server.bind((host, port))\n if timeout is not None:\n server.settimeout(timeout)\n server.listen(1)\n except Exception:\n server.close()\n raise\n return server\n\n\ndef create_client():\n \"\"\"Return a client socket that may be connected to a remote address.\"\"\"\n return _new_sock()\n\n\ndef _new_sock():\n sock = socket.socket(socket.AF_INET,\n socket.SOCK_STREAM,\n socket.IPPROTO_TCP)\n if platform.system() == 'Windows':\n try:\n sock.ioctl(socket.SIO_LOOPBACK_FAST_PATH, True)\n except AttributeError:\n pass # Not supported in python 2.* or <3.6\n except OSError as ose:\n if ose.winerror == 10045: # Not supported by OS\n pass\n else:\n raise\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)\n else:\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n return sock\n\n\[email protected]\ndef ignored_errno(*ignored):\n \"\"\"A context manager that ignores the given errnos.\"\"\"\n try:\n yield\n except OSError as exc:\n if exc.errno not in ignored:\n raise\n\n\nclass KeepAlive(namedtuple('KeepAlive', 'interval idle maxfails')):\n \"\"\"TCP keep-alive settings.\"\"\"\n\n INTERVAL = 3 # seconds\n IDLE = 1 # seconds after idle\n MAX_FAILS = 5\n\n @classmethod\n def from_raw(cls, raw):\n \"\"\"Return the corresponding KeepAlive.\"\"\"\n if raw is None:\n return None\n elif isinstance(raw, cls):\n return raw\n elif isinstance(raw, (str, int, float)):\n return cls(raw)\n else:\n try:\n raw = dict(raw)\n except TypeError:\n return cls(*raw)\n else:\n return cls(**raw)\n\n def __new__(cls, interval=None, idle=None, maxfails=None):\n self = super(KeepAlive, cls).__new__(\n cls,\n float(interval) if interval or interval == 0 else cls.INTERVAL,\n float(idle) if idle or idle == 0 else cls.IDLE,\n float(maxfails) if maxfails or maxfails == 0 else cls.MAX_FAILS,\n )\n return self\n\n def apply(self, sock):\n \"\"\"Set the keepalive values on the socket.\"\"\"\n sock.setsockopt(socket.SOL_SOCKET,\n socket.SO_KEEPALIVE,\n 1)\n interval = self.interval\n idle = self.idle\n maxfails = self.maxfails\n 
try:\n if interval > 0:\n sock.setsockopt(socket.IPPROTO_TCP,\n socket.TCP_KEEPINTVL,\n interval)\n if idle > 0:\n sock.setsockopt(socket.IPPROTO_TCP,\n socket.TCP_KEEPIDLE,\n idle)\n if maxfails >= 0:\n sock.setsockopt(socket.IPPROTO_TCP,\n socket.TCP_KEEPCNT,\n maxfails)\n except AttributeError:\n # mostly linux-only\n pass\n\n\ndef connect(sock, addr, keepalive=None):\n \"\"\"Return the client socket for the next connection.\"\"\"\n if addr is None:\n if keepalive is None or keepalive is True:\n keepalive = KeepAlive()\n elif keepalive:\n keepalive = KeepAlive.from_raw(keepalive)\n client, _ = sock.accept()\n if keepalive:\n keepalive.apply(client)\n return client\n else:\n if keepalive:\n raise NotImplementedError\n sock.connect(addr)\n return sock\n\n\ndef shut_down(sock, how=socket.SHUT_RDWR, ignored=NOT_CONNECTED):\n \"\"\"Shut down the given socket.\"\"\"\n with ignored_errno(*ignored or ()):\n sock.shutdown(how)\n\n\ndef close_socket(sock):\n \"\"\"Shutdown and close the socket.\"\"\"\n try:\n shut_down(sock)\n except Exception:\n pass\n sock.close()\n\n\nclass Address(namedtuple('Address', 'host port')):\n \"\"\"An IP address to use for sockets.\"\"\"\n\n @classmethod\n def from_raw(cls, raw, defaultport=None):\n \"\"\"Return an address corresponding to the given data.\"\"\"\n if isinstance(raw, cls):\n return raw\n elif isinstance(raw, int):\n return cls(None, raw)\n elif isinstance(raw, str):\n if raw == '':\n return cls('', defaultport)\n parsed = urlparse(raw)\n if not parsed.netloc:\n if parsed.scheme:\n raise ValueError('invalid address {!r}'.format(raw))\n return cls.from_raw('x://' + raw, defaultport=defaultport)\n return cls(\n parsed.hostname or '',\n parsed.port if parsed.port else defaultport,\n )\n elif not raw:\n return cls(None, defaultport)\n else:\n try:\n kwargs = dict(**raw)\n except TypeError:\n return cls(*raw)\n else:\n kwargs.setdefault('host', None)\n kwargs.setdefault('port', defaultport)\n return cls(**kwargs)\n\n @classmethod\n def as_server(cls, host, port):\n \"\"\"Return an address to use as a server address.\"\"\"\n return cls(host, port, isserver=True)\n\n @classmethod\n def as_client(cls, host, port):\n \"\"\"Return an address to use as a server address.\"\"\"\n return cls(host, port, isserver=False)\n\n def __new__(cls, host, port, **kwargs):\n if host == '*':\n host = ''\n isserver = kwargs.pop('isserver', None)\n if isserver is None:\n isserver = (host is None or host == '')\n else:\n isserver = bool(isserver)\n if host is None:\n host = 'localhost'\n self = super(Address, cls).__new__(\n cls,\n str(host),\n int(port) if port is not None else None,\n **kwargs\n )\n self._isserver = isserver\n return self\n\n def __init__(self, *args, **kwargs):\n if self.port is None:\n raise TypeError('missing port')\n if self.port < 0 or self.port > 65535:\n raise ValueError('port must be non-negative int < 65535')\n\n def __repr__(self):\n orig = super(Address, self).__repr__()\n return '{}, isserver={})'.format(orig[:-1], self._isserver)\n\n def __eq__(self, other):\n if not super(Address, self).__eq__(other):\n return False\n try:\n other = self.from_raw(other)\n except Exception:\n return False\n return self._isserver == other._isserver\n\n @property\n def isserver(self):\n return self._isserver\n", "path": "ptvsd/socket.py"}]}
2,949
377
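The ptvsd record's fix enables the Windows-only loopback fast path while guarding against the errors described in the issue. A minimal sketch of that opt-in, safe to call on any platform:

```python
import platform
import socket


def new_tcp_socket():
    """Create a TCP socket, opting in to the Windows loopback fast path when possible."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
    if platform.system() == 'Windows':
        try:
            # socket.SIO_LOOPBACK_FAST_PATH exists on Python >= 3.6 (Windows 8+ OS feature).
            sock.ioctl(socket.SIO_LOOPBACK_FAST_PATH, True)
        except AttributeError:
            pass  # Python 2.* or < 3.6: the constant is not defined
        except OSError as exc:
            if getattr(exc, 'winerror', None) != 10045:  # 10045: not supported by the OS
                raise
    return sock
```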
gh_patches_debug_20848
rasdani/github-patches
git_diff
openfun__marsha-2414
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- (deposit) support apostrophe in filenames ## Bug Report **Problematic Behavior** Loading a file with a typographic apostrophe (or curly quote) in the filename generates an error during the download process ``` xml <Error> <Code>InvalidArgument</Code> <Message>Header value cannot be represented using ISO-8859-1.</Message> <ArgumentName>response-content-disposition</ArgumentName> <ArgumentValue>attachment; filename=Capture d’écran du 2023-07-05 17-55-04.png</ArgumentValue> <RequestId>7576DY31G7YEKEHW</RequestId> <HostId>20bDgURy8zglt3nW4UJbIbH0G5+c+DOmSeRKq/K2YNwP6rZMMHzS4h0Af91vioEWtbJsRurDzP0=</HostId> </Error> ``` **Steps to Reproduce** 1. Add a Marsha Deposit XBlock on Studio 2. On student view, upload a file with a typographic apostrophe on its name 3. Try to download the file **Environment** - Marsha version: production - Platform: Open edX **Possible Solution** Use a more extensive character set like UTF-8? Otherwise, escape apostrophes in filenames or replace it by right single quotation marks? **Additional context/Screenshots** - OK : https://www.compart.com/fr/unicode/U+0027 - KO : https://www.compart.com/fr/unicode/U+2019 [deposit-typographic-apostrophe-bug.webm](https://github.com/openfun/marsha/assets/23249365/ee43dd63-9b1a-43a7-912c-6ed84f07404c) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/backend/marsha/deposit/serializers.py` Content: ``` 1 """Structure of deposit related models API responses with Django Rest Framework serializers.""" 2 import mimetypes 3 from os.path import splitext 4 from urllib.parse import quote_plus 5 6 from django.conf import settings 7 from django.urls import reverse 8 9 from rest_framework import serializers 10 11 from marsha.core.serializers import ( 12 BaseInitiateUploadSerializer, 13 UploadableFileWithExtensionSerializerMixin, 14 get_resource_cloudfront_url_params, 15 ) 16 from marsha.core.serializers.playlist import PlaylistLiteSerializer 17 from marsha.core.utils import cloudfront_utils, time_utils 18 19 from ..core.models import User 20 from .models import DepositedFile, FileDepository 21 22 23 class DepositedFileSerializer( 24 UploadableFileWithExtensionSerializerMixin, 25 serializers.ModelSerializer, 26 BaseInitiateUploadSerializer, 27 ): 28 """A serializer to display a deposited file.""" 29 30 class Meta: # noqa 31 model = DepositedFile 32 fields = ( 33 "filename", 34 "author_name", 35 "id", 36 "file_depository_id", 37 "read", 38 "url", 39 "uploaded_on", 40 "upload_state", 41 "size", 42 ) 43 read_only_fields = ( 44 "id", 45 "file_depository_id", 46 "url", 47 "uploaded_on", 48 "upload_state", 49 ) 50 51 # file_depository = FileDepositorySerializer(read_only=True) 52 # filename = serializers.SerializerMethodField() 53 url = serializers.SerializerMethodField() 54 # Make sure file depository UUID is converted to a string during serialization 55 file_depository_id = serializers.PrimaryKeyRelatedField( 56 read_only=True, pk_field=serializers.CharField() 57 ) 58 59 @property 60 def max_upload_file_size(self): 61 """return the deposited max file size define in the settings. 62 63 The @property decorator is used to ease the use of @override_settings 64 in tests. 
Otherwise the setting is not changed and we can't easily test 65 an upload with a size higher than the one defined in the settings 66 """ 67 return settings.DEPOSITED_FILE_SOURCE_MAX_SIZE 68 69 def create(self, validated_data): 70 """Force the file depository field to the file depository of the JWT Token if any, 71 and set the author name to the username of the JWT Token if any. 72 73 Parameters 74 ---------- 75 validated_data : dictionary 76 Dictionary of the deserialized values of each field after validation. 77 78 Returns 79 ------- 80 dictionary 81 The "validated_data" dictionary is returned after modification. 82 83 """ 84 resource = self.context["request"].resource 85 user = self.context["request"].user 86 file_depository_id = self.context["view"].get_related_filedepository_id() 87 88 if not validated_data.get("file_depository_id"): 89 validated_data["file_depository_id"] = file_depository_id 90 91 if resource: 92 validated_data["author_id"] = resource.user.get("id") 93 94 # try to get the most useful username from the token 95 if resource.user: 96 if author_name := ( 97 resource.user.get("user_fullname") or resource.user.get("username") 98 ): 99 validated_data["author_name"] = author_name 100 else: 101 validated_data["author_id"] = user.id 102 validated_data["author_name"] = User.objects.get(id=user.id).username 103 104 return super().create(validated_data) 105 106 def _get_extension_string(self, obj): 107 """Deposited file extension with the leading dot. 108 109 Parameters 110 ---------- 111 obj : Type[models.DepositedFile] 112 The deposited file that we want to serialize 113 114 Returns 115 ------- 116 String 117 The extension with the leading dot if the deposited file has an extension 118 An empty string otherwise 119 120 """ 121 return "." + obj.extension if obj.extension else "" 122 123 # def get_filename(self, obj): 124 # """Filename of the deposited file.""" 125 # return self._get_filename(obj.title) if obj.uploaded_on else None 126 127 def get_url(self, obj): 128 """Url of the DepositedFile. 129 130 Parameters 131 ---------- 132 obj : Type[models.DepositedFile] 133 The deposited file that we want to serialize 134 135 Returns 136 ------- 137 String or None 138 the url to fetch the deposited file on CloudFront 139 None if the deposited file is still not uploaded to S3 with success 140 141 """ 142 if obj.uploaded_on is None: 143 return None 144 145 base = ( 146 f"{settings.AWS_S3_URL_PROTOCOL}://{settings.CLOUDFRONT_DOMAIN}/" 147 f"{obj.file_depository.pk}/depositedfile/{obj.pk}/" 148 f"{time_utils.to_timestamp(obj.uploaded_on)}" 149 ) 150 151 url = ( 152 f"{base:s}{self._get_extension_string(obj)}?" 153 f"response-content-disposition={quote_plus('attachment; filename=' + obj.filename)}" 154 ) 155 156 if settings.CLOUDFRONT_SIGNED_URLS_ACTIVE: 157 params = get_resource_cloudfront_url_params( 158 "depositedfile", obj.file_depository_id 159 ) 160 url = cloudfront_utils.build_signed_url(url, params) 161 return url 162 163 164 class DepositedFileInitiateUploadSerializer(BaseInitiateUploadSerializer): 165 """An initiate-upload serializer dedicated to deposited file.""" 166 167 @property 168 def max_upload_file_size(self): 169 """return the deposited max file size define in the settings. 170 171 The @property decorator is used to ease the use of @override_settings 172 in tests. 
Otherwise the setting is not changed and we can't easily test 173 an upload with a size higher than the one defined in the settings 174 """ 175 return settings.DEPOSITED_FILE_SOURCE_MAX_SIZE 176 177 def validate(self, attrs): 178 """Validate if the mimetype is allowed or not.""" 179 # mimetype is provided, we directly check it 180 if attrs["mimetype"] != "": 181 attrs["extension"] = mimetypes.guess_extension(attrs["mimetype"]) 182 183 # mimetype is not provided, we have to guess it from the extension 184 else: 185 mimetypes.init() 186 extension = splitext(attrs["filename"])[1] 187 mimetype = mimetypes.types_map.get(extension) 188 # extension is added to the data in order to be used later 189 attrs["extension"] = extension 190 attrs["mimetype"] = mimetype 191 192 return attrs 193 194 195 class FileDepositorySerializer(serializers.ModelSerializer): 196 """A serializer to display a FileDepository resource.""" 197 198 class Meta: # noqa 199 model = FileDepository 200 fields = ( 201 "id", 202 "lti_id", 203 "title", 204 "description", 205 "playlist", 206 ) 207 read_only_fields = ( 208 "id", 209 "lti_id", 210 "playlist", 211 ) 212 213 playlist = PlaylistLiteSerializer(read_only=True) 214 215 216 class FileDepositorySelectLTISerializer(FileDepositorySerializer): 217 """A serializer to display a FileDepository resource for LTI select content request.""" 218 219 class Meta: # noqa 220 model = FileDepository 221 fields = ( 222 "id", 223 "lti_id", 224 "lti_url", 225 "title", 226 "description", 227 "playlist", 228 ) 229 230 lti_url = serializers.SerializerMethodField() 231 232 def get_lti_url(self, obj): 233 """LTI Url of the FileDepository. 234 235 Parameters 236 ---------- 237 obj : Type[models.Document] 238 The document that we want to serialize 239 240 Returns 241 ------- 242 String 243 the LTI url to be used by LTI consumers 244 245 """ 246 return self.context["request"].build_absolute_uri( 247 reverse("deposit:file_depository_lti_view", args=[obj.id]), 248 ) 249 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/backend/marsha/deposit/serializers.py b/src/backend/marsha/deposit/serializers.py --- a/src/backend/marsha/deposit/serializers.py +++ b/src/backend/marsha/deposit/serializers.py @@ -1,7 +1,7 @@ """Structure of deposit related models API responses with Django Rest Framework serializers.""" import mimetypes from os.path import splitext -from urllib.parse import quote_plus +from urllib.parse import quote, quote_plus from django.conf import settings from django.urls import reverse @@ -148,9 +148,12 @@ f"{time_utils.to_timestamp(obj.uploaded_on)}" ) + response_content_disposition = quote_plus( + "attachment; filename=" + quote(obj.filename) + ) url = ( f"{base:s}{self._get_extension_string(obj)}?" - f"response-content-disposition={quote_plus('attachment; filename=' + obj.filename)}" + f"response-content-disposition={response_content_disposition}" ) if settings.CLOUDFRONT_SIGNED_URLS_ACTIVE:
{"golden_diff": "diff --git a/src/backend/marsha/deposit/serializers.py b/src/backend/marsha/deposit/serializers.py\n--- a/src/backend/marsha/deposit/serializers.py\n+++ b/src/backend/marsha/deposit/serializers.py\n@@ -1,7 +1,7 @@\n \"\"\"Structure of deposit related models API responses with Django Rest Framework serializers.\"\"\"\n import mimetypes\n from os.path import splitext\n-from urllib.parse import quote_plus\n+from urllib.parse import quote, quote_plus\n \n from django.conf import settings\n from django.urls import reverse\n@@ -148,9 +148,12 @@\n f\"{time_utils.to_timestamp(obj.uploaded_on)}\"\n )\n \n+ response_content_disposition = quote_plus(\n+ \"attachment; filename=\" + quote(obj.filename)\n+ )\n url = (\n f\"{base:s}{self._get_extension_string(obj)}?\"\n- f\"response-content-disposition={quote_plus('attachment; filename=' + obj.filename)}\"\n+ f\"response-content-disposition={response_content_disposition}\"\n )\n \n if settings.CLOUDFRONT_SIGNED_URLS_ACTIVE:\n", "issue": "(deposit) support apostrophe in filenames\n## Bug Report\r\n\r\n**Problematic Behavior**\r\nLoading a file with a typographic apostrophe (or curly quote) in the filename generates an error during the download process\r\n\r\n``` xml\r\n<Error>\r\n<Code>InvalidArgument</Code>\r\n<Message>Header value cannot be represented using ISO-8859-1.</Message>\r\n<ArgumentName>response-content-disposition</ArgumentName>\r\n<ArgumentValue>attachment; filename=Capture d\u2019\u00e9cran du 2023-07-05 17-55-04.png</ArgumentValue>\r\n<RequestId>7576DY31G7YEKEHW</RequestId>\r\n<HostId>20bDgURy8zglt3nW4UJbIbH0G5+c+DOmSeRKq/K2YNwP6rZMMHzS4h0Af91vioEWtbJsRurDzP0=</HostId>\r\n</Error>\r\n```\r\n\r\n**Steps to Reproduce**\r\n1. Add a Marsha Deposit XBlock on Studio\r\n2. On student view, upload a file with a typographic apostrophe on its name \r\n3. 
Try to download the file\r\n\r\n**Environment**\r\n- Marsha version: production\r\n- Platform: Open edX\r\n\r\n**Possible Solution**\r\nUse a more extensive character set like UTF-8?\r\nOtherwise, escape apostrophes in filenames or replace it by right single quotation marks?\r\n\r\n**Additional context/Screenshots**\r\n\r\n- OK : https://www.compart.com/fr/unicode/U+0027\r\n- KO : https://www.compart.com/fr/unicode/U+2019\r\n\r\n[deposit-typographic-apostrophe-bug.webm](https://github.com/openfun/marsha/assets/23249365/ee43dd63-9b1a-43a7-912c-6ed84f07404c)\n", "before_files": [{"content": "\"\"\"Structure of deposit related models API responses with Django Rest Framework serializers.\"\"\"\nimport mimetypes\nfrom os.path import splitext\nfrom urllib.parse import quote_plus\n\nfrom django.conf import settings\nfrom django.urls import reverse\n\nfrom rest_framework import serializers\n\nfrom marsha.core.serializers import (\n BaseInitiateUploadSerializer,\n UploadableFileWithExtensionSerializerMixin,\n get_resource_cloudfront_url_params,\n)\nfrom marsha.core.serializers.playlist import PlaylistLiteSerializer\nfrom marsha.core.utils import cloudfront_utils, time_utils\n\nfrom ..core.models import User\nfrom .models import DepositedFile, FileDepository\n\n\nclass DepositedFileSerializer(\n UploadableFileWithExtensionSerializerMixin,\n serializers.ModelSerializer,\n BaseInitiateUploadSerializer,\n):\n \"\"\"A serializer to display a deposited file.\"\"\"\n\n class Meta: # noqa\n model = DepositedFile\n fields = (\n \"filename\",\n \"author_name\",\n \"id\",\n \"file_depository_id\",\n \"read\",\n \"url\",\n \"uploaded_on\",\n \"upload_state\",\n \"size\",\n )\n read_only_fields = (\n \"id\",\n \"file_depository_id\",\n \"url\",\n \"uploaded_on\",\n \"upload_state\",\n )\n\n # file_depository = FileDepositorySerializer(read_only=True)\n # filename = serializers.SerializerMethodField()\n url = serializers.SerializerMethodField()\n # Make sure file depository UUID is converted to a string during serialization\n file_depository_id = serializers.PrimaryKeyRelatedField(\n read_only=True, pk_field=serializers.CharField()\n )\n\n @property\n def max_upload_file_size(self):\n \"\"\"return the deposited max file size define in the settings.\n\n The @property decorator is used to ease the use of @override_settings\n in tests. 
Otherwise the setting is not changed and we can't easily test\n an upload with a size higher than the one defined in the settings\n \"\"\"\n return settings.DEPOSITED_FILE_SOURCE_MAX_SIZE\n\n def create(self, validated_data):\n \"\"\"Force the file depository field to the file depository of the JWT Token if any,\n and set the author name to the username of the JWT Token if any.\n\n Parameters\n ----------\n validated_data : dictionary\n Dictionary of the deserialized values of each field after validation.\n\n Returns\n -------\n dictionary\n The \"validated_data\" dictionary is returned after modification.\n\n \"\"\"\n resource = self.context[\"request\"].resource\n user = self.context[\"request\"].user\n file_depository_id = self.context[\"view\"].get_related_filedepository_id()\n\n if not validated_data.get(\"file_depository_id\"):\n validated_data[\"file_depository_id\"] = file_depository_id\n\n if resource:\n validated_data[\"author_id\"] = resource.user.get(\"id\")\n\n # try to get the most useful username from the token\n if resource.user:\n if author_name := (\n resource.user.get(\"user_fullname\") or resource.user.get(\"username\")\n ):\n validated_data[\"author_name\"] = author_name\n else:\n validated_data[\"author_id\"] = user.id\n validated_data[\"author_name\"] = User.objects.get(id=user.id).username\n\n return super().create(validated_data)\n\n def _get_extension_string(self, obj):\n \"\"\"Deposited file extension with the leading dot.\n\n Parameters\n ----------\n obj : Type[models.DepositedFile]\n The deposited file that we want to serialize\n\n Returns\n -------\n String\n The extension with the leading dot if the deposited file has an extension\n An empty string otherwise\n\n \"\"\"\n return \".\" + obj.extension if obj.extension else \"\"\n\n # def get_filename(self, obj):\n # \"\"\"Filename of the deposited file.\"\"\"\n # return self._get_filename(obj.title) if obj.uploaded_on else None\n\n def get_url(self, obj):\n \"\"\"Url of the DepositedFile.\n\n Parameters\n ----------\n obj : Type[models.DepositedFile]\n The deposited file that we want to serialize\n\n Returns\n -------\n String or None\n the url to fetch the deposited file on CloudFront\n None if the deposited file is still not uploaded to S3 with success\n\n \"\"\"\n if obj.uploaded_on is None:\n return None\n\n base = (\n f\"{settings.AWS_S3_URL_PROTOCOL}://{settings.CLOUDFRONT_DOMAIN}/\"\n f\"{obj.file_depository.pk}/depositedfile/{obj.pk}/\"\n f\"{time_utils.to_timestamp(obj.uploaded_on)}\"\n )\n\n url = (\n f\"{base:s}{self._get_extension_string(obj)}?\"\n f\"response-content-disposition={quote_plus('attachment; filename=' + obj.filename)}\"\n )\n\n if settings.CLOUDFRONT_SIGNED_URLS_ACTIVE:\n params = get_resource_cloudfront_url_params(\n \"depositedfile\", obj.file_depository_id\n )\n url = cloudfront_utils.build_signed_url(url, params)\n return url\n\n\nclass DepositedFileInitiateUploadSerializer(BaseInitiateUploadSerializer):\n \"\"\"An initiate-upload serializer dedicated to deposited file.\"\"\"\n\n @property\n def max_upload_file_size(self):\n \"\"\"return the deposited max file size define in the settings.\n\n The @property decorator is used to ease the use of @override_settings\n in tests. 
Otherwise the setting is not changed and we can't easily test\n an upload with a size higher than the one defined in the settings\n \"\"\"\n return settings.DEPOSITED_FILE_SOURCE_MAX_SIZE\n\n def validate(self, attrs):\n \"\"\"Validate if the mimetype is allowed or not.\"\"\"\n # mimetype is provided, we directly check it\n if attrs[\"mimetype\"] != \"\":\n attrs[\"extension\"] = mimetypes.guess_extension(attrs[\"mimetype\"])\n\n # mimetype is not provided, we have to guess it from the extension\n else:\n mimetypes.init()\n extension = splitext(attrs[\"filename\"])[1]\n mimetype = mimetypes.types_map.get(extension)\n # extension is added to the data in order to be used later\n attrs[\"extension\"] = extension\n attrs[\"mimetype\"] = mimetype\n\n return attrs\n\n\nclass FileDepositorySerializer(serializers.ModelSerializer):\n \"\"\"A serializer to display a FileDepository resource.\"\"\"\n\n class Meta: # noqa\n model = FileDepository\n fields = (\n \"id\",\n \"lti_id\",\n \"title\",\n \"description\",\n \"playlist\",\n )\n read_only_fields = (\n \"id\",\n \"lti_id\",\n \"playlist\",\n )\n\n playlist = PlaylistLiteSerializer(read_only=True)\n\n\nclass FileDepositorySelectLTISerializer(FileDepositorySerializer):\n \"\"\"A serializer to display a FileDepository resource for LTI select content request.\"\"\"\n\n class Meta: # noqa\n model = FileDepository\n fields = (\n \"id\",\n \"lti_id\",\n \"lti_url\",\n \"title\",\n \"description\",\n \"playlist\",\n )\n\n lti_url = serializers.SerializerMethodField()\n\n def get_lti_url(self, obj):\n \"\"\"LTI Url of the FileDepository.\n\n Parameters\n ----------\n obj : Type[models.Document]\n The document that we want to serialize\n\n Returns\n -------\n String\n the LTI url to be used by LTI consumers\n\n \"\"\"\n return self.context[\"request\"].build_absolute_uri(\n reverse(\"deposit:file_depository_lti_view\", args=[obj.id]),\n )\n", "path": "src/backend/marsha/deposit/serializers.py"}], "after_files": [{"content": "\"\"\"Structure of deposit related models API responses with Django Rest Framework serializers.\"\"\"\nimport mimetypes\nfrom os.path import splitext\nfrom urllib.parse import quote, quote_plus\n\nfrom django.conf import settings\nfrom django.urls import reverse\n\nfrom rest_framework import serializers\n\nfrom marsha.core.serializers import (\n BaseInitiateUploadSerializer,\n UploadableFileWithExtensionSerializerMixin,\n get_resource_cloudfront_url_params,\n)\nfrom marsha.core.serializers.playlist import PlaylistLiteSerializer\nfrom marsha.core.utils import cloudfront_utils, time_utils\n\nfrom ..core.models import User\nfrom .models import DepositedFile, FileDepository\n\n\nclass DepositedFileSerializer(\n UploadableFileWithExtensionSerializerMixin,\n serializers.ModelSerializer,\n BaseInitiateUploadSerializer,\n):\n \"\"\"A serializer to display a deposited file.\"\"\"\n\n class Meta: # noqa\n model = DepositedFile\n fields = (\n \"filename\",\n \"author_name\",\n \"id\",\n \"file_depository_id\",\n \"read\",\n \"url\",\n \"uploaded_on\",\n \"upload_state\",\n \"size\",\n )\n read_only_fields = (\n \"id\",\n \"file_depository_id\",\n \"url\",\n \"uploaded_on\",\n \"upload_state\",\n )\n\n # file_depository = FileDepositorySerializer(read_only=True)\n # filename = serializers.SerializerMethodField()\n url = serializers.SerializerMethodField()\n # Make sure file depository UUID is converted to a string during serialization\n file_depository_id = serializers.PrimaryKeyRelatedField(\n read_only=True, pk_field=serializers.CharField()\n 
)\n\n @property\n def max_upload_file_size(self):\n \"\"\"return the deposited max file size define in the settings.\n\n The @property decorator is used to ease the use of @override_settings\n in tests. Otherwise the setting is not changed and we can't easily test\n an upload with a size higher than the one defined in the settings\n \"\"\"\n return settings.DEPOSITED_FILE_SOURCE_MAX_SIZE\n\n def create(self, validated_data):\n \"\"\"Force the file depository field to the file depository of the JWT Token if any,\n and set the author name to the username of the JWT Token if any.\n\n Parameters\n ----------\n validated_data : dictionary\n Dictionary of the deserialized values of each field after validation.\n\n Returns\n -------\n dictionary\n The \"validated_data\" dictionary is returned after modification.\n\n \"\"\"\n resource = self.context[\"request\"].resource\n user = self.context[\"request\"].user\n file_depository_id = self.context[\"view\"].get_related_filedepository_id()\n\n if not validated_data.get(\"file_depository_id\"):\n validated_data[\"file_depository_id\"] = file_depository_id\n\n if resource:\n validated_data[\"author_id\"] = resource.user.get(\"id\")\n\n # try to get the most useful username from the token\n if resource.user:\n if author_name := (\n resource.user.get(\"user_fullname\") or resource.user.get(\"username\")\n ):\n validated_data[\"author_name\"] = author_name\n else:\n validated_data[\"author_id\"] = user.id\n validated_data[\"author_name\"] = User.objects.get(id=user.id).username\n\n return super().create(validated_data)\n\n def _get_extension_string(self, obj):\n \"\"\"Deposited file extension with the leading dot.\n\n Parameters\n ----------\n obj : Type[models.DepositedFile]\n The deposited file that we want to serialize\n\n Returns\n -------\n String\n The extension with the leading dot if the deposited file has an extension\n An empty string otherwise\n\n \"\"\"\n return \".\" + obj.extension if obj.extension else \"\"\n\n # def get_filename(self, obj):\n # \"\"\"Filename of the deposited file.\"\"\"\n # return self._get_filename(obj.title) if obj.uploaded_on else None\n\n def get_url(self, obj):\n \"\"\"Url of the DepositedFile.\n\n Parameters\n ----------\n obj : Type[models.DepositedFile]\n The deposited file that we want to serialize\n\n Returns\n -------\n String or None\n the url to fetch the deposited file on CloudFront\n None if the deposited file is still not uploaded to S3 with success\n\n \"\"\"\n if obj.uploaded_on is None:\n return None\n\n base = (\n f\"{settings.AWS_S3_URL_PROTOCOL}://{settings.CLOUDFRONT_DOMAIN}/\"\n f\"{obj.file_depository.pk}/depositedfile/{obj.pk}/\"\n f\"{time_utils.to_timestamp(obj.uploaded_on)}\"\n )\n\n response_content_disposition = quote_plus(\n \"attachment; filename=\" + quote(obj.filename)\n )\n url = (\n f\"{base:s}{self._get_extension_string(obj)}?\"\n f\"response-content-disposition={response_content_disposition}\"\n )\n\n if settings.CLOUDFRONT_SIGNED_URLS_ACTIVE:\n params = get_resource_cloudfront_url_params(\n \"depositedfile\", obj.file_depository_id\n )\n url = cloudfront_utils.build_signed_url(url, params)\n return url\n\n\nclass DepositedFileInitiateUploadSerializer(BaseInitiateUploadSerializer):\n \"\"\"An initiate-upload serializer dedicated to deposited file.\"\"\"\n\n @property\n def max_upload_file_size(self):\n \"\"\"return the deposited max file size define in the settings.\n\n The @property decorator is used to ease the use of @override_settings\n in tests. 
Otherwise the setting is not changed and we can't easily test\n an upload with a size higher than the one defined in the settings\n \"\"\"\n return settings.DEPOSITED_FILE_SOURCE_MAX_SIZE\n\n def validate(self, attrs):\n \"\"\"Validate if the mimetype is allowed or not.\"\"\"\n # mimetype is provided, we directly check it\n if attrs[\"mimetype\"] != \"\":\n attrs[\"extension\"] = mimetypes.guess_extension(attrs[\"mimetype\"])\n\n # mimetype is not provided, we have to guess it from the extension\n else:\n mimetypes.init()\n extension = splitext(attrs[\"filename\"])[1]\n mimetype = mimetypes.types_map.get(extension)\n # extension is added to the data in order to be used later\n attrs[\"extension\"] = extension\n attrs[\"mimetype\"] = mimetype\n\n return attrs\n\n\nclass FileDepositorySerializer(serializers.ModelSerializer):\n \"\"\"A serializer to display a FileDepository resource.\"\"\"\n\n class Meta: # noqa\n model = FileDepository\n fields = (\n \"id\",\n \"lti_id\",\n \"title\",\n \"description\",\n \"playlist\",\n )\n read_only_fields = (\n \"id\",\n \"lti_id\",\n \"playlist\",\n )\n\n playlist = PlaylistLiteSerializer(read_only=True)\n\n\nclass FileDepositorySelectLTISerializer(FileDepositorySerializer):\n \"\"\"A serializer to display a FileDepository resource for LTI select content request.\"\"\"\n\n class Meta: # noqa\n model = FileDepository\n fields = (\n \"id\",\n \"lti_id\",\n \"lti_url\",\n \"title\",\n \"description\",\n \"playlist\",\n )\n\n lti_url = serializers.SerializerMethodField()\n\n def get_lti_url(self, obj):\n \"\"\"LTI Url of the FileDepository.\n\n Parameters\n ----------\n obj : Type[models.Document]\n The document that we want to serialize\n\n Returns\n -------\n String\n the LTI url to be used by LTI consumers\n\n \"\"\"\n return self.context[\"request\"].build_absolute_uri(\n reverse(\"deposit:file_depository_lti_view\", args=[obj.id]),\n )\n", "path": "src/backend/marsha/deposit/serializers.py"}]}
2972
243
gh_patches_debug_21508
rasdani/github-patches
git_diff
cloud-custodian__cloud-custodian-2804
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- batch-compute resource security-group filter support Add a security-group filter to the batch-compute resource using the securityGroupIds attribute --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `c7n/resources/batch.py` Content: ``` 1 # Copyright 2017-2018 Capital One Services, LLC 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 from __future__ import absolute_import, division, print_function, unicode_literals 15 16 from c7n.manager import resources 17 from c7n.query import QueryResourceManager 18 from c7n.actions import BaseAction 19 from c7n.utils import local_session, type_schema 20 21 22 @resources.register('batch-compute') 23 class ComputeEnvironment(QueryResourceManager): 24 25 class resource_type(object): 26 service = 'batch' 27 filter_name = 'computeEnvironments' 28 filter_type = 'list' 29 dimension = None 30 id = name = "computeEnvironmentName" 31 enum_spec = ( 32 'describe_compute_environments', 'computeEnvironments', None) 33 34 35 @resources.register('batch-definition') 36 class JobDefinition(QueryResourceManager): 37 38 class resource_type(object): 39 service = 'batch' 40 filter_name = 'jobDefinitions' 41 filter_type = 'list' 42 dimension = None 43 id = name = "jobDefinitionName" 44 enum_spec = ( 45 'describe_job_definitions', 'jobDefinitions', None) 46 47 48 class StateTransitionFilter(object): 49 """Filter resources by state. 50 51 Try to simplify construction for policy authors by automatically 52 filtering elements (filters or actions) to the resource states 53 they are valid for. 54 """ 55 valid_origin_states = () 56 57 def filter_resource_state(self, resources, key, states=None): 58 states = states or self.valid_origin_states 59 if not states: 60 return resources 61 orig_length = len(resources) 62 results = [r for r in resources if r[key] in states] 63 if orig_length != len(results): 64 self.log.warn( 65 "%s implicitly filtered %d of %d resources with valid %s" % ( 66 self.__class__.__name__, 67 len(results), orig_length, key.lower())) 68 return results 69 70 71 @ComputeEnvironment.action_registry.register('update-environment') 72 class UpdateComputeEnvironment(BaseAction, StateTransitionFilter): 73 """Updates an AWS batch compute environment 74 75 :example: 76 77 .. 
code-block: yaml 78 79 policies: 80 - name: update-environments 81 resource: batch-compute 82 filters: 83 - computeResources.desiredvCpus: 0 84 - state: ENABLED 85 actions: 86 - type: update-environment 87 state: DISABLED 88 """ 89 schema = { 90 'type': 'object', 91 'additionalProperties': False, 92 'properties': { 93 'type': {'enum': ['update-environment']}, 94 'computeEnvironment': {'type': 'string'}, 95 'state': {'type': 'string', 'enum': ['ENABLED', 'DISABLED']}, 96 'computeResources': { 97 'type': 'object', 98 'additionalProperties': False, 99 'properties': { 100 'minvCpus': {'type': 'integer'}, 101 'maxvCpus': {'type': 'integer'}, 102 'desiredvCpus': {'type': 'integer'} 103 } 104 }, 105 'serviceRole': {'type': 'string'} 106 } 107 } 108 permissions = ('batch:UpdateComputeEnvironment',) 109 valid_origin_status = ('VALID', 'INVALID') 110 111 def process(self, resources): 112 resources = self.filter_resource_state( 113 resources, 'status', self.valid_origin_status) 114 client = local_session(self.manager.session_factory).client('batch') 115 params = dict(self.data) 116 params.pop('type') 117 for r in resources: 118 params['computeEnvironment'] = r['computeEnvironmentName'] 119 client.update_compute_environment(**params) 120 121 122 @ComputeEnvironment.action_registry.register('delete') 123 class DeleteComputeEnvironment(BaseAction, StateTransitionFilter): 124 """Delete an AWS batch compute environment 125 126 :example: 127 128 .. code-block: yaml 129 130 policies: 131 - name: delete-environments 132 resource: batch-compute 133 filters: 134 - computeResources.desiredvCpus: 0 135 action: 136 - type: delete 137 """ 138 schema = type_schema('delete') 139 permissions = ('batch:DeleteComputeEnvironment',) 140 valid_origin_states = ('DISABLED',) 141 valid_origin_status = ('VALID', 'INVALID') 142 143 def delete_environment(self, r): 144 client = local_session(self.manager.session_factory).client('batch') 145 client.delete_compute_environment( 146 computeEnvironment=r['computeEnvironmentName']) 147 148 def process(self, resources): 149 resources = self.filter_resource_state( 150 self.filter_resource_state( 151 resources, 'state', self.valid_origin_states), 152 'status', self.valid_origin_status) 153 with self.executor_factory(max_workers=2) as w: 154 list(w.map(self.delete_environment, resources)) 155 156 157 @JobDefinition.action_registry.register('deregister') 158 class DefinitionDeregister(BaseAction, StateTransitionFilter): 159 """Deregisters a batch definition 160 161 :example: 162 163 .. code-block: yaml 164 165 policies: 166 - name: deregister-definition 167 resource: batch-definition 168 filters: 169 - containerProperties.image: amazonlinux 170 actions: 171 - type: deregister 172 """ 173 schema = type_schema('deregister') 174 permissions = ('batch:DeregisterJobDefinition',) 175 valid_origin_states = ('ACTIVE',) 176 177 def deregister_definition(self, r): 178 self.client.deregister_job_definition( 179 jobDefinition='%s:%s' % (r['jobDefinitionName'], 180 r['revision'])) 181 182 def process(self, resources): 183 resources = self.filter_resource_state( 184 resources, 'status', self.valid_origin_states) 185 self.client = local_session( 186 self.manager.session_factory).client('batch') 187 with self.executor_factory(max_workers=2) as w: 188 list(w.map(self.deregister_definition, resources)) 189 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/c7n/resources/batch.py b/c7n/resources/batch.py --- a/c7n/resources/batch.py +++ b/c7n/resources/batch.py @@ -13,9 +13,11 @@ # limitations under the License. from __future__ import absolute_import, division, print_function, unicode_literals +from c7n.actions import BaseAction +from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter from c7n.manager import resources from c7n.query import QueryResourceManager -from c7n.actions import BaseAction + from c7n.utils import local_session, type_schema @@ -32,6 +34,18 @@ 'describe_compute_environments', 'computeEnvironments', None) [email protected]_registry.register('security-group') +class ComputeSGFilter(SecurityGroupFilter): + + RelatedIdsExpression = "computeResources.securityGroupIds" + + [email protected]_registry.register('subnet') +class ComputeSubnetFilter(SubnetFilter): + + RelatedIdsExpression = "computeResources.subnets" + + @resources.register('batch-definition') class JobDefinition(QueryResourceManager):
{"golden_diff": "diff --git a/c7n/resources/batch.py b/c7n/resources/batch.py\n--- a/c7n/resources/batch.py\n+++ b/c7n/resources/batch.py\n@@ -13,9 +13,11 @@\n # limitations under the License.\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n+from c7n.actions import BaseAction\n+from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter\n from c7n.manager import resources\n from c7n.query import QueryResourceManager\n-from c7n.actions import BaseAction\n+\n from c7n.utils import local_session, type_schema\n \n \n@@ -32,6 +34,18 @@\n 'describe_compute_environments', 'computeEnvironments', None)\n \n \[email protected]_registry.register('security-group')\n+class ComputeSGFilter(SecurityGroupFilter):\n+\n+ RelatedIdsExpression = \"computeResources.securityGroupIds\"\n+\n+\[email protected]_registry.register('subnet')\n+class ComputeSubnetFilter(SubnetFilter):\n+\n+ RelatedIdsExpression = \"computeResources.subnets\"\n+\n+\n @resources.register('batch-definition')\n class JobDefinition(QueryResourceManager):\n", "issue": "batch-compute resource security-group filter support\nAdd a security-group filter to the batch-compute resource using the securityGroupIds attribute\r\n \n", "before_files": [{"content": "# Copyright 2017-2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\nfrom c7n.actions import BaseAction\nfrom c7n.utils import local_session, type_schema\n\n\[email protected]('batch-compute')\nclass ComputeEnvironment(QueryResourceManager):\n\n class resource_type(object):\n service = 'batch'\n filter_name = 'computeEnvironments'\n filter_type = 'list'\n dimension = None\n id = name = \"computeEnvironmentName\"\n enum_spec = (\n 'describe_compute_environments', 'computeEnvironments', None)\n\n\[email protected]('batch-definition')\nclass JobDefinition(QueryResourceManager):\n\n class resource_type(object):\n service = 'batch'\n filter_name = 'jobDefinitions'\n filter_type = 'list'\n dimension = None\n id = name = \"jobDefinitionName\"\n enum_spec = (\n 'describe_job_definitions', 'jobDefinitions', None)\n\n\nclass StateTransitionFilter(object):\n \"\"\"Filter resources by state.\n\n Try to simplify construction for policy authors by automatically\n filtering elements (filters or actions) to the resource states\n they are valid for.\n \"\"\"\n valid_origin_states = ()\n\n def filter_resource_state(self, resources, key, states=None):\n states = states or self.valid_origin_states\n if not states:\n return resources\n orig_length = len(resources)\n results = [r for r in resources if r[key] in states]\n if orig_length != len(results):\n self.log.warn(\n \"%s implicitly filtered %d of %d resources with valid %s\" % (\n self.__class__.__name__,\n len(results), orig_length, key.lower()))\n return results\n\n\[email 
protected]_registry.register('update-environment')\nclass UpdateComputeEnvironment(BaseAction, StateTransitionFilter):\n \"\"\"Updates an AWS batch compute environment\n\n :example:\n\n .. code-block: yaml\n\n policies:\n - name: update-environments\n resource: batch-compute\n filters:\n - computeResources.desiredvCpus: 0\n - state: ENABLED\n actions:\n - type: update-environment\n state: DISABLED\n \"\"\"\n schema = {\n 'type': 'object',\n 'additionalProperties': False,\n 'properties': {\n 'type': {'enum': ['update-environment']},\n 'computeEnvironment': {'type': 'string'},\n 'state': {'type': 'string', 'enum': ['ENABLED', 'DISABLED']},\n 'computeResources': {\n 'type': 'object',\n 'additionalProperties': False,\n 'properties': {\n 'minvCpus': {'type': 'integer'},\n 'maxvCpus': {'type': 'integer'},\n 'desiredvCpus': {'type': 'integer'}\n }\n },\n 'serviceRole': {'type': 'string'}\n }\n }\n permissions = ('batch:UpdateComputeEnvironment',)\n valid_origin_status = ('VALID', 'INVALID')\n\n def process(self, resources):\n resources = self.filter_resource_state(\n resources, 'status', self.valid_origin_status)\n client = local_session(self.manager.session_factory).client('batch')\n params = dict(self.data)\n params.pop('type')\n for r in resources:\n params['computeEnvironment'] = r['computeEnvironmentName']\n client.update_compute_environment(**params)\n\n\[email protected]_registry.register('delete')\nclass DeleteComputeEnvironment(BaseAction, StateTransitionFilter):\n \"\"\"Delete an AWS batch compute environment\n\n :example:\n\n .. code-block: yaml\n\n policies:\n - name: delete-environments\n resource: batch-compute\n filters:\n - computeResources.desiredvCpus: 0\n action:\n - type: delete\n \"\"\"\n schema = type_schema('delete')\n permissions = ('batch:DeleteComputeEnvironment',)\n valid_origin_states = ('DISABLED',)\n valid_origin_status = ('VALID', 'INVALID')\n\n def delete_environment(self, r):\n client = local_session(self.manager.session_factory).client('batch')\n client.delete_compute_environment(\n computeEnvironment=r['computeEnvironmentName'])\n\n def process(self, resources):\n resources = self.filter_resource_state(\n self.filter_resource_state(\n resources, 'state', self.valid_origin_states),\n 'status', self.valid_origin_status)\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.delete_environment, resources))\n\n\[email protected]_registry.register('deregister')\nclass DefinitionDeregister(BaseAction, StateTransitionFilter):\n \"\"\"Deregisters a batch definition\n\n :example:\n\n .. 
code-block: yaml\n\n policies:\n - name: deregister-definition\n resource: batch-definition\n filters:\n - containerProperties.image: amazonlinux\n actions:\n - type: deregister\n \"\"\"\n schema = type_schema('deregister')\n permissions = ('batch:DeregisterJobDefinition',)\n valid_origin_states = ('ACTIVE',)\n\n def deregister_definition(self, r):\n self.client.deregister_job_definition(\n jobDefinition='%s:%s' % (r['jobDefinitionName'],\n r['revision']))\n\n def process(self, resources):\n resources = self.filter_resource_state(\n resources, 'status', self.valid_origin_states)\n self.client = local_session(\n self.manager.session_factory).client('batch')\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.deregister_definition, resources))\n", "path": "c7n/resources/batch.py"}], "after_files": [{"content": "# Copyright 2017-2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.actions import BaseAction\nfrom c7n.filters.vpc import SecurityGroupFilter, SubnetFilter\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\n\nfrom c7n.utils import local_session, type_schema\n\n\[email protected]('batch-compute')\nclass ComputeEnvironment(QueryResourceManager):\n\n class resource_type(object):\n service = 'batch'\n filter_name = 'computeEnvironments'\n filter_type = 'list'\n dimension = None\n id = name = \"computeEnvironmentName\"\n enum_spec = (\n 'describe_compute_environments', 'computeEnvironments', None)\n\n\[email protected]_registry.register('security-group')\nclass ComputeSGFilter(SecurityGroupFilter):\n\n RelatedIdsExpression = \"computeResources.securityGroupIds\"\n\n\[email protected]_registry.register('subnet')\nclass ComputeSubnetFilter(SubnetFilter):\n\n RelatedIdsExpression = \"computeResources.subnets\"\n\n\[email protected]('batch-definition')\nclass JobDefinition(QueryResourceManager):\n\n class resource_type(object):\n service = 'batch'\n filter_name = 'jobDefinitions'\n filter_type = 'list'\n dimension = None\n id = name = \"jobDefinitionName\"\n enum_spec = (\n 'describe_job_definitions', 'jobDefinitions', None)\n\n\nclass StateTransitionFilter(object):\n \"\"\"Filter resources by state.\n\n Try to simplify construction for policy authors by automatically\n filtering elements (filters or actions) to the resource states\n they are valid for.\n \"\"\"\n valid_origin_states = ()\n\n def filter_resource_state(self, resources, key, states=None):\n states = states or self.valid_origin_states\n if not states:\n return resources\n orig_length = len(resources)\n results = [r for r in resources if r[key] in states]\n if orig_length != len(results):\n self.log.warn(\n \"%s implicitly filtered %d of %d resources with valid %s\" % (\n self.__class__.__name__,\n len(results), orig_length, key.lower()))\n return results\n\n\[email protected]_registry.register('update-environment')\nclass UpdateComputeEnvironment(BaseAction, 
StateTransitionFilter):\n \"\"\"Updates an AWS batch compute environment\n\n :example:\n\n .. code-block: yaml\n\n policies:\n - name: update-environments\n resource: batch-compute\n filters:\n - computeResources.desiredvCpus: 0\n - state: ENABLED\n actions:\n - type: update-environment\n state: DISABLED\n \"\"\"\n schema = {\n 'type': 'object',\n 'additionalProperties': False,\n 'properties': {\n 'type': {'enum': ['update-environment']},\n 'computeEnvironment': {'type': 'string'},\n 'state': {'type': 'string', 'enum': ['ENABLED', 'DISABLED']},\n 'computeResources': {\n 'type': 'object',\n 'additionalProperties': False,\n 'properties': {\n 'minvCpus': {'type': 'integer'},\n 'maxvCpus': {'type': 'integer'},\n 'desiredvCpus': {'type': 'integer'}\n }\n },\n 'serviceRole': {'type': 'string'}\n }\n }\n permissions = ('batch:UpdateComputeEnvironment',)\n valid_origin_status = ('VALID', 'INVALID')\n\n def process(self, resources):\n resources = self.filter_resource_state(\n resources, 'status', self.valid_origin_status)\n client = local_session(self.manager.session_factory).client('batch')\n params = dict(self.data)\n params.pop('type')\n for r in resources:\n params['computeEnvironment'] = r['computeEnvironmentName']\n client.update_compute_environment(**params)\n\n\[email protected]_registry.register('delete')\nclass DeleteComputeEnvironment(BaseAction, StateTransitionFilter):\n \"\"\"Delete an AWS batch compute environment\n\n :example:\n\n .. code-block: yaml\n\n policies:\n - name: delete-environments\n resource: batch-compute\n filters:\n - computeResources.desiredvCpus: 0\n action:\n - type: delete\n \"\"\"\n schema = type_schema('delete')\n permissions = ('batch:DeleteComputeEnvironment',)\n valid_origin_states = ('DISABLED',)\n valid_origin_status = ('VALID', 'INVALID')\n\n def delete_environment(self, r):\n client = local_session(self.manager.session_factory).client('batch')\n client.delete_compute_environment(\n computeEnvironment=r['computeEnvironmentName'])\n\n def process(self, resources):\n resources = self.filter_resource_state(\n self.filter_resource_state(\n resources, 'state', self.valid_origin_states),\n 'status', self.valid_origin_status)\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.delete_environment, resources))\n\n\[email protected]_registry.register('deregister')\nclass DefinitionDeregister(BaseAction, StateTransitionFilter):\n \"\"\"Deregisters a batch definition\n\n :example:\n\n .. code-block: yaml\n\n policies:\n - name: deregister-definition\n resource: batch-definition\n filters:\n - containerProperties.image: amazonlinux\n actions:\n - type: deregister\n \"\"\"\n schema = type_schema('deregister')\n permissions = ('batch:DeregisterJobDefinition',)\n valid_origin_states = ('ACTIVE',)\n\n def deregister_definition(self, r):\n self.client.deregister_job_definition(\n jobDefinition='%s:%s' % (r['jobDefinitionName'],\n r['revision']))\n\n def process(self, resources):\n resources = self.filter_resource_state(\n resources, 'status', self.valid_origin_states)\n self.client = local_session(\n self.manager.session_factory).client('batch')\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.deregister_definition, resources))\n", "path": "c7n/resources/batch.py"}]}
2075
255
gh_patches_debug_17384
rasdani/github-patches
git_diff
azavea__raster-vision-988
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Verbose flag is not passed to remote jobs The `-v` flag works with the inprocess runner, but doesn't seem to get forwarded to remote jobs when using the batch runner. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py` Content: ``` 1 import copy 2 import logging 3 import os 4 import uuid 5 from inspect import signature 6 from typing import List, Optional 7 8 from rastervision.pipeline import rv_config 9 from rastervision.pipeline.runner import Runner 10 11 log = logging.getLogger(__name__) 12 AWS_BATCH = 'batch' 13 14 15 def submit_job(cmd: List[str], 16 job_name: str, 17 debug: bool = False, 18 profile: str = False, 19 attempts: int = 5, 20 parent_job_ids: List[str] = None, 21 num_array_jobs: Optional[int] = None, 22 use_gpu: bool = False, 23 job_queue: Optional[str] = None, 24 job_def: Optional[str] = None) -> str: 25 """Submit a job to run on AWS Batch. 26 27 Args: 28 cmd: a command to run in the Docker container for the remote job 29 debug: if True, run the command using a ptvsd wrapper which sets up a remote 30 VS Code Python debugger server 31 profile: if True, run the command using kernprof, a line profiler 32 attempts: the number of times to try running the command which is useful 33 in case of failure. 34 parent_job_ids: optional list of parent Batch job ids. The job created by this 35 will only run after the parent jobs complete successfully. 36 num_array_jobs: if set, make this a Batch array job with size equal to 37 num_array_jobs 38 use_gpu: if True, run the job in a GPU-enabled queue 39 job_queue: if set, use this job queue 40 job_def: if set, use this job definition 41 """ 42 batch_config = rv_config.get_namespace_config(AWS_BATCH) 43 44 if job_queue is None: 45 if use_gpu: 46 job_queue = batch_config('gpu_job_queue') 47 else: 48 job_queue = batch_config('cpu_job_queue') 49 50 if job_def is None: 51 if use_gpu: 52 job_def = batch_config('gpu_job_def') 53 else: 54 job_def = batch_config('cpu_job_def') 55 56 import boto3 57 client = boto3.client('batch') 58 59 cmd_list = cmd.split(' ') 60 if debug: 61 cmd_list = [ 62 'python', '-m', 'ptvsd', '--host', '0.0.0.0', '--port', '6006', 63 '--wait', '-m' 64 ] + cmd_list 65 66 if profile: 67 cmd_list = ['kernprof', '-v', '-l'] + cmd_list 68 69 kwargs = { 70 'jobName': job_name, 71 'jobQueue': job_queue, 72 'jobDefinition': job_def, 73 'containerOverrides': { 74 'command': cmd_list 75 }, 76 'retryStrategy': { 77 'attempts': attempts 78 }, 79 } 80 if parent_job_ids: 81 kwargs['dependsOn'] = [{'jobId': id} for id in parent_job_ids] 82 if num_array_jobs: 83 kwargs['arrayProperties'] = {'size': num_array_jobs} 84 85 job_id = client.submit_job(**kwargs)['jobId'] 86 msg = 'submitted job with jobName={} and jobId={} w/ parent(s)={}'.format( 87 job_name, job_id, parent_job_ids) 88 log.info(msg) 89 log.info(cmd_list) 90 91 return job_id 92 93 94 class AWSBatchRunner(Runner): 95 """Runs pipelines remotely using AWS Batch. 
96 97 Requires Everett configuration of form: 98 99 ``` 100 [AWS_BATCH] 101 cpu_job_queue= 102 cpu_job_def= 103 gpu_job_queue= 104 gpu_job_def= 105 attempts= 106 ``` 107 """ 108 109 def run(self, 110 cfg_json_uri, 111 pipeline, 112 commands, 113 num_splits=1, 114 pipeline_run_name: str = 'raster-vision'): 115 parent_job_ids = [] 116 117 # pipeline-specific job queue 118 if hasattr(pipeline, 'job_queue'): 119 pipeline_job_queue = pipeline.job_queue 120 else: 121 pipeline_job_queue = None 122 123 # pipeline-specific job definition 124 if hasattr(pipeline, 'job_def'): 125 pipeline_job_def = pipeline.job_def 126 else: 127 pipeline_job_def = None 128 129 for command in commands: 130 131 # detect external command 132 if hasattr(pipeline, command): 133 fn = getattr(pipeline, command) 134 params = signature(fn).parameters 135 external = hasattr(fn, 'external') and len(params) in {0, 1} 136 array_job_capable = hasattr(fn, 'array_job_capable') \ 137 and fn.array_job_capable 138 else: 139 external = False 140 array_job_capable = False 141 142 # command-specific job queue, job definition 143 job_def = pipeline_job_def 144 job_queue = pipeline_job_queue 145 if hasattr(pipeline, command): 146 fn = getattr(pipeline, command) 147 if hasattr(fn, 'job_def'): 148 job_def = fn.job_def 149 if hasattr(fn, 'job_queue'): 150 job_queue = fn.job_queue 151 152 num_array_jobs = None 153 use_gpu = command in pipeline.gpu_commands 154 155 job_name = f'{pipeline_run_name}-{command}-{uuid.uuid4()}' 156 157 if not external: 158 cmd = [ 159 'python', '-m', 'rastervision.pipeline.cli run_command', 160 cfg_json_uri, command, '--runner', AWS_BATCH 161 ] 162 if command in pipeline.split_commands and num_splits > 1: 163 num_array_jobs = num_splits 164 cmd += ['--num-splits', str(num_splits)] 165 job_id = submit_job( 166 cmd=' '.join(cmd), 167 job_name=job_name, 168 parent_job_ids=parent_job_ids, 169 num_array_jobs=num_array_jobs, 170 use_gpu=use_gpu, 171 job_queue=job_queue, 172 job_def=job_def) 173 parent_job_ids = [job_id] 174 else: 175 if command in pipeline.split_commands and num_splits > 1: 176 if len(params) == 1 and array_job_capable: 177 cmd = fn(-num_splits) 178 num_array_jobs = num_splits 179 job_id = submit_job( 180 cmd=' '.join(cmd), 181 job_name=job_name, 182 parent_job_ids=parent_job_ids, 183 num_array_jobs=num_array_jobs, 184 use_gpu=use_gpu, 185 job_queue=job_queue, 186 job_def=job_def) 187 parent_job_ids = [job_id] 188 elif len(params) == 1 and not array_job_capable: 189 num_array_jobs = None 190 new_parent_job_ids = [] 191 for cmd in fn(num_splits): 192 job_id = submit_job( 193 cmd=' '.join(cmd), 194 job_name=job_name, 195 parent_job_ids=parent_job_ids, 196 num_array_jobs=num_array_jobs, 197 use_gpu=use_gpu, 198 job_queue=job_queue, 199 job_def=job_def) 200 new_parent_job_ids.append(job_id) 201 parent_job_ids = copy.copy(new_parent_job_ids) 202 elif len(params) == 0: 203 cmd = fn() 204 num_array_jobs = None 205 job_id = submit_job( 206 cmd=' '.join(cmd), 207 job_name=job_name, 208 parent_job_ids=parent_job_ids, 209 num_array_jobs=num_array_jobs, 210 use_gpu=use_gpu, 211 job_queue=job_queue, 212 job_def=job_def) 213 parent_job_ids = [job_id] 214 else: 215 if len(params) == 0: 216 cmd = fn() 217 elif len(params) == 1: 218 cmd = fn(1)[0] 219 num_array_jobs = 1 220 job_id = submit_job( 221 cmd=' '.join(cmd), 222 job_name=job_name, 223 parent_job_ids=parent_job_ids, 224 num_array_jobs=num_array_jobs, 225 use_gpu=use_gpu, 226 job_queue=job_queue, 227 job_def=job_def) 228 parent_job_ids = [job_id] 229 230 job_queue = 
None 231 job_def = None 232 233 def get_split_ind(self): 234 return int(os.environ.get('AWS_BATCH_JOB_ARRAY_INDEX', 0)) 235 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py b/rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py --- a/rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py +++ b/rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py @@ -155,10 +155,13 @@ job_name = f'{pipeline_run_name}-{command}-{uuid.uuid4()}' if not external: - cmd = [ - 'python', '-m', 'rastervision.pipeline.cli run_command', - cfg_json_uri, command, '--runner', AWS_BATCH - ] + cmd = ['python', '-m', 'rastervision.pipeline.cli'] + if rv_config.get_verbosity() > 1: + cmd.append('-' + 'v' * (rv_config.get_verbosity() - 1)) + cmd.extend([ + 'run_command', cfg_json_uri, command, '--runner', AWS_BATCH + ]) + if command in pipeline.split_commands and num_splits > 1: num_array_jobs = num_splits cmd += ['--num-splits', str(num_splits)]
{"golden_diff": "diff --git a/rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py b/rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py\n--- a/rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py\n+++ b/rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py\n@@ -155,10 +155,13 @@\n job_name = f'{pipeline_run_name}-{command}-{uuid.uuid4()}'\n \n if not external:\n- cmd = [\n- 'python', '-m', 'rastervision.pipeline.cli run_command',\n- cfg_json_uri, command, '--runner', AWS_BATCH\n- ]\n+ cmd = ['python', '-m', 'rastervision.pipeline.cli']\n+ if rv_config.get_verbosity() > 1:\n+ cmd.append('-' + 'v' * (rv_config.get_verbosity() - 1))\n+ cmd.extend([\n+ 'run_command', cfg_json_uri, command, '--runner', AWS_BATCH\n+ ])\n+\n if command in pipeline.split_commands and num_splits > 1:\n num_array_jobs = num_splits\n cmd += ['--num-splits', str(num_splits)]\n", "issue": "Verbose flag is not passed to remote jobs\nThe `-v` flag works with the inprocess runner, but doesn't seem to get forwarded to remote jobs when using the batch runner.\n", "before_files": [{"content": "import copy\nimport logging\nimport os\nimport uuid\nfrom inspect import signature\nfrom typing import List, Optional\n\nfrom rastervision.pipeline import rv_config\nfrom rastervision.pipeline.runner import Runner\n\nlog = logging.getLogger(__name__)\nAWS_BATCH = 'batch'\n\n\ndef submit_job(cmd: List[str],\n job_name: str,\n debug: bool = False,\n profile: str = False,\n attempts: int = 5,\n parent_job_ids: List[str] = None,\n num_array_jobs: Optional[int] = None,\n use_gpu: bool = False,\n job_queue: Optional[str] = None,\n job_def: Optional[str] = None) -> str:\n \"\"\"Submit a job to run on AWS Batch.\n\n Args:\n cmd: a command to run in the Docker container for the remote job\n debug: if True, run the command using a ptvsd wrapper which sets up a remote\n VS Code Python debugger server\n profile: if True, run the command using kernprof, a line profiler\n attempts: the number of times to try running the command which is useful\n in case of failure.\n parent_job_ids: optional list of parent Batch job ids. 
The job created by this\n will only run after the parent jobs complete successfully.\n num_array_jobs: if set, make this a Batch array job with size equal to\n num_array_jobs\n use_gpu: if True, run the job in a GPU-enabled queue\n job_queue: if set, use this job queue\n job_def: if set, use this job definition\n \"\"\"\n batch_config = rv_config.get_namespace_config(AWS_BATCH)\n\n if job_queue is None:\n if use_gpu:\n job_queue = batch_config('gpu_job_queue')\n else:\n job_queue = batch_config('cpu_job_queue')\n\n if job_def is None:\n if use_gpu:\n job_def = batch_config('gpu_job_def')\n else:\n job_def = batch_config('cpu_job_def')\n\n import boto3\n client = boto3.client('batch')\n\n cmd_list = cmd.split(' ')\n if debug:\n cmd_list = [\n 'python', '-m', 'ptvsd', '--host', '0.0.0.0', '--port', '6006',\n '--wait', '-m'\n ] + cmd_list\n\n if profile:\n cmd_list = ['kernprof', '-v', '-l'] + cmd_list\n\n kwargs = {\n 'jobName': job_name,\n 'jobQueue': job_queue,\n 'jobDefinition': job_def,\n 'containerOverrides': {\n 'command': cmd_list\n },\n 'retryStrategy': {\n 'attempts': attempts\n },\n }\n if parent_job_ids:\n kwargs['dependsOn'] = [{'jobId': id} for id in parent_job_ids]\n if num_array_jobs:\n kwargs['arrayProperties'] = {'size': num_array_jobs}\n\n job_id = client.submit_job(**kwargs)['jobId']\n msg = 'submitted job with jobName={} and jobId={} w/ parent(s)={}'.format(\n job_name, job_id, parent_job_ids)\n log.info(msg)\n log.info(cmd_list)\n\n return job_id\n\n\nclass AWSBatchRunner(Runner):\n \"\"\"Runs pipelines remotely using AWS Batch.\n\n Requires Everett configuration of form:\n\n ```\n [AWS_BATCH]\n cpu_job_queue=\n cpu_job_def=\n gpu_job_queue=\n gpu_job_def=\n attempts=\n ```\n \"\"\"\n\n def run(self,\n cfg_json_uri,\n pipeline,\n commands,\n num_splits=1,\n pipeline_run_name: str = 'raster-vision'):\n parent_job_ids = []\n\n # pipeline-specific job queue\n if hasattr(pipeline, 'job_queue'):\n pipeline_job_queue = pipeline.job_queue\n else:\n pipeline_job_queue = None\n\n # pipeline-specific job definition\n if hasattr(pipeline, 'job_def'):\n pipeline_job_def = pipeline.job_def\n else:\n pipeline_job_def = None\n\n for command in commands:\n\n # detect external command\n if hasattr(pipeline, command):\n fn = getattr(pipeline, command)\n params = signature(fn).parameters\n external = hasattr(fn, 'external') and len(params) in {0, 1}\n array_job_capable = hasattr(fn, 'array_job_capable') \\\n and fn.array_job_capable\n else:\n external = False\n array_job_capable = False\n\n # command-specific job queue, job definition\n job_def = pipeline_job_def\n job_queue = pipeline_job_queue\n if hasattr(pipeline, command):\n fn = getattr(pipeline, command)\n if hasattr(fn, 'job_def'):\n job_def = fn.job_def\n if hasattr(fn, 'job_queue'):\n job_queue = fn.job_queue\n\n num_array_jobs = None\n use_gpu = command in pipeline.gpu_commands\n\n job_name = f'{pipeline_run_name}-{command}-{uuid.uuid4()}'\n\n if not external:\n cmd = [\n 'python', '-m', 'rastervision.pipeline.cli run_command',\n cfg_json_uri, command, '--runner', AWS_BATCH\n ]\n if command in pipeline.split_commands and num_splits > 1:\n num_array_jobs = num_splits\n cmd += ['--num-splits', str(num_splits)]\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n parent_job_ids = [job_id]\n else:\n if command in pipeline.split_commands and num_splits > 1:\n if len(params) == 1 and 
array_job_capable:\n cmd = fn(-num_splits)\n num_array_jobs = num_splits\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n parent_job_ids = [job_id]\n elif len(params) == 1 and not array_job_capable:\n num_array_jobs = None\n new_parent_job_ids = []\n for cmd in fn(num_splits):\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n new_parent_job_ids.append(job_id)\n parent_job_ids = copy.copy(new_parent_job_ids)\n elif len(params) == 0:\n cmd = fn()\n num_array_jobs = None\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n parent_job_ids = [job_id]\n else:\n if len(params) == 0:\n cmd = fn()\n elif len(params) == 1:\n cmd = fn(1)[0]\n num_array_jobs = 1\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n parent_job_ids = [job_id]\n\n job_queue = None\n job_def = None\n\n def get_split_ind(self):\n return int(os.environ.get('AWS_BATCH_JOB_ARRAY_INDEX', 0))\n", "path": "rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py"}], "after_files": [{"content": "import copy\nimport logging\nimport os\nimport uuid\nfrom inspect import signature\nfrom typing import List, Optional\n\nfrom rastervision.pipeline import rv_config\nfrom rastervision.pipeline.runner import Runner\n\nlog = logging.getLogger(__name__)\nAWS_BATCH = 'batch'\n\n\ndef submit_job(cmd: List[str],\n job_name: str,\n debug: bool = False,\n profile: str = False,\n attempts: int = 5,\n parent_job_ids: List[str] = None,\n num_array_jobs: Optional[int] = None,\n use_gpu: bool = False,\n job_queue: Optional[str] = None,\n job_def: Optional[str] = None) -> str:\n \"\"\"Submit a job to run on AWS Batch.\n\n Args:\n cmd: a command to run in the Docker container for the remote job\n debug: if True, run the command using a ptvsd wrapper which sets up a remote\n VS Code Python debugger server\n profile: if True, run the command using kernprof, a line profiler\n attempts: the number of times to try running the command which is useful\n in case of failure.\n parent_job_ids: optional list of parent Batch job ids. 
The job created by this\n will only run after the parent jobs complete successfully.\n num_array_jobs: if set, make this a Batch array job with size equal to\n num_array_jobs\n use_gpu: if True, run the job in a GPU-enabled queue\n job_queue: if set, use this job queue\n job_def: if set, use this job definition\n \"\"\"\n batch_config = rv_config.get_namespace_config(AWS_BATCH)\n\n if job_queue is None:\n if use_gpu:\n job_queue = batch_config('gpu_job_queue')\n else:\n job_queue = batch_config('cpu_job_queue')\n\n if job_def is None:\n if use_gpu:\n job_def = batch_config('gpu_job_def')\n else:\n job_def = batch_config('cpu_job_def')\n\n import boto3\n client = boto3.client('batch')\n\n cmd_list = cmd.split(' ')\n if debug:\n cmd_list = [\n 'python', '-m', 'ptvsd', '--host', '0.0.0.0', '--port', '6006',\n '--wait', '-m'\n ] + cmd_list\n\n if profile:\n cmd_list = ['kernprof', '-v', '-l'] + cmd_list\n\n kwargs = {\n 'jobName': job_name,\n 'jobQueue': job_queue,\n 'jobDefinition': job_def,\n 'containerOverrides': {\n 'command': cmd_list\n },\n 'retryStrategy': {\n 'attempts': attempts\n },\n }\n if parent_job_ids:\n kwargs['dependsOn'] = [{'jobId': id} for id in parent_job_ids]\n if num_array_jobs:\n kwargs['arrayProperties'] = {'size': num_array_jobs}\n\n job_id = client.submit_job(**kwargs)['jobId']\n msg = 'submitted job with jobName={} and jobId={} w/ parent(s)={}'.format(\n job_name, job_id, parent_job_ids)\n log.info(msg)\n log.info(cmd_list)\n\n return job_id\n\n\nclass AWSBatchRunner(Runner):\n \"\"\"Runs pipelines remotely using AWS Batch.\n\n Requires Everett configuration of form:\n\n ```\n [AWS_BATCH]\n cpu_job_queue=\n cpu_job_def=\n gpu_job_queue=\n gpu_job_def=\n attempts=\n ```\n \"\"\"\n\n def run(self,\n cfg_json_uri,\n pipeline,\n commands,\n num_splits=1,\n pipeline_run_name: str = 'raster-vision'):\n parent_job_ids = []\n\n # pipeline-specific job queue\n if hasattr(pipeline, 'job_queue'):\n pipeline_job_queue = pipeline.job_queue\n else:\n pipeline_job_queue = None\n\n # pipeline-specific job definition\n if hasattr(pipeline, 'job_def'):\n pipeline_job_def = pipeline.job_def\n else:\n pipeline_job_def = None\n\n for command in commands:\n\n # detect external command\n if hasattr(pipeline, command):\n fn = getattr(pipeline, command)\n params = signature(fn).parameters\n external = hasattr(fn, 'external') and len(params) in {0, 1}\n array_job_capable = hasattr(fn, 'array_job_capable') \\\n and fn.array_job_capable\n else:\n external = False\n array_job_capable = False\n\n # command-specific job queue, job definition\n job_def = pipeline_job_def\n job_queue = pipeline_job_queue\n if hasattr(pipeline, command):\n fn = getattr(pipeline, command)\n if hasattr(fn, 'job_def'):\n job_def = fn.job_def\n if hasattr(fn, 'job_queue'):\n job_queue = fn.job_queue\n\n num_array_jobs = None\n use_gpu = command in pipeline.gpu_commands\n\n job_name = f'{pipeline_run_name}-{command}-{uuid.uuid4()}'\n\n if not external:\n cmd = ['python', '-m', 'rastervision.pipeline.cli']\n if rv_config.get_verbosity() > 1:\n cmd.append('-' + 'v' * (rv_config.get_verbosity() - 1))\n cmd.extend([\n 'run_command', cfg_json_uri, command, '--runner', AWS_BATCH\n ])\n\n if command in pipeline.split_commands and num_splits > 1:\n num_array_jobs = num_splits\n cmd += ['--num-splits', str(num_splits)]\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n parent_job_ids = 
[job_id]\n else:\n if command in pipeline.split_commands and num_splits > 1:\n if len(params) == 1 and array_job_capable:\n cmd = fn(-num_splits)\n num_array_jobs = num_splits\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n parent_job_ids = [job_id]\n elif len(params) == 1 and not array_job_capable:\n num_array_jobs = None\n new_parent_job_ids = []\n for cmd in fn(num_splits):\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n new_parent_job_ids.append(job_id)\n parent_job_ids = copy.copy(new_parent_job_ids)\n elif len(params) == 0:\n cmd = fn()\n num_array_jobs = None\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n parent_job_ids = [job_id]\n else:\n if len(params) == 0:\n cmd = fn()\n elif len(params) == 1:\n cmd = fn(1)[0]\n num_array_jobs = 1\n job_id = submit_job(\n cmd=' '.join(cmd),\n job_name=job_name,\n parent_job_ids=parent_job_ids,\n num_array_jobs=num_array_jobs,\n use_gpu=use_gpu,\n job_queue=job_queue,\n job_def=job_def)\n parent_job_ids = [job_id]\n\n job_queue = None\n job_def = None\n\n def get_split_ind(self):\n return int(os.environ.get('AWS_BATCH_JOB_ARRAY_INDEX', 0))\n", "path": "rastervision_aws_batch/rastervision/aws_batch/aws_batch_runner.py"}]}
2,616
274
gh_patches_debug_23323
rasdani/github-patches
git_diff
facebookresearch__xformers-326
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Rotary embeddings convert queries and keys to float32 when using mixed precision training Not sure, if this is expected behaviour. The problem is that the cos_sin table in the RotaryEmbedding class is stored in float32 format, thus the returned queries and keys get converted to float32 aswell. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `xformers/components/positional_embedding/rotary.py` Content: ``` 1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved. 2 # 3 # This source code is licensed under the BSD license found in the 4 # LICENSE file in the root directory of this source tree. 5 6 7 # CREDITS: This implementation is inspired by GPT-NeoX https://github.com/EleutherAI/gpt-neox 8 # NOTE: Almost the same right now, moving parts to Triton is the next step 9 10 from typing import Tuple 11 12 import torch 13 14 15 def rotate_half(x): 16 x1, x2 = x.chunk(2, dim=-1) 17 return torch.cat((-x2, x1), dim=-1) 18 19 20 @torch.jit.script 21 def apply_rotary_pos_emb(x, cos, sin): 22 # NOTE: This could probably be moved to Triton 23 24 # Handle a possible sequence length mismatch in between q and k 25 cos = cos[:, :, : x.shape[-2], :] 26 sin = sin[:, :, : x.shape[-2], :] 27 28 return (x * cos) + (rotate_half(x) * sin) 29 30 31 class RotaryEmbedding(torch.nn.Module): 32 """ 33 The rotary position embeddings from RoFormer_ (Su et. al). 34 A crucial insight from the method is that the query and keys are 35 transformed by rotation matrices which depend on the relative positions. 36 37 Other implementations are available in the Rotary Transformer repo_ and in 38 GPT-NeoX_, GPT-NeoX was an inspiration 39 40 .. _RoFormer: https://arxiv.org/abs/2104.09864 41 .. _repo: https://github.com/ZhuiyiTechnology/roformer 42 .. _GPT-NeoX: https://github.com/EleutherAI/gpt-neox 43 44 45 .. 
warning: Please note that this embedding is not registered on purpose, as it is transformative 46 (it does not create the embedding dimension) and will likely be picked up (imported) on a ad-hoc basis 47 """ 48 49 def __init__(self, dim_model: int, *_, **__): 50 super().__init__() 51 # Generate and save the inverse frequency buffer (non trainable) 52 inv_freq = 1.0 / (10000 ** (torch.arange(0, dim_model, 2).float() / dim_model)) 53 self.register_buffer("inv_freq", inv_freq) 54 55 self._seq_len_cached = None 56 self._cos_cached = None 57 self._sin_cached = None 58 59 def _update_cos_sin_tables(self, x, seq_dimension=1): 60 seq_len = x.shape[seq_dimension] 61 62 # Reset the tables if the sequence length has changed, 63 # or if we're on a new device (possibly due to tracing for instance) 64 if seq_len != self._seq_len_cached or self._cos_cached.device != x.device: 65 self._seq_len_cached = seq_len 66 t = torch.arange(x.shape[seq_dimension], device=x.device).type_as( 67 self.inv_freq 68 ) 69 freqs = torch.einsum("i,j->ij", t, self.inv_freq) 70 emb = torch.cat((freqs, freqs), dim=-1).to(x.device) 71 72 self._cos_cached = emb.cos()[None, None, :, :] 73 self._sin_cached = emb.sin()[None, None, :, :] 74 75 return self._cos_cached, self._sin_cached 76 77 def forward( 78 self, q: torch.Tensor, k: torch.Tensor 79 ) -> Tuple[torch.Tensor, torch.Tensor]: 80 self._cos_cached, self._sin_cached = self._update_cos_sin_tables( 81 k, seq_dimension=-2 82 ) 83 84 return ( 85 apply_rotary_pos_emb(q, self._cos_cached, self._sin_cached), 86 apply_rotary_pos_emb(k, self._cos_cached, self._sin_cached), 87 ) 88 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/xformers/components/positional_embedding/rotary.py b/xformers/components/positional_embedding/rotary.py --- a/xformers/components/positional_embedding/rotary.py +++ b/xformers/components/positional_embedding/rotary.py @@ -61,16 +61,20 @@ # Reset the tables if the sequence length has changed, # or if we're on a new device (possibly due to tracing for instance) - if seq_len != self._seq_len_cached or self._cos_cached.device != x.device: + if ( + seq_len != self._seq_len_cached + or self._cos_cached.device != x.device + or self._cos_cached.dtype != x.dtype + ): self._seq_len_cached = seq_len - t = torch.arange(x.shape[seq_dimension], device=x.device).type_as( - self.inv_freq + t = torch.arange( + x.shape[seq_dimension], device=x.device, dtype=torch.float32 ) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) + freqs = torch.einsum("i,j->ij", t, self.inv_freq.to(x.dtype)) emb = torch.cat((freqs, freqs), dim=-1).to(x.device) - self._cos_cached = emb.cos()[None, None, :, :] - self._sin_cached = emb.sin()[None, None, :, :] + self._cos_cached = emb.cos()[None, None, :, :].to(x.dtype) + self._sin_cached = emb.sin()[None, None, :, :].to(x.dtype) return self._cos_cached, self._sin_cached
{"golden_diff": "diff --git a/xformers/components/positional_embedding/rotary.py b/xformers/components/positional_embedding/rotary.py\n--- a/xformers/components/positional_embedding/rotary.py\n+++ b/xformers/components/positional_embedding/rotary.py\n@@ -61,16 +61,20 @@\n \n # Reset the tables if the sequence length has changed,\n # or if we're on a new device (possibly due to tracing for instance)\n- if seq_len != self._seq_len_cached or self._cos_cached.device != x.device:\n+ if (\n+ seq_len != self._seq_len_cached\n+ or self._cos_cached.device != x.device\n+ or self._cos_cached.dtype != x.dtype\n+ ):\n self._seq_len_cached = seq_len\n- t = torch.arange(x.shape[seq_dimension], device=x.device).type_as(\n- self.inv_freq\n+ t = torch.arange(\n+ x.shape[seq_dimension], device=x.device, dtype=torch.float32\n )\n- freqs = torch.einsum(\"i,j->ij\", t, self.inv_freq)\n+ freqs = torch.einsum(\"i,j->ij\", t, self.inv_freq.to(x.dtype))\n emb = torch.cat((freqs, freqs), dim=-1).to(x.device)\n \n- self._cos_cached = emb.cos()[None, None, :, :]\n- self._sin_cached = emb.sin()[None, None, :, :]\n+ self._cos_cached = emb.cos()[None, None, :, :].to(x.dtype)\n+ self._sin_cached = emb.sin()[None, None, :, :].to(x.dtype)\n \n return self._cos_cached, self._sin_cached\n", "issue": "Rotary embeddings convert queries and keys to float32 when using mixed precision training\nNot sure, if this is expected behaviour. The problem is that the cos_sin table in the RotaryEmbedding class is stored in float32 format, thus the returned queries and keys get converted to float32 aswell.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This implementation is inspired by GPT-NeoX https://github.com/EleutherAI/gpt-neox\n# NOTE: Almost the same right now, moving parts to Triton is the next step\n\nfrom typing import Tuple\n\nimport torch\n\n\ndef rotate_half(x):\n x1, x2 = x.chunk(2, dim=-1)\n return torch.cat((-x2, x1), dim=-1)\n\n\[email protected]\ndef apply_rotary_pos_emb(x, cos, sin):\n # NOTE: This could probably be moved to Triton\n\n # Handle a possible sequence length mismatch in between q and k\n cos = cos[:, :, : x.shape[-2], :]\n sin = sin[:, :, : x.shape[-2], :]\n\n return (x * cos) + (rotate_half(x) * sin)\n\n\nclass RotaryEmbedding(torch.nn.Module):\n \"\"\"\n The rotary position embeddings from RoFormer_ (Su et. al).\n A crucial insight from the method is that the query and keys are\n transformed by rotation matrices which depend on the relative positions.\n\n Other implementations are available in the Rotary Transformer repo_ and in\n GPT-NeoX_, GPT-NeoX was an inspiration\n\n .. _RoFormer: https://arxiv.org/abs/2104.09864\n .. _repo: https://github.com/ZhuiyiTechnology/roformer\n .. _GPT-NeoX: https://github.com/EleutherAI/gpt-neox\n\n\n .. 
warning: Please note that this embedding is not registered on purpose, as it is transformative\n (it does not create the embedding dimension) and will likely be picked up (imported) on a ad-hoc basis\n \"\"\"\n\n def __init__(self, dim_model: int, *_, **__):\n super().__init__()\n # Generate and save the inverse frequency buffer (non trainable)\n inv_freq = 1.0 / (10000 ** (torch.arange(0, dim_model, 2).float() / dim_model))\n self.register_buffer(\"inv_freq\", inv_freq)\n\n self._seq_len_cached = None\n self._cos_cached = None\n self._sin_cached = None\n\n def _update_cos_sin_tables(self, x, seq_dimension=1):\n seq_len = x.shape[seq_dimension]\n\n # Reset the tables if the sequence length has changed,\n # or if we're on a new device (possibly due to tracing for instance)\n if seq_len != self._seq_len_cached or self._cos_cached.device != x.device:\n self._seq_len_cached = seq_len\n t = torch.arange(x.shape[seq_dimension], device=x.device).type_as(\n self.inv_freq\n )\n freqs = torch.einsum(\"i,j->ij\", t, self.inv_freq)\n emb = torch.cat((freqs, freqs), dim=-1).to(x.device)\n\n self._cos_cached = emb.cos()[None, None, :, :]\n self._sin_cached = emb.sin()[None, None, :, :]\n\n return self._cos_cached, self._sin_cached\n\n def forward(\n self, q: torch.Tensor, k: torch.Tensor\n ) -> Tuple[torch.Tensor, torch.Tensor]:\n self._cos_cached, self._sin_cached = self._update_cos_sin_tables(\n k, seq_dimension=-2\n )\n\n return (\n apply_rotary_pos_emb(q, self._cos_cached, self._sin_cached),\n apply_rotary_pos_emb(k, self._cos_cached, self._sin_cached),\n )\n", "path": "xformers/components/positional_embedding/rotary.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This implementation is inspired by GPT-NeoX https://github.com/EleutherAI/gpt-neox\n# NOTE: Almost the same right now, moving parts to Triton is the next step\n\nfrom typing import Tuple\n\nimport torch\n\n\ndef rotate_half(x):\n x1, x2 = x.chunk(2, dim=-1)\n return torch.cat((-x2, x1), dim=-1)\n\n\[email protected]\ndef apply_rotary_pos_emb(x, cos, sin):\n # NOTE: This could probably be moved to Triton\n\n # Handle a possible sequence length mismatch in between q and k\n cos = cos[:, :, : x.shape[-2], :]\n sin = sin[:, :, : x.shape[-2], :]\n\n return (x * cos) + (rotate_half(x) * sin)\n\n\nclass RotaryEmbedding(torch.nn.Module):\n \"\"\"\n The rotary position embeddings from RoFormer_ (Su et. al).\n A crucial insight from the method is that the query and keys are\n transformed by rotation matrices which depend on the relative positions.\n\n Other implementations are available in the Rotary Transformer repo_ and in\n GPT-NeoX_, GPT-NeoX was an inspiration\n\n .. _RoFormer: https://arxiv.org/abs/2104.09864\n .. _repo: https://github.com/ZhuiyiTechnology/roformer\n .. _GPT-NeoX: https://github.com/EleutherAI/gpt-neox\n\n\n .. 
warning: Please note that this embedding is not registered on purpose, as it is transformative\n (it does not create the embedding dimension) and will likely be picked up (imported) on a ad-hoc basis\n \"\"\"\n\n def __init__(self, dim_model: int, *_, **__):\n super().__init__()\n # Generate and save the inverse frequency buffer (non trainable)\n inv_freq = 1.0 / (10000 ** (torch.arange(0, dim_model, 2).float() / dim_model))\n self.register_buffer(\"inv_freq\", inv_freq)\n\n self._seq_len_cached = None\n self._cos_cached = None\n self._sin_cached = None\n\n def _update_cos_sin_tables(self, x, seq_dimension=1):\n seq_len = x.shape[seq_dimension]\n\n # Reset the tables if the sequence length has changed,\n # or if we're on a new device (possibly due to tracing for instance)\n if (\n seq_len != self._seq_len_cached\n or self._cos_cached.device != x.device\n or self._cos_cached.dtype != x.dtype\n ):\n self._seq_len_cached = seq_len\n t = torch.arange(\n x.shape[seq_dimension], device=x.device, dtype=torch.float32\n )\n freqs = torch.einsum(\"i,j->ij\", t, self.inv_freq.to(x.dtype))\n emb = torch.cat((freqs, freqs), dim=-1).to(x.device)\n\n self._cos_cached = emb.cos()[None, None, :, :].to(x.dtype)\n self._sin_cached = emb.sin()[None, None, :, :].to(x.dtype)\n\n return self._cos_cached, self._sin_cached\n\n def forward(\n self, q: torch.Tensor, k: torch.Tensor\n ) -> Tuple[torch.Tensor, torch.Tensor]:\n self._cos_cached, self._sin_cached = self._update_cos_sin_tables(\n k, seq_dimension=-2\n )\n\n return (\n apply_rotary_pos_emb(q, self._cos_cached, self._sin_cached),\n apply_rotary_pos_emb(k, self._cos_cached, self._sin_cached),\n )\n", "path": "xformers/components/positional_embedding/rotary.py"}]}
1,333
383
gh_patches_debug_13792
rasdani/github-patches
git_diff
kubeflow__pipelines-7985
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [bug] TFJob launcher pipeline task fails when `delete_finished_tfjob` flag is `True` https://github.com/kubeflow/pipelines/blob/127dab4d4671849d596c455f4619ea807d22f6ea/components/kubeflow/launcher/component.yaml#L19 When `delete_finished_tfjob` flag is `True`, TFJob launcher task fails with error below. ``` Traceback (most recent call last): File "/ml/launch_tfjob.py", line 136, in <module> main() File "/ml/launch_tfjob.py", line 133, in main tfjob.delete(args.name, args.namespace) File "/ml/launch_crd.py", line 115, in delete body) TypeError: delete_namespaced_custom_object() takes exactly 6 arguments (7 given) ``` I think it's some kind of kubernetes client SDK version issue in `nikenano/launchernew:latest` container image. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `components/kubeflow/common/launch_crd.py` Content: ``` 1 # Copyright 2019 kubeflow.org. 2 # Licensed under the Apache License, Version 2.0 (the "License"); 3 # you may not use this file except in compliance with the License. 4 # You may obtain a copy of the License at 5 # 6 # http://www.apache.org/licenses/LICENSE-2.0 7 # 8 # Unless required by applicable law or agreed to in writing, software 9 # distributed under the License is distributed on an "AS IS" BASIS, 10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11 # See the License for the specific language governing permissions and 12 # limitations under the License. 13 14 import datetime 15 import json 16 import logging 17 import multiprocessing 18 import time 19 20 from kubernetes import client as k8s_client 21 from kubernetes.client import rest 22 23 logger = logging.getLogger(__name__) 24 25 class K8sCR(object): 26 def __init__(self, group, plural, version, client): 27 self.group = group 28 self.plural = plural 29 self.version = version 30 self.client = k8s_client.CustomObjectsApi(client) 31 32 def wait_for_condition(self, 33 namespace, 34 name, 35 expected_conditions=[], 36 timeout=datetime.timedelta(days=365), 37 polling_interval=datetime.timedelta(seconds=30), 38 status_callback=None): 39 """Waits until any of the specified conditions occur. 40 Args: 41 namespace: namespace for the CR. 42 name: Name of the CR. 43 expected_conditions: A list of conditions. Function waits until any of the 44 supplied conditions is reached. 45 timeout: How long to wait for the CR. 46 polling_interval: How often to poll for the status of the CR. 47 status_callback: (Optional): Callable. If supplied this callable is 48 invoked after we poll the CR. Callable takes a single argument which 49 is the CR. 
50 """ 51 end_time = datetime.datetime.now() + timeout 52 while True: 53 try: 54 results = self.client.get_namespaced_custom_object( 55 self.group, self.version, namespace, self.plural, name) 56 except Exception as e: 57 logger.error("There was a problem waiting for %s/%s %s in namespace %s; Exception: %s", 58 self.group, self.plural, name, namespace, e) 59 raise 60 61 if results: 62 if status_callback: 63 status_callback(results) 64 expected, condition = self.is_expected_conditions(results, expected_conditions) 65 if expected: 66 logger.info("%s/%s %s in namespace %s has reached the expected condition: %s.", 67 self.group, self.plural, name, namespace, condition) 68 return results 69 else: 70 if condition: 71 logger.info("Current condition of %s/%s %s in namespace %s is %s.", 72 self.group, self.plural, name, namespace, condition) 73 74 if datetime.datetime.now() + polling_interval > end_time: 75 raise Exception( 76 "Timeout waiting for {0}/{1} {2} in namespace {3} to enter one of the " 77 "conditions {4}.".format(self.group, self.plural, name, namespace, expected_conditions)) 78 79 time.sleep(polling_interval.seconds) 80 81 def is_expected_conditions(self, inst, expected_conditions): 82 conditions = inst.get('status', {}).get("conditions") 83 if not conditions: 84 return False, "" 85 if conditions[-1]["type"] in expected_conditions and conditions[-1]["status"] == "True": 86 return True, conditions[-1]["type"] 87 else: 88 return False, conditions[-1]["type"] 89 90 def create(self, spec): 91 """Create a CR. 92 Args: 93 spec: The spec for the CR. 94 """ 95 try: 96 # Create a Resource 97 namespace = spec["metadata"].get("namespace", "default") 98 logger.info("Creating %s/%s %s in namespace %s.", 99 self.group, self.plural, spec["metadata"]["name"], namespace) 100 api_response = self.client.create_namespaced_custom_object( 101 self.group, self.version, namespace, self.plural, spec) 102 logger.info("Created %s/%s %s in namespace %s.", 103 self.group, self.plural, spec["metadata"]["name"], namespace) 104 return api_response 105 except rest.ApiException as e: 106 self._log_and_raise_exception(e, "create") 107 108 def delete(self, name, namespace): 109 try: 110 body = { 111 # Set garbage collection so that CR won't be deleted until all 112 # owned references are deleted. 113 "propagationPolicy": "Foreground", 114 } 115 logger.info("Deleteing %s/%s %s in namespace %s.", 116 self.group, self.plural, name, namespace) 117 api_response = self.client.delete_namespaced_custom_object( 118 self.group, 119 self.version, 120 namespace, 121 self.plural, 122 name, 123 body) 124 logger.info("Deleted %s/%s %s in namespace %s.", 125 self.group, self.plural, name, namespace) 126 return api_response 127 except rest.ApiException as e: 128 self._log_and_raise_exception(e, "delete") 129 130 def _log_and_raise_exception(self, ex, action): 131 message = "" 132 if ex.message: 133 message = ex.message 134 if ex.body: 135 try: 136 body = json.loads(ex.body) 137 message = body.get("message") 138 except ValueError: 139 logger.error("Exception when %s %s/%s: %s", action, self.group, self.plural, ex.body) 140 raise 141 142 logger.error("Exception when %s %s/%s: %s", action, self.group, self.plural, ex.body) 143 raise ex 144 145 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/components/kubeflow/common/launch_crd.py b/components/kubeflow/common/launch_crd.py --- a/components/kubeflow/common/launch_crd.py +++ b/components/kubeflow/common/launch_crd.py @@ -115,12 +115,12 @@ logger.info("Deleteing %s/%s %s in namespace %s.", self.group, self.plural, name, namespace) api_response = self.client.delete_namespaced_custom_object( - self.group, - self.version, - namespace, - self.plural, - name, - body) + group=self.group, + version=self.version, + namespace=namespace, + plural=self.plural, + name=name, + body=body) logger.info("Deleted %s/%s %s in namespace %s.", self.group, self.plural, name, namespace) return api_response
{"golden_diff": "diff --git a/components/kubeflow/common/launch_crd.py b/components/kubeflow/common/launch_crd.py\n--- a/components/kubeflow/common/launch_crd.py\n+++ b/components/kubeflow/common/launch_crd.py\n@@ -115,12 +115,12 @@\n logger.info(\"Deleteing %s/%s %s in namespace %s.\",\n self.group, self.plural, name, namespace)\n api_response = self.client.delete_namespaced_custom_object(\n- self.group,\n- self.version,\n- namespace,\n- self.plural,\n- name,\n- body)\n+ group=self.group,\n+ version=self.version,\n+ namespace=namespace,\n+ plural=self.plural,\n+ name=name,\n+ body=body)\n logger.info(\"Deleted %s/%s %s in namespace %s.\",\n self.group, self.plural, name, namespace)\n return api_response\n", "issue": "[bug] TFJob launcher pipeline task fails when `delete_finished_tfjob` flag is `True`\nhttps://github.com/kubeflow/pipelines/blob/127dab4d4671849d596c455f4619ea807d22f6ea/components/kubeflow/launcher/component.yaml#L19\r\n\r\nWhen `delete_finished_tfjob` flag is `True`, TFJob launcher task fails with error below.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/ml/launch_tfjob.py\", line 136, in <module>\r\n main()\r\n File \"/ml/launch_tfjob.py\", line 133, in main\r\n tfjob.delete(args.name, args.namespace)\r\n File \"/ml/launch_crd.py\", line 115, in delete\r\n body)\r\nTypeError: delete_namespaced_custom_object() takes exactly 6 arguments (7 given)\r\n```\r\n\r\nI think it's some kind of kubernetes client SDK version issue in `nikenano/launchernew:latest` container image.\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport datetime\nimport json\nimport logging\nimport multiprocessing\nimport time\n\nfrom kubernetes import client as k8s_client\nfrom kubernetes.client import rest\n\nlogger = logging.getLogger(__name__)\n\nclass K8sCR(object):\n def __init__(self, group, plural, version, client):\n self.group = group\n self.plural = plural\n self.version = version\n self.client = k8s_client.CustomObjectsApi(client)\n\n def wait_for_condition(self,\n namespace,\n name,\n expected_conditions=[],\n timeout=datetime.timedelta(days=365),\n polling_interval=datetime.timedelta(seconds=30),\n status_callback=None):\n \"\"\"Waits until any of the specified conditions occur.\n Args:\n namespace: namespace for the CR.\n name: Name of the CR.\n expected_conditions: A list of conditions. Function waits until any of the\n supplied conditions is reached.\n timeout: How long to wait for the CR.\n polling_interval: How often to poll for the status of the CR.\n status_callback: (Optional): Callable. If supplied this callable is\n invoked after we poll the CR. 
Callable takes a single argument which\n is the CR.\n \"\"\"\n end_time = datetime.datetime.now() + timeout\n while True:\n try:\n results = self.client.get_namespaced_custom_object(\n self.group, self.version, namespace, self.plural, name)\n except Exception as e:\n logger.error(\"There was a problem waiting for %s/%s %s in namespace %s; Exception: %s\",\n self.group, self.plural, name, namespace, e)\n raise\n\n if results:\n if status_callback:\n status_callback(results)\n expected, condition = self.is_expected_conditions(results, expected_conditions)\n if expected:\n logger.info(\"%s/%s %s in namespace %s has reached the expected condition: %s.\",\n self.group, self.plural, name, namespace, condition)\n return results\n else:\n if condition:\n logger.info(\"Current condition of %s/%s %s in namespace %s is %s.\",\n self.group, self.plural, name, namespace, condition)\n\n if datetime.datetime.now() + polling_interval > end_time:\n raise Exception(\n \"Timeout waiting for {0}/{1} {2} in namespace {3} to enter one of the \"\n \"conditions {4}.\".format(self.group, self.plural, name, namespace, expected_conditions))\n\n time.sleep(polling_interval.seconds)\n\n def is_expected_conditions(self, inst, expected_conditions):\n conditions = inst.get('status', {}).get(\"conditions\")\n if not conditions:\n return False, \"\"\n if conditions[-1][\"type\"] in expected_conditions and conditions[-1][\"status\"] == \"True\":\n return True, conditions[-1][\"type\"]\n else:\n return False, conditions[-1][\"type\"]\n\n def create(self, spec):\n \"\"\"Create a CR.\n Args:\n spec: The spec for the CR.\n \"\"\"\n try:\n # Create a Resource\n namespace = spec[\"metadata\"].get(\"namespace\", \"default\")\n logger.info(\"Creating %s/%s %s in namespace %s.\",\n self.group, self.plural, spec[\"metadata\"][\"name\"], namespace)\n api_response = self.client.create_namespaced_custom_object(\n self.group, self.version, namespace, self.plural, spec)\n logger.info(\"Created %s/%s %s in namespace %s.\",\n self.group, self.plural, spec[\"metadata\"][\"name\"], namespace)\n return api_response\n except rest.ApiException as e:\n self._log_and_raise_exception(e, \"create\")\n\n def delete(self, name, namespace):\n try:\n body = {\n # Set garbage collection so that CR won't be deleted until all\n # owned references are deleted.\n \"propagationPolicy\": \"Foreground\",\n }\n logger.info(\"Deleteing %s/%s %s in namespace %s.\",\n self.group, self.plural, name, namespace)\n api_response = self.client.delete_namespaced_custom_object(\n self.group,\n self.version,\n namespace,\n self.plural,\n name,\n body)\n logger.info(\"Deleted %s/%s %s in namespace %s.\",\n self.group, self.plural, name, namespace)\n return api_response\n except rest.ApiException as e:\n self._log_and_raise_exception(e, \"delete\")\n\n def _log_and_raise_exception(self, ex, action):\n message = \"\"\n if ex.message:\n message = ex.message\n if ex.body:\n try:\n body = json.loads(ex.body)\n message = body.get(\"message\")\n except ValueError:\n logger.error(\"Exception when %s %s/%s: %s\", action, self.group, self.plural, ex.body)\n raise\n\n logger.error(\"Exception when %s %s/%s: %s\", action, self.group, self.plural, ex.body)\n raise ex\n\n", "path": "components/kubeflow/common/launch_crd.py"}], "after_files": [{"content": "# Copyright 2019 kubeflow.org.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# 
http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport datetime\nimport json\nimport logging\nimport multiprocessing\nimport time\n\nfrom kubernetes import client as k8s_client\nfrom kubernetes.client import rest\n\nlogger = logging.getLogger(__name__)\n\nclass K8sCR(object):\n def __init__(self, group, plural, version, client):\n self.group = group\n self.plural = plural\n self.version = version\n self.client = k8s_client.CustomObjectsApi(client)\n\n def wait_for_condition(self,\n namespace,\n name,\n expected_conditions=[],\n timeout=datetime.timedelta(days=365),\n polling_interval=datetime.timedelta(seconds=30),\n status_callback=None):\n \"\"\"Waits until any of the specified conditions occur.\n Args:\n namespace: namespace for the CR.\n name: Name of the CR.\n expected_conditions: A list of conditions. Function waits until any of the\n supplied conditions is reached.\n timeout: How long to wait for the CR.\n polling_interval: How often to poll for the status of the CR.\n status_callback: (Optional): Callable. If supplied this callable is\n invoked after we poll the CR. Callable takes a single argument which\n is the CR.\n \"\"\"\n end_time = datetime.datetime.now() + timeout\n while True:\n try:\n results = self.client.get_namespaced_custom_object(\n self.group, self.version, namespace, self.plural, name)\n except Exception as e:\n logger.error(\"There was a problem waiting for %s/%s %s in namespace %s; Exception: %s\",\n self.group, self.plural, name, namespace, e)\n raise\n\n if results:\n if status_callback:\n status_callback(results)\n expected, condition = self.is_expected_conditions(results, expected_conditions)\n if expected:\n logger.info(\"%s/%s %s in namespace %s has reached the expected condition: %s.\",\n self.group, self.plural, name, namespace, condition)\n return results\n else:\n if condition:\n logger.info(\"Current condition of %s/%s %s in namespace %s is %s.\",\n self.group, self.plural, name, namespace, condition)\n\n if datetime.datetime.now() + polling_interval > end_time:\n raise Exception(\n \"Timeout waiting for {0}/{1} {2} in namespace {3} to enter one of the \"\n \"conditions {4}.\".format(self.group, self.plural, name, namespace, expected_conditions))\n\n time.sleep(polling_interval.seconds)\n\n def is_expected_conditions(self, inst, expected_conditions):\n conditions = inst.get('status', {}).get(\"conditions\")\n if not conditions:\n return False, \"\"\n if conditions[-1][\"type\"] in expected_conditions and conditions[-1][\"status\"] == \"True\":\n return True, conditions[-1][\"type\"]\n else:\n return False, conditions[-1][\"type\"]\n\n def create(self, spec):\n \"\"\"Create a CR.\n Args:\n spec: The spec for the CR.\n \"\"\"\n try:\n # Create a Resource\n namespace = spec[\"metadata\"].get(\"namespace\", \"default\")\n logger.info(\"Creating %s/%s %s in namespace %s.\",\n self.group, self.plural, spec[\"metadata\"][\"name\"], namespace)\n api_response = self.client.create_namespaced_custom_object(\n self.group, self.version, namespace, self.plural, spec)\n logger.info(\"Created %s/%s %s in namespace %s.\",\n self.group, self.plural, spec[\"metadata\"][\"name\"], namespace)\n return api_response\n except rest.ApiException as e:\n 
self._log_and_raise_exception(e, \"create\")\n\n def delete(self, name, namespace):\n try:\n body = {\n # Set garbage collection so that CR won't be deleted until all\n # owned references are deleted.\n \"propagationPolicy\": \"Foreground\",\n }\n logger.info(\"Deleteing %s/%s %s in namespace %s.\",\n self.group, self.plural, name, namespace)\n api_response = self.client.delete_namespaced_custom_object(\n group=self.group,\n version=self.version,\n namespace=namespace,\n plural=self.plural,\n name=name,\n body=body)\n logger.info(\"Deleted %s/%s %s in namespace %s.\",\n self.group, self.plural, name, namespace)\n return api_response\n except rest.ApiException as e:\n self._log_and_raise_exception(e, \"delete\")\n\n def _log_and_raise_exception(self, ex, action):\n message = \"\"\n if ex.message:\n message = ex.message\n if ex.body:\n try:\n body = json.loads(ex.body)\n message = body.get(\"message\")\n except ValueError:\n logger.error(\"Exception when %s %s/%s: %s\", action, self.group, self.plural, ex.body)\n raise\n\n logger.error(\"Exception when %s %s/%s: %s\", action, self.group, self.plural, ex.body)\n raise ex\n\n", "path": "components/kubeflow/common/launch_crd.py"}]}
2,035
210
gh_patches_debug_18196
rasdani/github-patches
git_diff
joke2k__faker-1443
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Adding ABA Routing Number to Bank Provider * Faker version:8.1.2 Is there any interest in adding ABA routing numbers to the Bank provider? https://en.wikipedia.org/wiki/ABA_routing_transit_number This would be helpful for my use case and have code to generate but wanted to check before putting more effort in. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `faker/providers/bank/__init__.py` Content: ``` 1 import re 2 import string 3 4 from string import ascii_uppercase 5 6 from .. import BaseProvider 7 8 localized = True 9 default_locale = 'en_GB' 10 11 12 class Provider(BaseProvider): 13 """Implement default bank provider for Faker. 14 15 .. important:: 16 Bank codes, account numbers, and other ID's generated by this provider 17 are only valid in form, i.e. they conform to some standard/format, are 18 of the expected lengths, and have valid checksums (where applicable). 19 Results generated that turn out to be valid in real life are purely 20 coincidental. 21 22 Sources: 23 24 - https://en.wikipedia.org/wiki/International_Bank_Account_Number 25 - https://www.theswiftcodes.com/swift-code-checker/ 26 """ 27 28 ALPHA = {c: str(ord(c) % 55) for c in string.ascii_uppercase} 29 bban_format = '????#############' 30 country_code = 'GB' 31 32 def bank_country(self): 33 """Generate the bank provider's ISO 3166-1 alpha-2 country code.""" 34 return self.country_code 35 36 def bban(self): 37 """Generate a Basic Bank Account Number (BBAN).""" 38 temp = re.sub(r'\?', 39 lambda x: self.random_element(ascii_uppercase), 40 self.bban_format) 41 return self.numerify(temp) 42 43 def iban(self): 44 """Generate an International Bank Account Number (IBAN).""" 45 bban = self.bban() 46 47 check = bban + self.country_code + '00' 48 check = int(''.join(self.ALPHA.get(c, c) for c in check)) 49 check = 98 - (check % 97) 50 check = str(check).zfill(2) 51 52 return self.country_code + check + bban 53 54 def swift8(self, use_dataset=False): 55 """Generate an 8-digit SWIFT code. 56 57 This method uses |swift| under the hood with the ``length`` argument set 58 to ``8`` and with the ``primary`` argument omitted. All 8-digit SWIFT 59 codes already refer to the primary branch/office. 60 61 :sample: 62 :sample: use_dataset=True 63 """ 64 return self.swift(length=8, use_dataset=use_dataset) 65 66 def swift11(self, primary=None, use_dataset=False): 67 """Generate an 11-digit SWIFT code. 68 69 This method uses |swift| under the hood with the ``length`` argument set 70 to ``11``. If ``primary`` is set to ``True``, the SWIFT code will always 71 end with ``'XXX'``. All 11-digit SWIFT codes use this convention to 72 refer to the primary branch/office. 73 74 :sample: 75 :sample: use_dataset=True 76 """ 77 return self.swift(length=11, primary=primary, use_dataset=use_dataset) 78 79 def swift(self, length=None, primary=None, use_dataset=False): 80 """Generate a SWIFT code. 81 82 SWIFT codes, reading from left to right, are composed of a 4 alphabet 83 character bank code, a 2 alphabet character country code, a 2 84 alphanumeric location code, and an optional 3 alphanumeric branch code. 85 This means SWIFT codes can only have 8 or 11 characters, so the value of 86 ``length`` can only be ``None`` or the integers ``8`` or ``11``. If the 87 value is ``None``, then a value of ``8`` or ``11`` will randomly be 88 assigned. 
89 90 Because all 8-digit SWIFT codes already refer to the primary branch or 91 office, the ``primary`` argument only has an effect if the value of 92 ``length`` is ``11``. If ``primary`` is ``True`` and ``length`` is 93 ``11``, the 11-digit SWIFT codes generated will always end in ``'XXX'`` 94 to denote that they belong to primary branches/offices. 95 96 For extra authenticity, localized providers may opt to include SWIFT 97 bank codes, location codes, and branch codes used in their respective 98 locales. If ``use_dataset`` is ``True``, this method will generate SWIFT 99 codes based on those locale-specific codes if included. If those codes 100 were not included, then it will behave as if ``use_dataset`` were 101 ``False``, and in that mode, all those codes will just be randomly 102 generated as per the specification. 103 104 :sample: 105 :sample: length=8 106 :sample: length=8, use_dataset=True 107 :sample: length=11 108 :sample: length=11, primary=True 109 :sample: length=11, use_dataset=True 110 :sample: length=11, primary=True, use_dataset=True 111 """ 112 if length is None: 113 length = self.random_element((8, 11)) 114 if length not in (8, 11): 115 raise AssertionError('length can only be 8 or 11') 116 117 if use_dataset and hasattr(self, 'swift_bank_codes'): 118 bank_code = self.random_element(self.swift_bank_codes) 119 else: 120 bank_code = self.lexify('????', letters=string.ascii_uppercase) 121 122 if use_dataset and hasattr(self, 'swift_location_codes'): 123 location_code = self.random_element(self.swift_location_codes) 124 else: 125 location_code = self.lexify('??', letters=string.ascii_uppercase + string.digits) 126 127 if length == 8: 128 return bank_code + self.country_code + location_code 129 130 if primary: 131 branch_code = 'XXX' 132 elif use_dataset and hasattr(self, 'swift_branch_codes'): 133 branch_code = self.random_element(self.swift_branch_codes) 134 else: 135 branch_code = self.lexify('???', letters=string.ascii_uppercase + string.digits) 136 137 return bank_code + self.country_code + location_code + branch_code 138 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/faker/providers/bank/__init__.py b/faker/providers/bank/__init__.py --- a/faker/providers/bank/__init__.py +++ b/faker/providers/bank/__init__.py @@ -1,6 +1,7 @@ import re import string +from math import ceil from string import ascii_uppercase from .. import BaseProvider @@ -29,6 +30,19 @@ bban_format = '????#############' country_code = 'GB' + def aba(self): + """Generate an ABA routing transit number.""" + fed_num = self.random_int(min=1, max=12) + rand = self.numerify('######') + aba = f"{fed_num:02}{rand}" + + # calculate check digit + d = [int(n) for n in aba] + chk_digit = 3*(d[0] + d[3] + d[6]) + 7*(d[1] + d[4] + d[7]) + d[2] + d[5] + chk_digit = ceil(chk_digit/10)*10 - chk_digit + + return f"{aba}{chk_digit}" + def bank_country(self): """Generate the bank provider's ISO 3166-1 alpha-2 country code.""" return self.country_code
{"golden_diff": "diff --git a/faker/providers/bank/__init__.py b/faker/providers/bank/__init__.py\n--- a/faker/providers/bank/__init__.py\n+++ b/faker/providers/bank/__init__.py\n@@ -1,6 +1,7 @@\n import re\n import string\n \n+from math import ceil\n from string import ascii_uppercase\n \n from .. import BaseProvider\n@@ -29,6 +30,19 @@\n bban_format = '????#############'\n country_code = 'GB'\n \n+ def aba(self):\n+ \"\"\"Generate an ABA routing transit number.\"\"\"\n+ fed_num = self.random_int(min=1, max=12)\n+ rand = self.numerify('######')\n+ aba = f\"{fed_num:02}{rand}\"\n+\n+ # calculate check digit\n+ d = [int(n) for n in aba]\n+ chk_digit = 3*(d[0] + d[3] + d[6]) + 7*(d[1] + d[4] + d[7]) + d[2] + d[5]\n+ chk_digit = ceil(chk_digit/10)*10 - chk_digit\n+\n+ return f\"{aba}{chk_digit}\"\n+\n def bank_country(self):\n \"\"\"Generate the bank provider's ISO 3166-1 alpha-2 country code.\"\"\"\n return self.country_code\n", "issue": "Adding ABA Routing Number to Bank Provider\n* Faker version:8.1.2\r\n\r\n\r\nIs there any interest in adding ABA routing numbers to the Bank provider? https://en.wikipedia.org/wiki/ABA_routing_transit_number\r\n\r\nThis would be helpful for my use case and have code to generate but wanted to check before putting more effort in.\n", "before_files": [{"content": "import re\nimport string\n\nfrom string import ascii_uppercase\n\nfrom .. import BaseProvider\n\nlocalized = True\ndefault_locale = 'en_GB'\n\n\nclass Provider(BaseProvider):\n \"\"\"Implement default bank provider for Faker.\n\n .. important::\n Bank codes, account numbers, and other ID's generated by this provider\n are only valid in form, i.e. they conform to some standard/format, are\n of the expected lengths, and have valid checksums (where applicable).\n Results generated that turn out to be valid in real life are purely\n coincidental.\n\n Sources:\n\n - https://en.wikipedia.org/wiki/International_Bank_Account_Number\n - https://www.theswiftcodes.com/swift-code-checker/\n \"\"\"\n\n ALPHA = {c: str(ord(c) % 55) for c in string.ascii_uppercase}\n bban_format = '????#############'\n country_code = 'GB'\n\n def bank_country(self):\n \"\"\"Generate the bank provider's ISO 3166-1 alpha-2 country code.\"\"\"\n return self.country_code\n\n def bban(self):\n \"\"\"Generate a Basic Bank Account Number (BBAN).\"\"\"\n temp = re.sub(r'\\?',\n lambda x: self.random_element(ascii_uppercase),\n self.bban_format)\n return self.numerify(temp)\n\n def iban(self):\n \"\"\"Generate an International Bank Account Number (IBAN).\"\"\"\n bban = self.bban()\n\n check = bban + self.country_code + '00'\n check = int(''.join(self.ALPHA.get(c, c) for c in check))\n check = 98 - (check % 97)\n check = str(check).zfill(2)\n\n return self.country_code + check + bban\n\n def swift8(self, use_dataset=False):\n \"\"\"Generate an 8-digit SWIFT code.\n\n This method uses |swift| under the hood with the ``length`` argument set\n to ``8`` and with the ``primary`` argument omitted. All 8-digit SWIFT\n codes already refer to the primary branch/office.\n\n :sample:\n :sample: use_dataset=True\n \"\"\"\n return self.swift(length=8, use_dataset=use_dataset)\n\n def swift11(self, primary=None, use_dataset=False):\n \"\"\"Generate an 11-digit SWIFT code.\n\n This method uses |swift| under the hood with the ``length`` argument set\n to ``11``. If ``primary`` is set to ``True``, the SWIFT code will always\n end with ``'XXX'``. 
All 11-digit SWIFT codes use this convention to\n refer to the primary branch/office.\n\n :sample:\n :sample: use_dataset=True\n \"\"\"\n return self.swift(length=11, primary=primary, use_dataset=use_dataset)\n\n def swift(self, length=None, primary=None, use_dataset=False):\n \"\"\"Generate a SWIFT code.\n\n SWIFT codes, reading from left to right, are composed of a 4 alphabet\n character bank code, a 2 alphabet character country code, a 2\n alphanumeric location code, and an optional 3 alphanumeric branch code.\n This means SWIFT codes can only have 8 or 11 characters, so the value of\n ``length`` can only be ``None`` or the integers ``8`` or ``11``. If the\n value is ``None``, then a value of ``8`` or ``11`` will randomly be\n assigned.\n\n Because all 8-digit SWIFT codes already refer to the primary branch or\n office, the ``primary`` argument only has an effect if the value of\n ``length`` is ``11``. If ``primary`` is ``True`` and ``length`` is\n ``11``, the 11-digit SWIFT codes generated will always end in ``'XXX'``\n to denote that they belong to primary branches/offices.\n\n For extra authenticity, localized providers may opt to include SWIFT\n bank codes, location codes, and branch codes used in their respective\n locales. If ``use_dataset`` is ``True``, this method will generate SWIFT\n codes based on those locale-specific codes if included. If those codes\n were not included, then it will behave as if ``use_dataset`` were\n ``False``, and in that mode, all those codes will just be randomly\n generated as per the specification.\n\n :sample:\n :sample: length=8\n :sample: length=8, use_dataset=True\n :sample: length=11\n :sample: length=11, primary=True\n :sample: length=11, use_dataset=True\n :sample: length=11, primary=True, use_dataset=True\n \"\"\"\n if length is None:\n length = self.random_element((8, 11))\n if length not in (8, 11):\n raise AssertionError('length can only be 8 or 11')\n\n if use_dataset and hasattr(self, 'swift_bank_codes'):\n bank_code = self.random_element(self.swift_bank_codes)\n else:\n bank_code = self.lexify('????', letters=string.ascii_uppercase)\n\n if use_dataset and hasattr(self, 'swift_location_codes'):\n location_code = self.random_element(self.swift_location_codes)\n else:\n location_code = self.lexify('??', letters=string.ascii_uppercase + string.digits)\n\n if length == 8:\n return bank_code + self.country_code + location_code\n\n if primary:\n branch_code = 'XXX'\n elif use_dataset and hasattr(self, 'swift_branch_codes'):\n branch_code = self.random_element(self.swift_branch_codes)\n else:\n branch_code = self.lexify('???', letters=string.ascii_uppercase + string.digits)\n\n return bank_code + self.country_code + location_code + branch_code\n", "path": "faker/providers/bank/__init__.py"}], "after_files": [{"content": "import re\nimport string\n\nfrom math import ceil\nfrom string import ascii_uppercase\n\nfrom .. import BaseProvider\n\nlocalized = True\ndefault_locale = 'en_GB'\n\n\nclass Provider(BaseProvider):\n \"\"\"Implement default bank provider for Faker.\n\n .. important::\n Bank codes, account numbers, and other ID's generated by this provider\n are only valid in form, i.e. 
they conform to some standard/format, are\n of the expected lengths, and have valid checksums (where applicable).\n Results generated that turn out to be valid in real life are purely\n coincidental.\n\n Sources:\n\n - https://en.wikipedia.org/wiki/International_Bank_Account_Number\n - https://www.theswiftcodes.com/swift-code-checker/\n \"\"\"\n\n ALPHA = {c: str(ord(c) % 55) for c in string.ascii_uppercase}\n bban_format = '????#############'\n country_code = 'GB'\n\n def aba(self):\n \"\"\"Generate an ABA routing transit number.\"\"\"\n fed_num = self.random_int(min=1, max=12)\n rand = self.numerify('######')\n aba = f\"{fed_num:02}{rand}\"\n\n # calculate check digit\n d = [int(n) for n in aba]\n chk_digit = 3*(d[0] + d[3] + d[6]) + 7*(d[1] + d[4] + d[7]) + d[2] + d[5]\n chk_digit = ceil(chk_digit/10)*10 - chk_digit\n\n return f\"{aba}{chk_digit}\"\n\n def bank_country(self):\n \"\"\"Generate the bank provider's ISO 3166-1 alpha-2 country code.\"\"\"\n return self.country_code\n\n def bban(self):\n \"\"\"Generate a Basic Bank Account Number (BBAN).\"\"\"\n temp = re.sub(r'\\?',\n lambda x: self.random_element(ascii_uppercase),\n self.bban_format)\n return self.numerify(temp)\n\n def iban(self):\n \"\"\"Generate an International Bank Account Number (IBAN).\"\"\"\n bban = self.bban()\n\n check = bban + self.country_code + '00'\n check = int(''.join(self.ALPHA.get(c, c) for c in check))\n check = 98 - (check % 97)\n check = str(check).zfill(2)\n\n return self.country_code + check + bban\n\n def swift8(self, use_dataset=False):\n \"\"\"Generate an 8-digit SWIFT code.\n\n This method uses |swift| under the hood with the ``length`` argument set\n to ``8`` and with the ``primary`` argument omitted. All 8-digit SWIFT\n codes already refer to the primary branch/office.\n\n :sample:\n :sample: use_dataset=True\n \"\"\"\n return self.swift(length=8, use_dataset=use_dataset)\n\n def swift11(self, primary=None, use_dataset=False):\n \"\"\"Generate an 11-digit SWIFT code.\n\n This method uses |swift| under the hood with the ``length`` argument set\n to ``11``. If ``primary`` is set to ``True``, the SWIFT code will always\n end with ``'XXX'``. All 11-digit SWIFT codes use this convention to\n refer to the primary branch/office.\n\n :sample:\n :sample: use_dataset=True\n \"\"\"\n return self.swift(length=11, primary=primary, use_dataset=use_dataset)\n\n def swift(self, length=None, primary=None, use_dataset=False):\n \"\"\"Generate a SWIFT code.\n\n SWIFT codes, reading from left to right, are composed of a 4 alphabet\n character bank code, a 2 alphabet character country code, a 2\n alphanumeric location code, and an optional 3 alphanumeric branch code.\n This means SWIFT codes can only have 8 or 11 characters, so the value of\n ``length`` can only be ``None`` or the integers ``8`` or ``11``. If the\n value is ``None``, then a value of ``8`` or ``11`` will randomly be\n assigned.\n\n Because all 8-digit SWIFT codes already refer to the primary branch or\n office, the ``primary`` argument only has an effect if the value of\n ``length`` is ``11``. If ``primary`` is ``True`` and ``length`` is\n ``11``, the 11-digit SWIFT codes generated will always end in ``'XXX'``\n to denote that they belong to primary branches/offices.\n\n For extra authenticity, localized providers may opt to include SWIFT\n bank codes, location codes, and branch codes used in their respective\n locales. If ``use_dataset`` is ``True``, this method will generate SWIFT\n codes based on those locale-specific codes if included. 
If those codes\n were not included, then it will behave as if ``use_dataset`` were\n ``False``, and in that mode, all those codes will just be randomly\n generated as per the specification.\n\n :sample:\n :sample: length=8\n :sample: length=8, use_dataset=True\n :sample: length=11\n :sample: length=11, primary=True\n :sample: length=11, use_dataset=True\n :sample: length=11, primary=True, use_dataset=True\n \"\"\"\n if length is None:\n length = self.random_element((8, 11))\n if length not in (8, 11):\n raise AssertionError('length can only be 8 or 11')\n\n if use_dataset and hasattr(self, 'swift_bank_codes'):\n bank_code = self.random_element(self.swift_bank_codes)\n else:\n bank_code = self.lexify('????', letters=string.ascii_uppercase)\n\n if use_dataset and hasattr(self, 'swift_location_codes'):\n location_code = self.random_element(self.swift_location_codes)\n else:\n location_code = self.lexify('??', letters=string.ascii_uppercase + string.digits)\n\n if length == 8:\n return bank_code + self.country_code + location_code\n\n if primary:\n branch_code = 'XXX'\n elif use_dataset and hasattr(self, 'swift_branch_codes'):\n branch_code = self.random_element(self.swift_branch_codes)\n else:\n branch_code = self.lexify('???', letters=string.ascii_uppercase + string.digits)\n\n return bank_code + self.country_code + location_code + branch_code\n", "path": "faker/providers/bank/__init__.py"}]}
1,928
308
gh_patches_debug_5161
rasdani/github-patches
git_diff
kubeflow__pipelines-6683
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [sdk] dependency conflict with tensorflow 2.6.0 and seldon-core ### Environment * KFP version: 1.7 * KFP SDK version: 1.8.3 * All dependencies version: ``` [~]$ pip list | grep kfp kfp 1.8.3 kfp-pipeline-spec 0.1.11 kfp-server-api 1.7.0 ``` kfp==1.8.3 collides with tensorflow==2.6 because it requires https://github.com/kubeflow/pipelines/blob/220d79df66e31bbd93c409fb361e0463bde4aeac/sdk/python/setup.py#L56 while tensorflow needs ``` Warning!!! Possibly conflicting dependencies found: * tensorflow==2.6.0 - typing-extensions [required: ~=3.7.4, installed: 3.10.0.2] ``` https://github.com/tensorflow/tensorflow/blob/421fba8888bb8f8724bc2e35ca2fdcde16e1bfe5/tensorflow/tools/pip_package/setup.py#L90 is `'typing-extensions>=3.7.4,<4;python_version<"3.9"'` not enough? The same goes for seldon-core==1.11.* and package click and absl-py ``` ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. kfp 1.8.2 requires absl-py<=0.11,>=0.9, but you have absl-py 0.13.0 which is incompatible. kfp 1.8.2 requires click<8,>=7.1.1, but you have click 8.0.1 which is incompatible. kfp 1.8.2 requires typing-extensions<4,>=3.10.0.2, but you have typing-extensions 3.7.4.3 which is incompatible. ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `sdk/python/setup.py` Content: ``` 1 # Copyright 2018 The Kubeflow Authors 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import os 16 import re 17 18 from setuptools import setup 19 20 NAME = 'kfp' 21 #VERSION = .... Change the version in kfp/__init__.py 22 23 # NOTICE, after any updates to the following, ./requirements.in should be updated 24 # accordingly. 25 REQUIRES = [ 26 'absl-py>=0.9,<=0.11', 27 'PyYAML>=5.3,<6', 28 # `Blob.from_string` was introduced in google-cloud-storage 1.20.0 29 # https://github.com/googleapis/python-storage/blob/master/CHANGELOG.md#1200 30 'google-cloud-storage>=1.20.0,<2', 31 'kubernetes>=8.0.0,<19', 32 # google-api-python-client v2 doesn't work for private dicovery by default: 33 # https://github.com/googleapis/google-api-python-client/issues/1225#issuecomment-791058235 34 'google-api-python-client>=1.7.8,<2', 35 'google-auth>=1.6.1,<2', 36 'requests-toolbelt>=0.8.0,<1', 37 'cloudpickle>=1.3.0,<2', 38 # Update the upper version whenever a new major version of the 39 # kfp-server-api package is released. 40 # Update the lower version when kfp sdk depends on new apis/fields in 41 # kfp-server-api. 
42 # Note, please also update ./requirements.in 43 'kfp-server-api>=1.1.2,<2.0.0', 44 'jsonschema>=3.0.1,<4', 45 'tabulate>=0.8.6,<1', 46 'click>=7.1.1,<8', 47 'Deprecated>=1.2.7,<2', 48 'strip-hints>=0.1.8,<1', 49 'docstring-parser>=0.7.3,<1', 50 'kfp-pipeline-spec>=0.1.10,<0.2.0', 51 'fire>=0.3.1,<1', 52 'protobuf>=3.13.0,<4', 53 'uritemplate>=3.0.1,<4', 54 # Standard library backports 55 'dataclasses;python_version<"3.7"', 56 'typing-extensions>=3.10.0.2,<4;python_version<"3.9"', 57 'pydantic>=1.8.2,<2', 58 ] 59 60 TESTS_REQUIRE = [ 61 'frozendict', 62 ] 63 64 65 def find_version(*file_path_parts): 66 here = os.path.abspath(os.path.dirname(__file__)) 67 with open(os.path.join(here, *file_path_parts), 'r') as fp: 68 version_file_text = fp.read() 69 70 version_match = re.search( 71 r"^__version__ = ['\"]([^'\"]*)['\"]", 72 version_file_text, 73 re.M, 74 ) 75 if version_match: 76 return version_match.group(1) 77 78 raise RuntimeError('Unable to find version string.') 79 80 81 setup( 82 name=NAME, 83 version=find_version('kfp', '__init__.py'), 84 description='KubeFlow Pipelines SDK', 85 author='The Kubeflow Authors', 86 url="https://github.com/kubeflow/pipelines", 87 project_urls={ 88 "Documentation": "https://kubeflow-pipelines.readthedocs.io/en/stable/", 89 "Bug Tracker": "https://github.com/kubeflow/pipelines/issues", 90 "Source": "https://github.com/kubeflow/pipelines/tree/master/sdk", 91 "Changelog": "https://github.com/kubeflow/pipelines/blob/master/sdk/RELEASE.md", 92 }, 93 install_requires=REQUIRES, 94 tests_require=TESTS_REQUIRE, 95 packages=[ 96 'kfp', 97 'kfp.auth', 98 'kfp.cli', 99 'kfp.cli.diagnose_me', 100 'kfp.compiler', 101 'kfp.components', 102 'kfp.components.structures', 103 'kfp.containers', 104 'kfp.dsl', 105 'kfp.dsl.extensions', 106 'kfp.notebook', 107 'kfp.v2', 108 'kfp.v2.compiler', 109 'kfp.v2.components', 110 'kfp.v2.components.types', 111 'kfp.v2.components.experimental', 112 'kfp.v2.dsl', 113 'kfp.v2.google.client', 114 'kfp.v2.google.experimental', 115 ], 116 classifiers=[ 117 'Intended Audience :: Developers', 118 'Intended Audience :: Education', 119 'Intended Audience :: Science/Research', 120 'License :: OSI Approved :: Apache Software License', 121 'Programming Language :: Python :: 3', 122 'Programming Language :: Python :: 3.6', 123 'Programming Language :: Python :: 3.7', 124 'Programming Language :: Python :: 3.8', 125 'Programming Language :: Python :: 3.9', 126 'Topic :: Scientific/Engineering', 127 'Topic :: Scientific/Engineering :: Artificial Intelligence', 128 'Topic :: Software Development', 129 'Topic :: Software Development :: Libraries', 130 'Topic :: Software Development :: Libraries :: Python Modules', 131 ], 132 python_requires='>=3.6.1', 133 include_package_data=True, 134 entry_points={ 135 'console_scripts': [ 136 'dsl-compile = kfp.compiler.main:main', 137 'dsl-compile-v2 = kfp.v2.compiler.main:main', 138 'kfp=kfp.__main__:main' 139 ] 140 }) 141 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/sdk/python/setup.py b/sdk/python/setup.py --- a/sdk/python/setup.py +++ b/sdk/python/setup.py @@ -51,10 +51,10 @@ 'fire>=0.3.1,<1', 'protobuf>=3.13.0,<4', 'uritemplate>=3.0.1,<4', + 'pydantic>=1.8.2,<2', # Standard library backports 'dataclasses;python_version<"3.7"', - 'typing-extensions>=3.10.0.2,<4;python_version<"3.9"', - 'pydantic>=1.8.2,<2', + 'typing-extensions>=3.7.4,<4;python_version<"3.9"', ] TESTS_REQUIRE = [
{"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -51,10 +51,10 @@\n 'fire>=0.3.1,<1',\n 'protobuf>=3.13.0,<4',\n 'uritemplate>=3.0.1,<4',\n+ 'pydantic>=1.8.2,<2',\n # Standard library backports\n 'dataclasses;python_version<\"3.7\"',\n- 'typing-extensions>=3.10.0.2,<4;python_version<\"3.9\"',\n- 'pydantic>=1.8.2,<2',\n+ 'typing-extensions>=3.7.4,<4;python_version<\"3.9\"',\n ]\n \n TESTS_REQUIRE = [\n", "issue": "[sdk] dependency conflict with tensorflow 2.6.0 and seldon-core\n### Environment\r\n\r\n* KFP version: 1.7\r\n* KFP SDK version: 1.8.3\r\n\r\n\r\n* All dependencies version:\r\n```\r\n[~]$ pip list | grep kfp\r\nkfp 1.8.3\r\nkfp-pipeline-spec 0.1.11\r\nkfp-server-api 1.7.0\r\n```\r\nkfp==1.8.3 collides with tensorflow==2.6 because it requires \r\n\r\nhttps://github.com/kubeflow/pipelines/blob/220d79df66e31bbd93c409fb361e0463bde4aeac/sdk/python/setup.py#L56\r\n\r\nwhile tensorflow needs\r\n```\r\nWarning!!! Possibly conflicting dependencies found:\r\n* tensorflow==2.6.0\r\n - typing-extensions [required: ~=3.7.4, installed: 3.10.0.2]\r\n```\r\n\r\nhttps://github.com/tensorflow/tensorflow/blob/421fba8888bb8f8724bc2e35ca2fdcde16e1bfe5/tensorflow/tools/pip_package/setup.py#L90\r\n\r\nis `'typing-extensions>=3.7.4,<4;python_version<\"3.9\"'` not enough?\r\n\r\nThe same goes for seldon-core==1.11.* and package click and absl-py\r\n\r\n```\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\nkfp 1.8.2 requires absl-py<=0.11,>=0.9, but you have absl-py 0.13.0 which is incompatible.\r\nkfp 1.8.2 requires click<8,>=7.1.1, but you have click 8.0.1 which is incompatible.\r\nkfp 1.8.2 requires typing-extensions<4,>=3.10.0.2, but you have typing-extensions 3.7.4.3 which is incompatible.\r\n```\n", "before_files": [{"content": "# Copyright 2018 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\n\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\n# NOTICE, after any updates to the following, ./requirements.in should be updated\n# accordingly.\nREQUIRES = [\n 'absl-py>=0.9,<=0.11',\n 'PyYAML>=5.3,<6',\n # `Blob.from_string` was introduced in google-cloud-storage 1.20.0\n # https://github.com/googleapis/python-storage/blob/master/CHANGELOG.md#1200\n 'google-cloud-storage>=1.20.0,<2',\n 'kubernetes>=8.0.0,<19',\n # google-api-python-client v2 doesn't work for private dicovery by default:\n # https://github.com/googleapis/google-api-python-client/issues/1225#issuecomment-791058235\n 'google-api-python-client>=1.7.8,<2',\n 'google-auth>=1.6.1,<2',\n 'requests-toolbelt>=0.8.0,<1',\n 'cloudpickle>=1.3.0,<2',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=1.1.2,<2.0.0',\n 'jsonschema>=3.0.1,<4',\n 'tabulate>=0.8.6,<1',\n 'click>=7.1.1,<8',\n 'Deprecated>=1.2.7,<2',\n 'strip-hints>=0.1.8,<1',\n 'docstring-parser>=0.7.3,<1',\n 'kfp-pipeline-spec>=0.1.10,<0.2.0',\n 'fire>=0.3.1,<1',\n 'protobuf>=3.13.0,<4',\n 'uritemplate>=3.0.1,<4',\n # Standard library backports\n 'dataclasses;python_version<\"3.7\"',\n 'typing-extensions>=3.10.0.2,<4;python_version<\"3.9\"',\n 'pydantic>=1.8.2,<2',\n]\n\nTESTS_REQUIRE = [\n 'frozendict',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='The Kubeflow Authors',\n url=\"https://github.com/kubeflow/pipelines\",\n project_urls={\n \"Documentation\": \"https://kubeflow-pipelines.readthedocs.io/en/stable/\",\n \"Bug Tracker\": \"https://github.com/kubeflow/pipelines/issues\",\n \"Source\": \"https://github.com/kubeflow/pipelines/tree/master/sdk\",\n \"Changelog\": \"https://github.com/kubeflow/pipelines/blob/master/sdk/RELEASE.md\",\n },\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.auth',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n 'kfp.v2',\n 'kfp.v2.compiler',\n 'kfp.v2.components',\n 'kfp.v2.components.types',\n 'kfp.v2.components.experimental',\n 'kfp.v2.dsl',\n 'kfp.v2.google.client',\n 'kfp.v2.google.experimental',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.6.1',\n 
include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'dsl-compile-v2 = kfp.v2.compiler.main:main',\n 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}], "after_files": [{"content": "# Copyright 2018 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\n\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... Change the version in kfp/__init__.py\n\n# NOTICE, after any updates to the following, ./requirements.in should be updated\n# accordingly.\nREQUIRES = [\n 'absl-py>=0.9,<=0.11',\n 'PyYAML>=5.3,<6',\n # `Blob.from_string` was introduced in google-cloud-storage 1.20.0\n # https://github.com/googleapis/python-storage/blob/master/CHANGELOG.md#1200\n 'google-cloud-storage>=1.20.0,<2',\n 'kubernetes>=8.0.0,<19',\n # google-api-python-client v2 doesn't work for private dicovery by default:\n # https://github.com/googleapis/google-api-python-client/issues/1225#issuecomment-791058235\n 'google-api-python-client>=1.7.8,<2',\n 'google-auth>=1.6.1,<2',\n 'requests-toolbelt>=0.8.0,<1',\n 'cloudpickle>=1.3.0,<2',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=1.1.2,<2.0.0',\n 'jsonschema>=3.0.1,<4',\n 'tabulate>=0.8.6,<1',\n 'click>=7.1.1,<8',\n 'Deprecated>=1.2.7,<2',\n 'strip-hints>=0.1.8,<1',\n 'docstring-parser>=0.7.3,<1',\n 'kfp-pipeline-spec>=0.1.10,<0.2.0',\n 'fire>=0.3.1,<1',\n 'protobuf>=3.13.0,<4',\n 'uritemplate>=3.0.1,<4',\n 'pydantic>=1.8.2,<2',\n # Standard library backports\n 'dataclasses;python_version<\"3.7\"',\n 'typing-extensions>=3.7.4,<4;python_version<\"3.9\"',\n]\n\nTESTS_REQUIRE = [\n 'frozendict',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='The Kubeflow Authors',\n url=\"https://github.com/kubeflow/pipelines\",\n project_urls={\n \"Documentation\": \"https://kubeflow-pipelines.readthedocs.io/en/stable/\",\n \"Bug Tracker\": \"https://github.com/kubeflow/pipelines/issues\",\n \"Source\": \"https://github.com/kubeflow/pipelines/tree/master/sdk\",\n \"Changelog\": \"https://github.com/kubeflow/pipelines/blob/master/sdk/RELEASE.md\",\n },\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.auth',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.containers',\n 
'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n 'kfp.v2',\n 'kfp.v2.compiler',\n 'kfp.v2.components',\n 'kfp.v2.components.types',\n 'kfp.v2.components.experimental',\n 'kfp.v2.dsl',\n 'kfp.v2.google.client',\n 'kfp.v2.google.experimental',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.6.1',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'dsl-compile-v2 = kfp.v2.compiler.main:main',\n 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}]}
2,377
184
gh_patches_debug_5741
rasdani/github-patches
git_diff
Parsl__parsl-2997
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Cannot Instantiate SlurmProvider in parsl-2023.12.11 **Describe the bug** When I attempt to instantiate a SlurmProvider in parsl-2023.12.11 I get the following error: `AttributeError: class SlurmProvider uses qos in the constructor, but does not define it as an attribute` **To Reproduce** Steps to reproduce the behavior: 1. Install parsl-2023.12.11 2. Attempt to instantiate SlurmProvider 3. See error **Expected behavior** The provider should be instantiated **Environment** - OS: Seen on Ubuntu and MacOS github workers and Linux Mint 21.2 - Python version: 3.8 and 3.11 - Parsl version: 2023.12.11 **Distributed Environment** - I'm not actually running a script at all just trying to instantiate the provider/a config with the provider --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `parsl/providers/slurm/slurm.py` Content: ``` 1 import os 2 import math 3 import time 4 import logging 5 import re 6 import typeguard 7 8 from typing import Optional 9 10 from parsl.channels import LocalChannel 11 from parsl.channels.base import Channel 12 from parsl.jobs.states import JobState, JobStatus 13 from parsl.launchers import SingleNodeLauncher 14 from parsl.launchers.base import Launcher 15 from parsl.providers.cluster_provider import ClusterProvider 16 from parsl.providers.slurm.template import template_string 17 from parsl.utils import RepresentationMixin, wtime_to_minutes 18 19 logger = logging.getLogger(__name__) 20 21 translate_table = { 22 'PD': JobState.PENDING, 23 'R': JobState.RUNNING, 24 'CA': JobState.CANCELLED, 25 'CF': JobState.PENDING, # (configuring), 26 'CG': JobState.RUNNING, # (completing), 27 'CD': JobState.COMPLETED, 28 'F': JobState.FAILED, # (failed), 29 'TO': JobState.TIMEOUT, # (timeout), 30 'NF': JobState.FAILED, # (node failure), 31 'RV': JobState.FAILED, # (revoked) and 32 'SE': JobState.FAILED # (special exit state) 33 } 34 35 36 class SlurmProvider(ClusterProvider, RepresentationMixin): 37 """Slurm Execution Provider 38 39 This provider uses sbatch to submit, squeue for status and scancel to cancel 40 jobs. The sbatch script to be used is created from a template file in this 41 same module. 42 43 Parameters 44 ---------- 45 partition : str 46 Slurm partition to request blocks from. If unspecified or ``None``, no partition slurm directive will be specified. 47 account : str 48 Slurm account to which to charge resources used by the job. If unspecified or ``None``, the job will use the 49 user's default account. 50 qos : str 51 Slurm queue to place job in. If unspecified or ``None``, no queue slurm directive will be specified. 52 constraint : str 53 Slurm job constraint, often used to choose cpu or gpu type. If unspecified or ``None``, no constraint slurm directive will be added. 54 channel : Channel 55 Channel for accessing this provider. Possible channels include 56 :class:`~parsl.channels.LocalChannel` (the default), 57 :class:`~parsl.channels.SSHChannel`, or 58 :class:`~parsl.channels.SSHInteractiveLoginChannel`. 59 nodes_per_block : int 60 Nodes to provision per block. 61 cores_per_node : int 62 Specify the number of cores to provision per node. If set to None, executors 63 will assume all cores on the node are available for computation. Default is None. 64 mem_per_node : int 65 Specify the real memory to provision per node in GB. 
If set to None, no 66 explicit request to the scheduler will be made. Default is None. 67 min_blocks : int 68 Minimum number of blocks to maintain. 69 max_blocks : int 70 Maximum number of blocks to maintain. 71 parallelism : float 72 Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive 73 scaling where as many resources as possible are used; parallelism close to 0 represents 74 the opposite situation in which as few resources as possible (i.e., min_blocks) are used. 75 walltime : str 76 Walltime requested per block in HH:MM:SS. 77 scheduler_options : str 78 String to prepend to the #SBATCH blocks in the submit script to the scheduler. 79 regex_job_id : str 80 The regular expression used to extract the job ID from the ``sbatch`` standard output. 81 The default is ``r"Submitted batch job (?P<id>\\S*)"``, where ``id`` is the regular expression 82 symbolic group for the job ID. 83 worker_init : str 84 Command to be run before starting a worker, such as 'module load Anaconda; source activate env'. 85 exclusive : bool (Default = True) 86 Requests nodes which are not shared with other running jobs. 87 launcher : Launcher 88 Launcher for this provider. Possible launchers include 89 :class:`~parsl.launchers.SingleNodeLauncher` (the default), 90 :class:`~parsl.launchers.SrunLauncher`, or 91 :class:`~parsl.launchers.AprunLauncher` 92 move_files : Optional[Bool]: should files be moved? by default, Parsl will try to move files. 93 """ 94 95 @typeguard.typechecked 96 def __init__(self, 97 partition: Optional[str] = None, 98 account: Optional[str] = None, 99 qos: Optional[str] = None, 100 constraint: Optional[str] = None, 101 channel: Channel = LocalChannel(), 102 nodes_per_block: int = 1, 103 cores_per_node: Optional[int] = None, 104 mem_per_node: Optional[int] = None, 105 init_blocks: int = 1, 106 min_blocks: int = 0, 107 max_blocks: int = 1, 108 parallelism: float = 1, 109 walltime: str = "00:10:00", 110 scheduler_options: str = '', 111 regex_job_id: str = r"Submitted batch job (?P<id>\S*)", 112 worker_init: str = '', 113 cmd_timeout: int = 10, 114 exclusive: bool = True, 115 move_files: bool = True, 116 launcher: Launcher = SingleNodeLauncher()): 117 label = 'slurm' 118 super().__init__(label, 119 channel, 120 nodes_per_block, 121 init_blocks, 122 min_blocks, 123 max_blocks, 124 parallelism, 125 walltime, 126 cmd_timeout=cmd_timeout, 127 launcher=launcher) 128 129 self.partition = partition 130 self.cores_per_node = cores_per_node 131 self.mem_per_node = mem_per_node 132 self.exclusive = exclusive 133 self.move_files = move_files 134 self.account = account 135 self.scheduler_options = scheduler_options + '\n' 136 if exclusive: 137 self.scheduler_options += "#SBATCH --exclusive\n" 138 if partition: 139 self.scheduler_options += "#SBATCH --partition={}\n".format(partition) 140 if account: 141 self.scheduler_options += "#SBATCH --account={}\n".format(account) 142 if qos: 143 self.scheduler_options += "#SBATCH --qos={}\n".format(qos) 144 if constraint: 145 self.scheduler_options += "#SBATCH --constraint={}\n".format(constraint) 146 147 self.regex_job_id = regex_job_id 148 self.worker_init = worker_init + '\n' 149 150 def _status(self): 151 '''Returns the status list for a list of job_ids 152 153 Args: 154 self 155 156 Returns: 157 [status...] 
: Status list of all jobs 158 ''' 159 job_id_list = ','.join( 160 [jid for jid, job in self.resources.items() if not job['status'].terminal] 161 ) 162 if not job_id_list: 163 logger.debug('No active jobs, skipping status update') 164 return 165 166 cmd = "squeue --noheader --format='%i %t' --job '{0}'".format(job_id_list) 167 logger.debug("Executing %s", cmd) 168 retcode, stdout, stderr = self.execute_wait(cmd) 169 logger.debug("squeue returned %s %s", stdout, stderr) 170 171 # Execute_wait failed. Do no update 172 if retcode != 0: 173 logger.warning("squeue failed with non-zero exit code {}".format(retcode)) 174 return 175 176 jobs_missing = set(self.resources.keys()) 177 for line in stdout.split('\n'): 178 if not line: 179 # Blank line 180 continue 181 job_id, slurm_state = line.split() 182 if slurm_state not in translate_table: 183 logger.warning(f"Slurm status {slurm_state} is not recognized") 184 status = translate_table.get(slurm_state, JobState.UNKNOWN) 185 logger.debug("Updating job {} with slurm status {} to parsl state {!s}".format(job_id, slurm_state, status)) 186 self.resources[job_id]['status'] = JobStatus(status) 187 jobs_missing.remove(job_id) 188 189 # squeue does not report on jobs that are not running. So we are filling in the 190 # blanks for missing jobs, we might lose some information about why the jobs failed. 191 for missing_job in jobs_missing: 192 logger.debug("Updating missing job {} to completed status".format(missing_job)) 193 self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED) 194 195 def submit(self, command, tasks_per_node, job_name="parsl.slurm"): 196 """Submit the command as a slurm job. 197 198 Parameters 199 ---------- 200 command : str 201 Command to be made on the remote side. 202 tasks_per_node : int 203 Command invocations to be launched per node 204 job_name : str 205 Name for the job 206 Returns 207 ------- 208 None or str 209 If at capacity, returns None; otherwise, a string identifier for the job 210 """ 211 212 scheduler_options = self.scheduler_options 213 worker_init = self.worker_init 214 if self.mem_per_node is not None: 215 scheduler_options += '#SBATCH --mem={}g\n'.format(self.mem_per_node) 216 worker_init += 'export PARSL_MEMORY_GB={}\n'.format(self.mem_per_node) 217 if self.cores_per_node is not None: 218 cpus_per_task = math.floor(self.cores_per_node / tasks_per_node) 219 scheduler_options += '#SBATCH --cpus-per-task={}'.format(cpus_per_task) 220 worker_init += 'export PARSL_CORES={}\n'.format(cpus_per_task) 221 222 job_name = "{0}.{1}".format(job_name, time.time()) 223 224 script_path = "{0}/{1}.submit".format(self.script_dir, job_name) 225 script_path = os.path.abspath(script_path) 226 227 logger.debug("Requesting one block with {} nodes".format(self.nodes_per_block)) 228 229 job_config = {} 230 job_config["submit_script_dir"] = self.channel.script_dir 231 job_config["nodes"] = self.nodes_per_block 232 job_config["tasks_per_node"] = tasks_per_node 233 job_config["walltime"] = wtime_to_minutes(self.walltime) 234 job_config["scheduler_options"] = scheduler_options 235 job_config["worker_init"] = worker_init 236 job_config["user_script"] = command 237 238 # Wrap the command 239 job_config["user_script"] = self.launcher(command, 240 tasks_per_node, 241 self.nodes_per_block) 242 243 logger.debug("Writing submit script") 244 self._write_submit_script(template_string, script_path, job_name, job_config) 245 246 if self.move_files: 247 logger.debug("moving files") 248 channel_script_path = self.channel.push_file(script_path, 
self.channel.script_dir) 249 else: 250 logger.debug("not moving files") 251 channel_script_path = script_path 252 253 retcode, stdout, stderr = self.execute_wait("sbatch {0}".format(channel_script_path)) 254 255 job_id = None 256 if retcode == 0: 257 for line in stdout.split('\n'): 258 match = re.match(self.regex_job_id, line) 259 if match: 260 job_id = match.group("id") 261 self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)} 262 break 263 else: 264 logger.error("Could not read job ID from submit command standard output.") 265 logger.error("Retcode:%s STDOUT:%s STDERR:%s", retcode, stdout.strip(), stderr.strip()) 266 else: 267 logger.error("Submit command failed") 268 logger.error("Retcode:%s STDOUT:%s STDERR:%s", retcode, stdout.strip(), stderr.strip()) 269 return job_id 270 271 def cancel(self, job_ids): 272 ''' Cancels the jobs specified by a list of job ids 273 274 Args: 275 job_ids : [<job_id> ...] 276 277 Returns : 278 [True/False...] : If the cancel operation fails the entire list will be False. 279 ''' 280 281 job_id_list = ' '.join(job_ids) 282 retcode, stdout, stderr = self.execute_wait("scancel {0}".format(job_id_list)) 283 rets = None 284 if retcode == 0: 285 for jid in job_ids: 286 self.resources[jid]['status'] = JobStatus(JobState.CANCELLED) # Setting state to cancelled 287 rets = [True for i in job_ids] 288 else: 289 rets = [False for i in job_ids] 290 291 return rets 292 293 @property 294 def status_polling_interval(self): 295 return 60 296 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/parsl/providers/slurm/slurm.py b/parsl/providers/slurm/slurm.py --- a/parsl/providers/slurm/slurm.py +++ b/parsl/providers/slurm/slurm.py @@ -132,6 +132,8 @@ self.exclusive = exclusive self.move_files = move_files self.account = account + self.qos = qos + self.constraint = constraint self.scheduler_options = scheduler_options + '\n' if exclusive: self.scheduler_options += "#SBATCH --exclusive\n"
{"golden_diff": "diff --git a/parsl/providers/slurm/slurm.py b/parsl/providers/slurm/slurm.py\n--- a/parsl/providers/slurm/slurm.py\n+++ b/parsl/providers/slurm/slurm.py\n@@ -132,6 +132,8 @@\n self.exclusive = exclusive\n self.move_files = move_files\n self.account = account\n+ self.qos = qos\n+ self.constraint = constraint\n self.scheduler_options = scheduler_options + '\\n'\n if exclusive:\n self.scheduler_options += \"#SBATCH --exclusive\\n\"\n", "issue": "Cannot Instantiate SlurmProvider in parsl-2023.12.11\n**Describe the bug**\r\nWhen I attempt to instantiate a SlurmProvider in parsl-2023.12.11 I get the following error:\r\n`AttributeError: class SlurmProvider uses qos in the constructor, but does not define it as an attribute`\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Install parsl-2023.12.11\r\n2. Attempt to instantiate SlurmProvider\r\n3. See error\r\n\r\n**Expected behavior**\r\nThe provider should be instantiated\r\n\r\n**Environment**\r\n - OS: Seen on Ubuntu and MacOS github workers and Linux Mint 21.2\r\n - Python version: 3.8 and 3.11\r\n - Parsl version: 2023.12.11\r\n\r\n**Distributed Environment**\r\n- I'm not actually running a script at all just trying to instantiate the provider/a config with the provider\r\n\n", "before_files": [{"content": "import os\nimport math\nimport time\nimport logging\nimport re\nimport typeguard\n\nfrom typing import Optional\n\nfrom parsl.channels import LocalChannel\nfrom parsl.channels.base import Channel\nfrom parsl.jobs.states import JobState, JobStatus\nfrom parsl.launchers import SingleNodeLauncher\nfrom parsl.launchers.base import Launcher\nfrom parsl.providers.cluster_provider import ClusterProvider\nfrom parsl.providers.slurm.template import template_string\nfrom parsl.utils import RepresentationMixin, wtime_to_minutes\n\nlogger = logging.getLogger(__name__)\n\ntranslate_table = {\n 'PD': JobState.PENDING,\n 'R': JobState.RUNNING,\n 'CA': JobState.CANCELLED,\n 'CF': JobState.PENDING, # (configuring),\n 'CG': JobState.RUNNING, # (completing),\n 'CD': JobState.COMPLETED,\n 'F': JobState.FAILED, # (failed),\n 'TO': JobState.TIMEOUT, # (timeout),\n 'NF': JobState.FAILED, # (node failure),\n 'RV': JobState.FAILED, # (revoked) and\n 'SE': JobState.FAILED # (special exit state)\n}\n\n\nclass SlurmProvider(ClusterProvider, RepresentationMixin):\n \"\"\"Slurm Execution Provider\n\n This provider uses sbatch to submit, squeue for status and scancel to cancel\n jobs. The sbatch script to be used is created from a template file in this\n same module.\n\n Parameters\n ----------\n partition : str\n Slurm partition to request blocks from. If unspecified or ``None``, no partition slurm directive will be specified.\n account : str\n Slurm account to which to charge resources used by the job. If unspecified or ``None``, the job will use the\n user's default account.\n qos : str\n Slurm queue to place job in. If unspecified or ``None``, no queue slurm directive will be specified.\n constraint : str\n Slurm job constraint, often used to choose cpu or gpu type. If unspecified or ``None``, no constraint slurm directive will be added.\n channel : Channel\n Channel for accessing this provider. Possible channels include\n :class:`~parsl.channels.LocalChannel` (the default),\n :class:`~parsl.channels.SSHChannel`, or\n :class:`~parsl.channels.SSHInteractiveLoginChannel`.\n nodes_per_block : int\n Nodes to provision per block.\n cores_per_node : int\n Specify the number of cores to provision per node. 
If set to None, executors\n will assume all cores on the node are available for computation. Default is None.\n mem_per_node : int\n Specify the real memory to provision per node in GB. If set to None, no\n explicit request to the scheduler will be made. Default is None.\n min_blocks : int\n Minimum number of blocks to maintain.\n max_blocks : int\n Maximum number of blocks to maintain.\n parallelism : float\n Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive\n scaling where as many resources as possible are used; parallelism close to 0 represents\n the opposite situation in which as few resources as possible (i.e., min_blocks) are used.\n walltime : str\n Walltime requested per block in HH:MM:SS.\n scheduler_options : str\n String to prepend to the #SBATCH blocks in the submit script to the scheduler.\n regex_job_id : str\n The regular expression used to extract the job ID from the ``sbatch`` standard output.\n The default is ``r\"Submitted batch job (?P<id>\\\\S*)\"``, where ``id`` is the regular expression\n symbolic group for the job ID.\n worker_init : str\n Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.\n exclusive : bool (Default = True)\n Requests nodes which are not shared with other running jobs.\n launcher : Launcher\n Launcher for this provider. Possible launchers include\n :class:`~parsl.launchers.SingleNodeLauncher` (the default),\n :class:`~parsl.launchers.SrunLauncher`, or\n :class:`~parsl.launchers.AprunLauncher`\n move_files : Optional[Bool]: should files be moved? by default, Parsl will try to move files.\n \"\"\"\n\n @typeguard.typechecked\n def __init__(self,\n partition: Optional[str] = None,\n account: Optional[str] = None,\n qos: Optional[str] = None,\n constraint: Optional[str] = None,\n channel: Channel = LocalChannel(),\n nodes_per_block: int = 1,\n cores_per_node: Optional[int] = None,\n mem_per_node: Optional[int] = None,\n init_blocks: int = 1,\n min_blocks: int = 0,\n max_blocks: int = 1,\n parallelism: float = 1,\n walltime: str = \"00:10:00\",\n scheduler_options: str = '',\n regex_job_id: str = r\"Submitted batch job (?P<id>\\S*)\",\n worker_init: str = '',\n cmd_timeout: int = 10,\n exclusive: bool = True,\n move_files: bool = True,\n launcher: Launcher = SingleNodeLauncher()):\n label = 'slurm'\n super().__init__(label,\n channel,\n nodes_per_block,\n init_blocks,\n min_blocks,\n max_blocks,\n parallelism,\n walltime,\n cmd_timeout=cmd_timeout,\n launcher=launcher)\n\n self.partition = partition\n self.cores_per_node = cores_per_node\n self.mem_per_node = mem_per_node\n self.exclusive = exclusive\n self.move_files = move_files\n self.account = account\n self.scheduler_options = scheduler_options + '\\n'\n if exclusive:\n self.scheduler_options += \"#SBATCH --exclusive\\n\"\n if partition:\n self.scheduler_options += \"#SBATCH --partition={}\\n\".format(partition)\n if account:\n self.scheduler_options += \"#SBATCH --account={}\\n\".format(account)\n if qos:\n self.scheduler_options += \"#SBATCH --qos={}\\n\".format(qos)\n if constraint:\n self.scheduler_options += \"#SBATCH --constraint={}\\n\".format(constraint)\n\n self.regex_job_id = regex_job_id\n self.worker_init = worker_init + '\\n'\n\n def _status(self):\n '''Returns the status list for a list of job_ids\n\n Args:\n self\n\n Returns:\n [status...] 
: Status list of all jobs\n '''\n job_id_list = ','.join(\n [jid for jid, job in self.resources.items() if not job['status'].terminal]\n )\n if not job_id_list:\n logger.debug('No active jobs, skipping status update')\n return\n\n cmd = \"squeue --noheader --format='%i %t' --job '{0}'\".format(job_id_list)\n logger.debug(\"Executing %s\", cmd)\n retcode, stdout, stderr = self.execute_wait(cmd)\n logger.debug(\"squeue returned %s %s\", stdout, stderr)\n\n # Execute_wait failed. Do no update\n if retcode != 0:\n logger.warning(\"squeue failed with non-zero exit code {}\".format(retcode))\n return\n\n jobs_missing = set(self.resources.keys())\n for line in stdout.split('\\n'):\n if not line:\n # Blank line\n continue\n job_id, slurm_state = line.split()\n if slurm_state not in translate_table:\n logger.warning(f\"Slurm status {slurm_state} is not recognized\")\n status = translate_table.get(slurm_state, JobState.UNKNOWN)\n logger.debug(\"Updating job {} with slurm status {} to parsl state {!s}\".format(job_id, slurm_state, status))\n self.resources[job_id]['status'] = JobStatus(status)\n jobs_missing.remove(job_id)\n\n # squeue does not report on jobs that are not running. So we are filling in the\n # blanks for missing jobs, we might lose some information about why the jobs failed.\n for missing_job in jobs_missing:\n logger.debug(\"Updating missing job {} to completed status\".format(missing_job))\n self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED)\n\n def submit(self, command, tasks_per_node, job_name=\"parsl.slurm\"):\n \"\"\"Submit the command as a slurm job.\n\n Parameters\n ----------\n command : str\n Command to be made on the remote side.\n tasks_per_node : int\n Command invocations to be launched per node\n job_name : str\n Name for the job\n Returns\n -------\n None or str\n If at capacity, returns None; otherwise, a string identifier for the job\n \"\"\"\n\n scheduler_options = self.scheduler_options\n worker_init = self.worker_init\n if self.mem_per_node is not None:\n scheduler_options += '#SBATCH --mem={}g\\n'.format(self.mem_per_node)\n worker_init += 'export PARSL_MEMORY_GB={}\\n'.format(self.mem_per_node)\n if self.cores_per_node is not None:\n cpus_per_task = math.floor(self.cores_per_node / tasks_per_node)\n scheduler_options += '#SBATCH --cpus-per-task={}'.format(cpus_per_task)\n worker_init += 'export PARSL_CORES={}\\n'.format(cpus_per_task)\n\n job_name = \"{0}.{1}\".format(job_name, time.time())\n\n script_path = \"{0}/{1}.submit\".format(self.script_dir, job_name)\n script_path = os.path.abspath(script_path)\n\n logger.debug(\"Requesting one block with {} nodes\".format(self.nodes_per_block))\n\n job_config = {}\n job_config[\"submit_script_dir\"] = self.channel.script_dir\n job_config[\"nodes\"] = self.nodes_per_block\n job_config[\"tasks_per_node\"] = tasks_per_node\n job_config[\"walltime\"] = wtime_to_minutes(self.walltime)\n job_config[\"scheduler_options\"] = scheduler_options\n job_config[\"worker_init\"] = worker_init\n job_config[\"user_script\"] = command\n\n # Wrap the command\n job_config[\"user_script\"] = self.launcher(command,\n tasks_per_node,\n self.nodes_per_block)\n\n logger.debug(\"Writing submit script\")\n self._write_submit_script(template_string, script_path, job_name, job_config)\n\n if self.move_files:\n logger.debug(\"moving files\")\n channel_script_path = self.channel.push_file(script_path, self.channel.script_dir)\n else:\n logger.debug(\"not moving files\")\n channel_script_path = script_path\n\n retcode, stdout, 
stderr = self.execute_wait(\"sbatch {0}\".format(channel_script_path))\n\n job_id = None\n if retcode == 0:\n for line in stdout.split('\\n'):\n match = re.match(self.regex_job_id, line)\n if match:\n job_id = match.group(\"id\")\n self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)}\n break\n else:\n logger.error(\"Could not read job ID from submit command standard output.\")\n logger.error(\"Retcode:%s STDOUT:%s STDERR:%s\", retcode, stdout.strip(), stderr.strip())\n else:\n logger.error(\"Submit command failed\")\n logger.error(\"Retcode:%s STDOUT:%s STDERR:%s\", retcode, stdout.strip(), stderr.strip())\n return job_id\n\n def cancel(self, job_ids):\n ''' Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [<job_id> ...]\n\n Returns :\n [True/False...] : If the cancel operation fails the entire list will be False.\n '''\n\n job_id_list = ' '.join(job_ids)\n retcode, stdout, stderr = self.execute_wait(\"scancel {0}\".format(job_id_list))\n rets = None\n if retcode == 0:\n for jid in job_ids:\n self.resources[jid]['status'] = JobStatus(JobState.CANCELLED) # Setting state to cancelled\n rets = [True for i in job_ids]\n else:\n rets = [False for i in job_ids]\n\n return rets\n\n @property\n def status_polling_interval(self):\n return 60\n", "path": "parsl/providers/slurm/slurm.py"}], "after_files": [{"content": "import os\nimport math\nimport time\nimport logging\nimport re\nimport typeguard\n\nfrom typing import Optional\n\nfrom parsl.channels import LocalChannel\nfrom parsl.channels.base import Channel\nfrom parsl.jobs.states import JobState, JobStatus\nfrom parsl.launchers import SingleNodeLauncher\nfrom parsl.launchers.base import Launcher\nfrom parsl.providers.cluster_provider import ClusterProvider\nfrom parsl.providers.slurm.template import template_string\nfrom parsl.utils import RepresentationMixin, wtime_to_minutes\n\nlogger = logging.getLogger(__name__)\n\ntranslate_table = {\n 'PD': JobState.PENDING,\n 'R': JobState.RUNNING,\n 'CA': JobState.CANCELLED,\n 'CF': JobState.PENDING, # (configuring),\n 'CG': JobState.RUNNING, # (completing),\n 'CD': JobState.COMPLETED,\n 'F': JobState.FAILED, # (failed),\n 'TO': JobState.TIMEOUT, # (timeout),\n 'NF': JobState.FAILED, # (node failure),\n 'RV': JobState.FAILED, # (revoked) and\n 'SE': JobState.FAILED # (special exit state)\n}\n\n\nclass SlurmProvider(ClusterProvider, RepresentationMixin):\n \"\"\"Slurm Execution Provider\n\n This provider uses sbatch to submit, squeue for status and scancel to cancel\n jobs. The sbatch script to be used is created from a template file in this\n same module.\n\n Parameters\n ----------\n partition : str\n Slurm partition to request blocks from. If unspecified or ``None``, no partition slurm directive will be specified.\n account : str\n Slurm account to which to charge resources used by the job. If unspecified or ``None``, the job will use the\n user's default account.\n qos : str\n Slurm queue to place job in. If unspecified or ``None``, no queue slurm directive will be specified.\n constraint : str\n Slurm job constraint, often used to choose cpu or gpu type. If unspecified or ``None``, no constraint slurm directive will be added.\n channel : Channel\n Channel for accessing this provider. 
Possible channels include\n :class:`~parsl.channels.LocalChannel` (the default),\n :class:`~parsl.channels.SSHChannel`, or\n :class:`~parsl.channels.SSHInteractiveLoginChannel`.\n nodes_per_block : int\n Nodes to provision per block.\n cores_per_node : int\n Specify the number of cores to provision per node. If set to None, executors\n will assume all cores on the node are available for computation. Default is None.\n mem_per_node : int\n Specify the real memory to provision per node in GB. If set to None, no\n explicit request to the scheduler will be made. Default is None.\n min_blocks : int\n Minimum number of blocks to maintain.\n max_blocks : int\n Maximum number of blocks to maintain.\n parallelism : float\n Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive\n scaling where as many resources as possible are used; parallelism close to 0 represents\n the opposite situation in which as few resources as possible (i.e., min_blocks) are used.\n walltime : str\n Walltime requested per block in HH:MM:SS.\n scheduler_options : str\n String to prepend to the #SBATCH blocks in the submit script to the scheduler.\n regex_job_id : str\n The regular expression used to extract the job ID from the ``sbatch`` standard output.\n The default is ``r\"Submitted batch job (?P<id>\\\\S*)\"``, where ``id`` is the regular expression\n symbolic group for the job ID.\n worker_init : str\n Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.\n exclusive : bool (Default = True)\n Requests nodes which are not shared with other running jobs.\n launcher : Launcher\n Launcher for this provider. Possible launchers include\n :class:`~parsl.launchers.SingleNodeLauncher` (the default),\n :class:`~parsl.launchers.SrunLauncher`, or\n :class:`~parsl.launchers.AprunLauncher`\n move_files : Optional[Bool]: should files be moved? 
by default, Parsl will try to move files.\n \"\"\"\n\n @typeguard.typechecked\n def __init__(self,\n partition: Optional[str] = None,\n account: Optional[str] = None,\n qos: Optional[str] = None,\n constraint: Optional[str] = None,\n channel: Channel = LocalChannel(),\n nodes_per_block: int = 1,\n cores_per_node: Optional[int] = None,\n mem_per_node: Optional[int] = None,\n init_blocks: int = 1,\n min_blocks: int = 0,\n max_blocks: int = 1,\n parallelism: float = 1,\n walltime: str = \"00:10:00\",\n scheduler_options: str = '',\n regex_job_id: str = r\"Submitted batch job (?P<id>\\S*)\",\n worker_init: str = '',\n cmd_timeout: int = 10,\n exclusive: bool = True,\n move_files: bool = True,\n launcher: Launcher = SingleNodeLauncher()):\n label = 'slurm'\n super().__init__(label,\n channel,\n nodes_per_block,\n init_blocks,\n min_blocks,\n max_blocks,\n parallelism,\n walltime,\n cmd_timeout=cmd_timeout,\n launcher=launcher)\n\n self.partition = partition\n self.cores_per_node = cores_per_node\n self.mem_per_node = mem_per_node\n self.exclusive = exclusive\n self.move_files = move_files\n self.account = account\n self.qos = qos\n self.constraint = constraint\n self.scheduler_options = scheduler_options + '\\n'\n if exclusive:\n self.scheduler_options += \"#SBATCH --exclusive\\n\"\n if partition:\n self.scheduler_options += \"#SBATCH --partition={}\\n\".format(partition)\n if account:\n self.scheduler_options += \"#SBATCH --account={}\\n\".format(account)\n if qos:\n self.scheduler_options += \"#SBATCH --qos={}\\n\".format(qos)\n if constraint:\n self.scheduler_options += \"#SBATCH --constraint={}\\n\".format(constraint)\n\n self.regex_job_id = regex_job_id\n self.worker_init = worker_init + '\\n'\n\n def _status(self):\n '''Returns the status list for a list of job_ids\n\n Args:\n self\n\n Returns:\n [status...] : Status list of all jobs\n '''\n job_id_list = ','.join(\n [jid for jid, job in self.resources.items() if not job['status'].terminal]\n )\n if not job_id_list:\n logger.debug('No active jobs, skipping status update')\n return\n\n cmd = \"squeue --noheader --format='%i %t' --job '{0}'\".format(job_id_list)\n logger.debug(\"Executing %s\", cmd)\n retcode, stdout, stderr = self.execute_wait(cmd)\n logger.debug(\"squeue returned %s %s\", stdout, stderr)\n\n # Execute_wait failed. Do no update\n if retcode != 0:\n logger.warning(\"squeue failed with non-zero exit code {}\".format(retcode))\n return\n\n jobs_missing = set(self.resources.keys())\n for line in stdout.split('\\n'):\n if not line:\n # Blank line\n continue\n job_id, slurm_state = line.split()\n if slurm_state not in translate_table:\n logger.warning(f\"Slurm status {slurm_state} is not recognized\")\n status = translate_table.get(slurm_state, JobState.UNKNOWN)\n logger.debug(\"Updating job {} with slurm status {} to parsl state {!s}\".format(job_id, slurm_state, status))\n self.resources[job_id]['status'] = JobStatus(status)\n jobs_missing.remove(job_id)\n\n # squeue does not report on jobs that are not running. 
So we are filling in the\n # blanks for missing jobs, we might lose some information about why the jobs failed.\n for missing_job in jobs_missing:\n logger.debug(\"Updating missing job {} to completed status\".format(missing_job))\n self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED)\n\n def submit(self, command, tasks_per_node, job_name=\"parsl.slurm\"):\n \"\"\"Submit the command as a slurm job.\n\n Parameters\n ----------\n command : str\n Command to be made on the remote side.\n tasks_per_node : int\n Command invocations to be launched per node\n job_name : str\n Name for the job\n Returns\n -------\n None or str\n If at capacity, returns None; otherwise, a string identifier for the job\n \"\"\"\n\n scheduler_options = self.scheduler_options\n worker_init = self.worker_init\n if self.mem_per_node is not None:\n scheduler_options += '#SBATCH --mem={}g\\n'.format(self.mem_per_node)\n worker_init += 'export PARSL_MEMORY_GB={}\\n'.format(self.mem_per_node)\n if self.cores_per_node is not None:\n cpus_per_task = math.floor(self.cores_per_node / tasks_per_node)\n scheduler_options += '#SBATCH --cpus-per-task={}'.format(cpus_per_task)\n worker_init += 'export PARSL_CORES={}\\n'.format(cpus_per_task)\n\n job_name = \"{0}.{1}\".format(job_name, time.time())\n\n script_path = \"{0}/{1}.submit\".format(self.script_dir, job_name)\n script_path = os.path.abspath(script_path)\n\n logger.debug(\"Requesting one block with {} nodes\".format(self.nodes_per_block))\n\n job_config = {}\n job_config[\"submit_script_dir\"] = self.channel.script_dir\n job_config[\"nodes\"] = self.nodes_per_block\n job_config[\"tasks_per_node\"] = tasks_per_node\n job_config[\"walltime\"] = wtime_to_minutes(self.walltime)\n job_config[\"scheduler_options\"] = scheduler_options\n job_config[\"worker_init\"] = worker_init\n job_config[\"user_script\"] = command\n\n # Wrap the command\n job_config[\"user_script\"] = self.launcher(command,\n tasks_per_node,\n self.nodes_per_block)\n\n logger.debug(\"Writing submit script\")\n self._write_submit_script(template_string, script_path, job_name, job_config)\n\n if self.move_files:\n logger.debug(\"moving files\")\n channel_script_path = self.channel.push_file(script_path, self.channel.script_dir)\n else:\n logger.debug(\"not moving files\")\n channel_script_path = script_path\n\n retcode, stdout, stderr = self.execute_wait(\"sbatch {0}\".format(channel_script_path))\n\n job_id = None\n if retcode == 0:\n for line in stdout.split('\\n'):\n match = re.match(self.regex_job_id, line)\n if match:\n job_id = match.group(\"id\")\n self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)}\n break\n else:\n logger.error(\"Could not read job ID from submit command standard output.\")\n logger.error(\"Retcode:%s STDOUT:%s STDERR:%s\", retcode, stdout.strip(), stderr.strip())\n else:\n logger.error(\"Submit command failed\")\n logger.error(\"Retcode:%s STDOUT:%s STDERR:%s\", retcode, stdout.strip(), stderr.strip())\n return job_id\n\n def cancel(self, job_ids):\n ''' Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [<job_id> ...]\n\n Returns :\n [True/False...] 
: If the cancel operation fails the entire list will be False.\n '''\n\n job_id_list = ' '.join(job_ids)\n retcode, stdout, stderr = self.execute_wait(\"scancel {0}\".format(job_id_list))\n rets = None\n if retcode == 0:\n for jid in job_ids:\n self.resources[jid]['status'] = JobStatus(JobState.CANCELLED) # Setting state to cancelled\n rets = [True for i in job_ids]\n else:\n rets = [False for i in job_ids]\n\n return rets\n\n @property\n def status_polling_interval(self):\n return 60\n", "path": "parsl/providers/slurm/slurm.py"}]}
3,959
124
gh_patches_debug_18444
rasdani/github-patches
git_diff
xonsh__xonsh-2006
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- PTK OSError Bad file descriptor When I run a command like `cat foo.json | py -m json.tool`, xonsh got crash with following errors after the command pipe finish (with output correct). I'm not sure it's a ptk issue or xonsh one. Either way I think we should add some `try-catch` for not crashing the shell at least and for stability. ``` Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/ptk/shortcuts.py", line 108, in prompt document = cli.run(reset_current_buffer=False) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/interface.py", line 394, in run self.eventloop.run(self.input, self.create_eventloop_callbacks()) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/posix.py", line 93, in run self.add_reader(self._schedule_pipe[0], None) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/posix.py", line 280, in add_reader self.selector.register(fd) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/select.py", line 75, in register sel.register(fd) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/select.py", line 130, in register self._sel.register(fd, selectors.EVENT_READ, None) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/selectors.py", line 536, in register self._kqueue.control([kev], 0, 0) OSError: [Errno 9] Bad file descriptor During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/bin/xonsh", line 3, in <module> main() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/main.py", line 276, in main shell.shell.cmdloop() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/ptk/shell.py", line 132, in cmdloop line = self.singleline(auto_suggest=auto_suggest) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/ptk/shell.py", line 101, in singleline line = self.prompter.prompt(**prompt_args) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/ptk/shortcuts.py", line 113, in prompt eventloop.close() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/posix.py", line 270, in close os.close(schedule_pipe[0]) OSError: [Errno 9] Bad file descriptor ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `xonsh/ptk/shortcuts.py` Content: ``` 1 """A prompt-toolkit inspired shortcut collection.""" 2 import builtins 3 import textwrap 4 5 from prompt_toolkit.interface import CommandLineInterface 6 from prompt_toolkit.enums import EditingMode 7 from prompt_toolkit.utils import DummyContext 8 from prompt_toolkit.shortcuts import (create_prompt_application, 9 create_eventloop, create_asyncio_eventloop, create_output) 10 11 from xonsh.platform import ptk_version_info 12 13 14 class Prompter(object): 15 16 def __init__(self, cli=None, *args, **kwargs): 17 """Implements a prompt that statefully holds a command-line 18 interface. When used as a context manager, it will return itself 19 on entry and reset itself on exit. 20 21 Parameters 22 ---------- 23 cli : CommandLineInterface or None, optional 24 If this is not a CommandLineInterface object, such an object 25 will be created when the prompt() method is called. 26 """ 27 self.cli = cli 28 self.major_minor = ptk_version_info()[:2] 29 30 def __enter__(self): 31 self.reset() 32 return self 33 34 def __exit__(self, exc_type, exc_value, traceback): 35 pass 36 37 def prompt(self, message='', **kwargs): 38 """Get input from the user and return it. 39 40 This is a wrapper around a lot of prompt_toolkit functionality and 41 can be a replacement for raw_input. (or GNU readline.) If you want 42 to keep your history across several calls, create one 43 `~prompt_toolkit.history.History instance and pass it every 44 time. This function accepts many keyword arguments. Except for the 45 following. they are a proxy to the arguments of 46 create_prompt_application(). 47 48 Parameters 49 ---------- 50 patch_stdout : file-like, optional 51 Replace ``sys.stdout`` by a proxy that ensures that print 52 statements from other threads won't destroy the prompt. (They 53 will be printed above the prompt instead.) 54 return_asyncio_coroutine : bool, optional 55 When True, return a asyncio coroutine. (Python >3.3) 56 57 Notes 58 ----- 59 This method was forked from the mainline prompt-toolkit repo. 60 Copyright (c) 2014, Jonathan Slenders, All rights reserved. 61 """ 62 patch_stdout = kwargs.pop('patch_stdout', False) 63 return_asyncio_coroutine = kwargs.pop('return_asyncio_coroutine', False) 64 if return_asyncio_coroutine: 65 eventloop = create_asyncio_eventloop() 66 else: 67 eventloop = kwargs.pop('eventloop', None) or create_eventloop() 68 69 # Create CommandLineInterface. 70 if self.cli is None: 71 if builtins.__xonsh_env__.get('VI_MODE'): 72 editing_mode = EditingMode.VI 73 else: 74 editing_mode = EditingMode.EMACS 75 kwargs['editing_mode'] = editing_mode 76 cli = CommandLineInterface( 77 application=create_prompt_application(message, **kwargs), 78 eventloop=eventloop, 79 output=create_output()) 80 self.cli = cli 81 else: 82 cli = self.cli 83 84 # Replace stdout. 85 patch_context = cli.patch_stdout_context() if patch_stdout else DummyContext() 86 87 # Read input and return it. 88 if return_asyncio_coroutine: 89 # Create an asyncio coroutine and call it. 
90 exec_context = {'patch_context': patch_context, 'cli': cli} 91 exec(textwrap.dedent(''' 92 import asyncio 93 @asyncio.coroutine 94 def prompt_coro(): 95 with patch_context: 96 document = yield from cli.run_async(reset_current_buffer=False) 97 if document: 98 return document.text 99 '''), exec_context) 100 return exec_context['prompt_coro']() 101 else: 102 # Note: We pass `reset_current_buffer=False`, because that way 103 # it's easy to give DEFAULT_BUFFER a default value, without it 104 # getting erased. We don't have to reset anyway, because this is 105 # the first and only time that this CommandLineInterface will run. 106 try: 107 with patch_context: 108 document = cli.run(reset_current_buffer=False) 109 110 if document: 111 return document.text 112 finally: 113 eventloop.close() 114 115 def reset(self): 116 """Resets the prompt and cli to a pristine state on this object.""" 117 self.cli = None 118 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/xonsh/ptk/shortcuts.py b/xonsh/ptk/shortcuts.py --- a/xonsh/ptk/shortcuts.py +++ b/xonsh/ptk/shortcuts.py @@ -9,6 +9,7 @@ create_eventloop, create_asyncio_eventloop, create_output) from xonsh.platform import ptk_version_info +import xonsh.tools as xt class Prompter(object): @@ -106,9 +107,13 @@ try: with patch_context: document = cli.run(reset_current_buffer=False) - if document: return document.text + except Exception: + xt.print_exception() + # return something to prevent xonsh crash when any + # exceptions raise + return '' finally: eventloop.close()
{"golden_diff": "diff --git a/xonsh/ptk/shortcuts.py b/xonsh/ptk/shortcuts.py\n--- a/xonsh/ptk/shortcuts.py\n+++ b/xonsh/ptk/shortcuts.py\n@@ -9,6 +9,7 @@\n create_eventloop, create_asyncio_eventloop, create_output)\n \n from xonsh.platform import ptk_version_info\n+import xonsh.tools as xt\n \n \n class Prompter(object):\n@@ -106,9 +107,13 @@\n try:\n with patch_context:\n document = cli.run(reset_current_buffer=False)\n-\n if document:\n return document.text\n+ except Exception:\n+ xt.print_exception()\n+ # return something to prevent xonsh crash when any\n+ # exceptions raise\n+ return ''\n finally:\n eventloop.close()\n", "issue": "PTK OSError Bad file descriptor\nWhen I run a command like `cat foo.json | py -m json.tool`, xonsh got crash with following errors after the command pipe finish (with output correct). I'm not sure it's a ptk issue or xonsh one. Either way I think we should add some `try-catch` for not crashing the shell at least and for stability.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/ptk/shortcuts.py\", line 108, in prompt\r\n document = cli.run(reset_current_buffer=False)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/interface.py\", line 394, in run\r\n self.eventloop.run(self.input, self.create_eventloop_callbacks())\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/posix.py\", line 93, in run\r\n self.add_reader(self._schedule_pipe[0], None)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/posix.py\", line 280, in add_reader\r\n self.selector.register(fd)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/select.py\", line 75, in register\r\n sel.register(fd)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/select.py\", line 130, in register\r\n self._sel.register(fd, selectors.EVENT_READ, None)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/selectors.py\", line 536, in register\r\n self._kqueue.control([kev], 0, 0)\r\nOSError: [Errno 9] Bad file descriptor\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/bin/xonsh\", line 3, in <module>\r\n main()\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/main.py\", line 276, in main\r\n shell.shell.cmdloop()\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/ptk/shell.py\", line 132, in cmdloop\r\n line = self.singleline(auto_suggest=auto_suggest)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/ptk/shell.py\", line 101, in singleline\r\n line = self.prompter.prompt(**prompt_args)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xonsh/ptk/shortcuts.py\", line 113, in prompt\r\n eventloop.close()\r\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/prompt_toolkit/eventloop/posix.py\", line 270, in close\r\n os.close(schedule_pipe[0])\r\nOSError: [Errno 9] Bad file descriptor\r\n```\n", "before_files": [{"content": "\"\"\"A prompt-toolkit inspired shortcut 
collection.\"\"\"\nimport builtins\nimport textwrap\n\nfrom prompt_toolkit.interface import CommandLineInterface\nfrom prompt_toolkit.enums import EditingMode\nfrom prompt_toolkit.utils import DummyContext\nfrom prompt_toolkit.shortcuts import (create_prompt_application,\n create_eventloop, create_asyncio_eventloop, create_output)\n\nfrom xonsh.platform import ptk_version_info\n\n\nclass Prompter(object):\n\n def __init__(self, cli=None, *args, **kwargs):\n \"\"\"Implements a prompt that statefully holds a command-line\n interface. When used as a context manager, it will return itself\n on entry and reset itself on exit.\n\n Parameters\n ----------\n cli : CommandLineInterface or None, optional\n If this is not a CommandLineInterface object, such an object\n will be created when the prompt() method is called.\n \"\"\"\n self.cli = cli\n self.major_minor = ptk_version_info()[:2]\n\n def __enter__(self):\n self.reset()\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n pass\n\n def prompt(self, message='', **kwargs):\n \"\"\"Get input from the user and return it.\n\n This is a wrapper around a lot of prompt_toolkit functionality and\n can be a replacement for raw_input. (or GNU readline.) If you want\n to keep your history across several calls, create one\n `~prompt_toolkit.history.History instance and pass it every\n time. This function accepts many keyword arguments. Except for the\n following. they are a proxy to the arguments of\n create_prompt_application().\n\n Parameters\n ----------\n patch_stdout : file-like, optional\n Replace ``sys.stdout`` by a proxy that ensures that print\n statements from other threads won't destroy the prompt. (They\n will be printed above the prompt instead.)\n return_asyncio_coroutine : bool, optional\n When True, return a asyncio coroutine. (Python >3.3)\n\n Notes\n -----\n This method was forked from the mainline prompt-toolkit repo.\n Copyright (c) 2014, Jonathan Slenders, All rights reserved.\n \"\"\"\n patch_stdout = kwargs.pop('patch_stdout', False)\n return_asyncio_coroutine = kwargs.pop('return_asyncio_coroutine', False)\n if return_asyncio_coroutine:\n eventloop = create_asyncio_eventloop()\n else:\n eventloop = kwargs.pop('eventloop', None) or create_eventloop()\n\n # Create CommandLineInterface.\n if self.cli is None:\n if builtins.__xonsh_env__.get('VI_MODE'):\n editing_mode = EditingMode.VI\n else:\n editing_mode = EditingMode.EMACS\n kwargs['editing_mode'] = editing_mode\n cli = CommandLineInterface(\n application=create_prompt_application(message, **kwargs),\n eventloop=eventloop,\n output=create_output())\n self.cli = cli\n else:\n cli = self.cli\n\n # Replace stdout.\n patch_context = cli.patch_stdout_context() if patch_stdout else DummyContext()\n\n # Read input and return it.\n if return_asyncio_coroutine:\n # Create an asyncio coroutine and call it.\n exec_context = {'patch_context': patch_context, 'cli': cli}\n exec(textwrap.dedent('''\n import asyncio\n @asyncio.coroutine\n def prompt_coro():\n with patch_context:\n document = yield from cli.run_async(reset_current_buffer=False)\n if document:\n return document.text\n '''), exec_context)\n return exec_context['prompt_coro']()\n else:\n # Note: We pass `reset_current_buffer=False`, because that way\n # it's easy to give DEFAULT_BUFFER a default value, without it\n # getting erased. 
We don't have to reset anyway, because this is\n # the first and only time that this CommandLineInterface will run.\n try:\n with patch_context:\n document = cli.run(reset_current_buffer=False)\n\n if document:\n return document.text\n finally:\n eventloop.close()\n\n def reset(self):\n \"\"\"Resets the prompt and cli to a pristine state on this object.\"\"\"\n self.cli = None\n", "path": "xonsh/ptk/shortcuts.py"}], "after_files": [{"content": "\"\"\"A prompt-toolkit inspired shortcut collection.\"\"\"\nimport builtins\nimport textwrap\n\nfrom prompt_toolkit.interface import CommandLineInterface\nfrom prompt_toolkit.enums import EditingMode\nfrom prompt_toolkit.utils import DummyContext\nfrom prompt_toolkit.shortcuts import (create_prompt_application,\n create_eventloop, create_asyncio_eventloop, create_output)\n\nfrom xonsh.platform import ptk_version_info\nimport xonsh.tools as xt\n\n\nclass Prompter(object):\n\n def __init__(self, cli=None, *args, **kwargs):\n \"\"\"Implements a prompt that statefully holds a command-line\n interface. When used as a context manager, it will return itself\n on entry and reset itself on exit.\n\n Parameters\n ----------\n cli : CommandLineInterface or None, optional\n If this is not a CommandLineInterface object, such an object\n will be created when the prompt() method is called.\n \"\"\"\n self.cli = cli\n self.major_minor = ptk_version_info()[:2]\n\n def __enter__(self):\n self.reset()\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n pass\n\n def prompt(self, message='', **kwargs):\n \"\"\"Get input from the user and return it.\n\n This is a wrapper around a lot of prompt_toolkit functionality and\n can be a replacement for raw_input. (or GNU readline.) If you want\n to keep your history across several calls, create one\n `~prompt_toolkit.history.History instance and pass it every\n time. This function accepts many keyword arguments. Except for the\n following. they are a proxy to the arguments of\n create_prompt_application().\n\n Parameters\n ----------\n patch_stdout : file-like, optional\n Replace ``sys.stdout`` by a proxy that ensures that print\n statements from other threads won't destroy the prompt. (They\n will be printed above the prompt instead.)\n return_asyncio_coroutine : bool, optional\n When True, return a asyncio coroutine. 
(Python >3.3)\n\n Notes\n -----\n This method was forked from the mainline prompt-toolkit repo.\n Copyright (c) 2014, Jonathan Slenders, All rights reserved.\n \"\"\"\n patch_stdout = kwargs.pop('patch_stdout', False)\n return_asyncio_coroutine = kwargs.pop('return_asyncio_coroutine', False)\n if return_asyncio_coroutine:\n eventloop = create_asyncio_eventloop()\n else:\n eventloop = kwargs.pop('eventloop', None) or create_eventloop()\n\n # Create CommandLineInterface.\n if self.cli is None:\n if builtins.__xonsh_env__.get('VI_MODE'):\n editing_mode = EditingMode.VI\n else:\n editing_mode = EditingMode.EMACS\n kwargs['editing_mode'] = editing_mode\n cli = CommandLineInterface(\n application=create_prompt_application(message, **kwargs),\n eventloop=eventloop,\n output=create_output())\n self.cli = cli\n else:\n cli = self.cli\n\n # Replace stdout.\n patch_context = cli.patch_stdout_context() if patch_stdout else DummyContext()\n\n # Read input and return it.\n if return_asyncio_coroutine:\n # Create an asyncio coroutine and call it.\n exec_context = {'patch_context': patch_context, 'cli': cli}\n exec(textwrap.dedent('''\n import asyncio\n @asyncio.coroutine\n def prompt_coro():\n with patch_context:\n document = yield from cli.run_async(reset_current_buffer=False)\n if document:\n return document.text\n '''), exec_context)\n return exec_context['prompt_coro']()\n else:\n # Note: We pass `reset_current_buffer=False`, because that way\n # it's easy to give DEFAULT_BUFFER a default value, without it\n # getting erased. We don't have to reset anyway, because this is\n # the first and only time that this CommandLineInterface will run.\n try:\n with patch_context:\n document = cli.run(reset_current_buffer=False)\n if document:\n return document.text\n except Exception:\n xt.print_exception()\n # return something to prevent xonsh crash when any\n # exceptions raise\n return ''\n finally:\n eventloop.close()\n\n def reset(self):\n \"\"\"Resets the prompt and cli to a pristine state on this object.\"\"\"\n self.cli = None\n", "path": "xonsh/ptk/shortcuts.py"}]}
2,193
190
gh_patches_debug_60855
rasdani/github-patches
git_diff
airctic__icevision-500
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Add tutorial with hard negative samples ## 📓 Documentation Update "how to use an image as background annotation" is a common question. We can provide a tutorial showing how to do that ### Racoon and dogs If you train a model on the racoon dataset and show the model a picture of a dog it will classify it as a racoon. We can add images of dogs to the dataset (without any annotations) and show how the difference of model performance in both scenarios. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `icevision/models/base_show_results.py` Content: ``` 1 __all__ = ["base_show_results"] 2 3 from icevision.imports import * 4 from icevision.utils import * 5 from icevision.core import * 6 from icevision.visualize import * 7 from icevision.data import * 8 9 10 def base_show_results( 11 predict_fn: callable, 12 build_infer_batch_fn: callable, 13 model: nn.Module, 14 dataset: Dataset, 15 class_map: Optional[ClassMap] = None, 16 num_samples: int = 6, 17 ncols: int = 3, 18 denormalize_fn: Optional[callable] = denormalize_imagenet, 19 show: bool = True, 20 ) -> None: 21 samples = [dataset[i] for i in range(num_samples)] 22 batch, samples = build_infer_batch_fn(samples) 23 preds = predict_fn(model, batch) 24 25 imgs = [sample["img"] for sample in samples] 26 show_preds( 27 imgs, 28 preds, 29 class_map=class_map, 30 denormalize_fn=denormalize_fn, 31 ncols=ncols, 32 show=show, 33 ) 34 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/icevision/models/base_show_results.py b/icevision/models/base_show_results.py --- a/icevision/models/base_show_results.py +++ b/icevision/models/base_show_results.py @@ -18,7 +18,7 @@ denormalize_fn: Optional[callable] = denormalize_imagenet, show: bool = True, ) -> None: - samples = [dataset[i] for i in range(num_samples)] + samples = random.choices(dataset, k=num_samples) batch, samples = build_infer_batch_fn(samples) preds = predict_fn(model, batch)
{"golden_diff": "diff --git a/icevision/models/base_show_results.py b/icevision/models/base_show_results.py\n--- a/icevision/models/base_show_results.py\n+++ b/icevision/models/base_show_results.py\n@@ -18,7 +18,7 @@\n denormalize_fn: Optional[callable] = denormalize_imagenet,\n show: bool = True,\n ) -> None:\n- samples = [dataset[i] for i in range(num_samples)]\n+ samples = random.choices(dataset, k=num_samples)\n batch, samples = build_infer_batch_fn(samples)\n preds = predict_fn(model, batch)\n", "issue": "Add tutorial with hard negative samples\n## \ud83d\udcd3 Documentation Update\r\n\"how to use an image as background annotation\" is a common question. We can provide a tutorial showing how to do that\r\n\r\n### Racoon and dogs\r\nIf you train a model on the racoon dataset and show the model a picture of a dog it will classify it as a racoon. We can add images of dogs to the dataset (without any annotations) and show how the difference of model performance in both scenarios.\n", "before_files": [{"content": "__all__ = [\"base_show_results\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.core import *\nfrom icevision.visualize import *\nfrom icevision.data import *\n\n\ndef base_show_results(\n predict_fn: callable,\n build_infer_batch_fn: callable,\n model: nn.Module,\n dataset: Dataset,\n class_map: Optional[ClassMap] = None,\n num_samples: int = 6,\n ncols: int = 3,\n denormalize_fn: Optional[callable] = denormalize_imagenet,\n show: bool = True,\n) -> None:\n samples = [dataset[i] for i in range(num_samples)]\n batch, samples = build_infer_batch_fn(samples)\n preds = predict_fn(model, batch)\n\n imgs = [sample[\"img\"] for sample in samples]\n show_preds(\n imgs,\n preds,\n class_map=class_map,\n denormalize_fn=denormalize_fn,\n ncols=ncols,\n show=show,\n )\n", "path": "icevision/models/base_show_results.py"}], "after_files": [{"content": "__all__ = [\"base_show_results\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.core import *\nfrom icevision.visualize import *\nfrom icevision.data import *\n\n\ndef base_show_results(\n predict_fn: callable,\n build_infer_batch_fn: callable,\n model: nn.Module,\n dataset: Dataset,\n class_map: Optional[ClassMap] = None,\n num_samples: int = 6,\n ncols: int = 3,\n denormalize_fn: Optional[callable] = denormalize_imagenet,\n show: bool = True,\n) -> None:\n samples = random.choices(dataset, k=num_samples)\n batch, samples = build_infer_batch_fn(samples)\n preds = predict_fn(model, batch)\n\n imgs = [sample[\"img\"] for sample in samples]\n show_preds(\n imgs,\n preds,\n class_map=class_map,\n denormalize_fn=denormalize_fn,\n ncols=ncols,\n show=show,\n )\n", "path": "icevision/models/base_show_results.py"}]}
643
133
gh_patches_debug_15676
rasdani/github-patches
git_diff
WeblateOrg__weblate-7984
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Translation Memory Maintenance - Delete Entries It would be good to be able to maintain the TM in one way or the other. Perhaps - [ ] Search & Replace in TM - [ ] Search & Delete entries in TM Or perhpas, as a "simple" (?) starting point, giving the translator the option to delete single entries from the TM when they see the result list in **Automatic Suggestions**. Like perhaps: - [ ] Delete single entry in **Automatic Suggestions** view: ![image](https://user-images.githubusercontent.com/26718215/130028004-4e00f2f1-0d30-4ddb-a904-d351711ee610.png) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `weblate/memory/machine.py` Content: ``` 1 # 2 # Copyright © 2012–2022 Michal Čihař <[email protected]> 3 # 4 # This file is part of Weblate <https://weblate.org/> 5 # 6 # This program is free software: you can redistribute it and/or modify 7 # it under the terms of the GNU General Public License as published by 8 # the Free Software Foundation, either version 3 of the License, or 9 # (at your option) any later version. 10 # 11 # This program is distributed in the hope that it will be useful, 12 # but WITHOUT ANY WARRANTY; without even the implied warranty of 13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 # GNU General Public License for more details. 15 # 16 # You should have received a copy of the GNU General Public License 17 # along with this program. If not, see <https://www.gnu.org/licenses/>. 18 # 19 20 from weblate.machinery.base import MachineTranslation, get_machinery_language 21 from weblate.memory.models import Memory 22 23 24 class WeblateMemory(MachineTranslation): 25 """Translation service using strings already translated in Weblate.""" 26 27 name = "Weblate Translation Memory" 28 rank_boost = 2 29 cache_translations = False 30 same_languages = True 31 accounting_key = "internal" 32 do_cleanup = False 33 34 def convert_language(self, language): 35 """No conversion of language object.""" 36 return get_machinery_language(language) 37 38 def is_supported(self, source, language): 39 """Any language is supported.""" 40 return True 41 42 def is_rate_limited(self): 43 """This service has no rate limiting.""" 44 return False 45 46 def download_translations( 47 self, 48 source, 49 language, 50 text: str, 51 unit, 52 user, 53 search: bool, 54 threshold: int = 75, 55 ): 56 """Download list of possible translations from a service.""" 57 for result in Memory.objects.lookup( 58 source, 59 language, 60 text, 61 user, 62 unit.translation.component.project, 63 unit.translation.component.project.use_shared_tm, 64 ).iterator(): 65 quality = self.comparer.similarity(text, result.source) 66 if quality < 10 or (quality < threshold and not search): 67 continue 68 yield { 69 "text": result.target, 70 "quality": quality, 71 "service": self.name, 72 "origin": result.get_origin_display(), 73 "source": result.source, 74 "show_quality": True, 75 } 76 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/weblate/memory/machine.py b/weblate/memory/machine.py --- a/weblate/memory/machine.py +++ b/weblate/memory/machine.py @@ -17,6 +17,8 @@ # along with this program. If not, see <https://www.gnu.org/licenses/>. # +from django.urls import reverse + from weblate.machinery.base import MachineTranslation, get_machinery_language from weblate.memory.models import Memory @@ -72,4 +74,7 @@ "origin": result.get_origin_display(), "source": result.source, "show_quality": True, + "delete_url": reverse("api:memory-detail", kwargs={"pk": result.id}) + if user is not None and user.has_perm("memory.delete", result) + else None, }
{"golden_diff": "diff --git a/weblate/memory/machine.py b/weblate/memory/machine.py\n--- a/weblate/memory/machine.py\n+++ b/weblate/memory/machine.py\n@@ -17,6 +17,8 @@\n # along with this program. If not, see <https://www.gnu.org/licenses/>.\n #\n \n+from django.urls import reverse\n+\n from weblate.machinery.base import MachineTranslation, get_machinery_language\n from weblate.memory.models import Memory\n \n@@ -72,4 +74,7 @@\n \"origin\": result.get_origin_display(),\n \"source\": result.source,\n \"show_quality\": True,\n+ \"delete_url\": reverse(\"api:memory-detail\", kwargs={\"pk\": result.id})\n+ if user is not None and user.has_perm(\"memory.delete\", result)\n+ else None,\n }\n", "issue": "Translation Memory Maintenance - Delete Entries\nIt would be good to be able to maintain the TM in one way or the other.\r\n\r\nPerhaps \r\n- [ ] Search & Replace in TM\r\n- [ ] Search & Delete entries in TM\r\n\r\nOr perhpas, as a \"simple\" (?) starting point, giving the translator the option to delete single entries from the TM when they see the result list in **Automatic Suggestions**. Like perhaps:\r\n\r\n- [ ] Delete single entry in **Automatic Suggestions** view:\r\n![image](https://user-images.githubusercontent.com/26718215/130028004-4e00f2f1-0d30-4ddb-a904-d351711ee610.png)\r\n\r\n\n", "before_files": [{"content": "#\n# Copyright \u00a9 2012\u20132022 Michal \u010ciha\u0159 <[email protected]>\n#\n# This file is part of Weblate <https://weblate.org/>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <https://www.gnu.org/licenses/>.\n#\n\nfrom weblate.machinery.base import MachineTranslation, get_machinery_language\nfrom weblate.memory.models import Memory\n\n\nclass WeblateMemory(MachineTranslation):\n \"\"\"Translation service using strings already translated in Weblate.\"\"\"\n\n name = \"Weblate Translation Memory\"\n rank_boost = 2\n cache_translations = False\n same_languages = True\n accounting_key = \"internal\"\n do_cleanup = False\n\n def convert_language(self, language):\n \"\"\"No conversion of language object.\"\"\"\n return get_machinery_language(language)\n\n def is_supported(self, source, language):\n \"\"\"Any language is supported.\"\"\"\n return True\n\n def is_rate_limited(self):\n \"\"\"This service has no rate limiting.\"\"\"\n return False\n\n def download_translations(\n self,\n source,\n language,\n text: str,\n unit,\n user,\n search: bool,\n threshold: int = 75,\n ):\n \"\"\"Download list of possible translations from a service.\"\"\"\n for result in Memory.objects.lookup(\n source,\n language,\n text,\n user,\n unit.translation.component.project,\n unit.translation.component.project.use_shared_tm,\n ).iterator():\n quality = self.comparer.similarity(text, result.source)\n if quality < 10 or (quality < threshold and not search):\n continue\n yield {\n \"text\": result.target,\n \"quality\": quality,\n \"service\": self.name,\n \"origin\": result.get_origin_display(),\n \"source\": result.source,\n \"show_quality\": True,\n }\n", "path": "weblate/memory/machine.py"}], "after_files": [{"content": "#\n# Copyright \u00a9 2012\u20132022 Michal \u010ciha\u0159 <[email protected]>\n#\n# This file is part of Weblate <https://weblate.org/>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <https://www.gnu.org/licenses/>.\n#\n\nfrom django.urls import reverse\n\nfrom weblate.machinery.base import MachineTranslation, get_machinery_language\nfrom weblate.memory.models import Memory\n\n\nclass WeblateMemory(MachineTranslation):\n \"\"\"Translation service using strings already translated in Weblate.\"\"\"\n\n name = \"Weblate Translation Memory\"\n rank_boost = 2\n cache_translations = False\n same_languages = True\n accounting_key = \"internal\"\n do_cleanup = False\n\n def convert_language(self, language):\n \"\"\"No conversion of language object.\"\"\"\n return get_machinery_language(language)\n\n def is_supported(self, source, language):\n \"\"\"Any language is supported.\"\"\"\n return True\n\n def is_rate_limited(self):\n \"\"\"This service has no rate limiting.\"\"\"\n return False\n\n def download_translations(\n self,\n source,\n language,\n text: str,\n unit,\n user,\n search: bool,\n threshold: int = 75,\n ):\n \"\"\"Download list of possible translations from a service.\"\"\"\n for result in Memory.objects.lookup(\n source,\n language,\n text,\n user,\n unit.translation.component.project,\n unit.translation.component.project.use_shared_tm,\n ).iterator():\n quality = self.comparer.similarity(text, result.source)\n if quality < 10 or (quality < threshold and not search):\n continue\n yield {\n \"text\": result.target,\n \"quality\": quality,\n \"service\": self.name,\n \"origin\": result.get_origin_display(),\n \"source\": result.source,\n \"show_quality\": True,\n \"delete_url\": reverse(\"api:memory-detail\", kwargs={\"pk\": result.id})\n if user is not None and user.has_perm(\"memory.delete\", result)\n else None,\n }\n", "path": "weblate/memory/machine.py"}]}
1,094
185
gh_patches_debug_29785
rasdani/github-patches
git_diff
kivy__kivy-2429
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- DropDown defined as dynamic class in kv file does not work Test file and kivy log are here: https://gist.github.com/PARC6502/7e47667cdea5cbd0e6a4 I'm using this branch of kivy, not sure if that's the issue: https://github.com/matham/kivy/tree/focus-text If I define a custom DropDown widget in the kv file as a dynamic class, I get an error when I try to open it. If I have the exact same setup but the class is defined in python it works. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `kivy/uix/dropdown.py` Content: ``` 1 ''' 2 Drop-Down List 3 ============== 4 5 .. versionadded:: 1.4.0 6 7 A versatile drop-down list that can be used with custom widgets. It allows you 8 to display a list of widgets under a displayed widget. Unlike other toolkits, 9 the list of widgets can contain any type of widget: simple buttons, 10 images etc. 11 12 The positioning of the drop-down list is fully automatic: we will always try to 13 place the dropdown list in a way that the user can select an item in the list. 14 15 Basic example 16 ------------- 17 18 A button with a dropdown list of 10 possible values. All the buttons within the 19 dropdown list will trigger the dropdown :meth:`DropDown.select` method. After 20 being called, the main button text will display the selection of the 21 dropdown. :: 22 23 from kivy.uix.dropdown import DropDown 24 from kivy.uix.button import Button 25 from kivy.base import runTouchApp 26 27 # create a dropdown with 10 buttons 28 dropdown = DropDown() 29 for index in range(10): 30 # when adding widgets, we need to specify the height manually (disabling 31 # the size_hint_y) so the dropdown can calculate the area it needs. 32 btn = Button(text='Value %d' % index, size_hint_y=None, height=44) 33 34 # for each button, attach a callback that will call the select() method 35 # on the dropdown. We'll pass the text of the button as the data of the 36 # selection. 37 btn.bind(on_release=lambda btn: dropdown.select(btn.text)) 38 39 # then add the button inside the dropdown 40 dropdown.add_widget(btn) 41 42 # create a big main button 43 mainbutton = Button(text='Hello', size_hint=(None, None)) 44 45 # show the dropdown menu when the main button is released 46 # note: all the bind() calls pass the instance of the caller (here, the 47 # mainbutton instance) as the first argument of the callback (here, 48 # dropdown.open.). 49 mainbutton.bind(on_release=dropdown.open) 50 51 # one last thing, listen for the selection in the dropdown list and 52 # assign the data to the button text. 
53 dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x)) 54 55 runTouchApp(mainbutton) 56 57 Extending dropdown in Kv 58 ------------------------ 59 60 You could create a dropdown directly from your kv:: 61 62 #:kivy 1.4.0 63 <CustomDropDown>: 64 Button: 65 text: 'My first Item' 66 size_hint_y: None 67 height: 44 68 on_release: root.select('item1') 69 Label: 70 text: 'Unselectable item' 71 size_hint_y: None 72 height: 44 73 Button: 74 text: 'My second Item' 75 size_hint_y: None 76 height: 44 77 on_release: root.select('item2') 78 79 And then, create the associated python class and use it:: 80 81 class CustomDropDown(DropDown): 82 pass 83 84 dropdown = CustomDropDown() 85 mainbutton = Button(text='Hello', size_hint=(None, None)) 86 mainbutton.bind(on_release=dropdown.open) 87 dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x)) 88 ''' 89 90 __all__ = ('DropDown', ) 91 92 from kivy.uix.scrollview import ScrollView 93 from kivy.properties import ObjectProperty, NumericProperty, BooleanProperty 94 from kivy.core.window import Window 95 from kivy.lang import Builder 96 97 Builder.load_string(''' 98 <DropDown>: 99 container: container 100 do_scroll_x: False 101 size_hint: None, None 102 103 GridLayout: 104 id: container 105 size_hint_y: None 106 height: self.minimum_size[1] 107 cols: 1 108 ''') 109 110 111 class DropDownException(Exception): 112 '''DropDownException class. 113 ''' 114 pass 115 116 117 class DropDown(ScrollView): 118 '''DropDown class. See module documentation for more information. 119 120 :Events: 121 `on_select`: data 122 Fired when a selection is done. The data of the selection is passed 123 in as the first argument and is what you pass in the :meth:`select` 124 method as the first argument. 125 `on_dismiss`: 126 .. versionadded:: 1.8.0 127 128 Fired when the DropDown is dismissed, either on selection or on 129 touching outside the widget. 130 ''' 131 132 auto_width = BooleanProperty(True) 133 '''By default, the width of the dropdown will be the same as the width of 134 the attached widget. Set to False if you want to provide your own width. 135 ''' 136 137 max_height = NumericProperty(None, allownone=True) 138 '''Indicate the maximum height that the dropdown can take. If None, it will 139 take the maximum height available until the top or bottom of the screen 140 is reached. 141 142 :attr:`max_height` is a :class:`~kivy.properties.NumericProperty` and 143 defaults to None. 144 ''' 145 146 dismiss_on_select = BooleanProperty(True) 147 '''By default, the dropdown will be automatically dismissed when a 148 selection has been done. Set to False to prevent the dismiss. 149 150 :attr:`dismiss_on_select` is a :class:`~kivy.properties.BooleanProperty` 151 and defaults to True. 152 ''' 153 154 auto_dismiss = BooleanProperty(True) 155 '''By default, the dropdown will be automatically dismissed when a 156 touch happens outside of it, this option allow to disable this 157 feature 158 159 :attr:`auto_dismiss` is a :class:`~kivy.properties.BooleanProperty` 160 and defaults to True. 161 162 .. versionadded:: 1.8.0 163 ''' 164 165 attach_to = ObjectProperty(allownone=True) 166 '''(internal) Property that will be set to the widget to which the 167 drop down list is attached. 168 169 The :meth:`open` method will automatically set this property whilst 170 :meth:`dismiss` will set it back to None. 171 ''' 172 173 container = ObjectProperty() 174 '''(internal) Property that will be set to the container of the dropdown 175 list. 
It is a :class:`~kivy.uix.gridlayout.GridLayout` by default. 176 ''' 177 178 __events__ = ('on_select', 'on_dismiss') 179 180 def __init__(self, **kwargs): 181 self._win = None 182 super(DropDown, self).__init__(**kwargs) 183 Window.bind(on_key_down=self.on_key_down) 184 self.bind(size=self._reposition) 185 186 def on_key_down(self, instance, key, scancode, codepoint, modifiers): 187 if key == 27 and self.get_parent_window(): 188 self.dismiss() 189 return True 190 191 def on_container(self, instance, value): 192 self.container.bind(minimum_size=self._container_minimum_size) 193 194 def open(self, widget): 195 '''Open the dropdown list and attach it to a specific widget. 196 Depending on the position of the widget within the window and 197 the height of the dropdown, the dropdown might be above or below 198 that widget. 199 ''' 200 # ensure we are not already attached 201 if self.attach_to is not None: 202 self.dismiss() 203 204 # we will attach ourself to the main window, so ensure the 205 # widget we are looking for have a window 206 self._win = widget.get_parent_window() 207 if self._win is None: 208 raise DropDownException( 209 'Cannot open a dropdown list on a hidden widget') 210 211 self.attach_to = widget 212 widget.bind(pos=self._reposition, size=self._reposition) 213 self._reposition() 214 215 # attach ourself to the main window 216 self._win.add_widget(self) 217 218 def dismiss(self, *largs): 219 '''Remove the dropdown widget from the window and detach it from 220 the attached widget. 221 ''' 222 if self.parent: 223 self.parent.remove_widget(self) 224 if self.attach_to: 225 self.attach_to.unbind(pos=self._reposition, size=self._reposition) 226 self.attach_to = None 227 self.dispatch('on_dismiss') 228 229 def on_dismiss(self): 230 pass 231 232 def select(self, data): 233 '''Call this method to trigger the `on_select` event with the `data` 234 selection. The `data` can be anything you want. 
235 ''' 236 self.dispatch('on_select', data) 237 if self.dismiss_on_select: 238 self.dismiss() 239 240 def on_select(self, data): 241 pass 242 243 def _container_minimum_size(self, instance, size): 244 if self.max_height: 245 self.height = min(size[1], self.max_height) 246 self.do_scroll_y = size[1] > self.max_height 247 else: 248 self.height = size[1] 249 self.do_scroll_y = True 250 251 def add_widget(self, *largs): 252 if self.container: 253 return self.container.add_widget(*largs) 254 return super(DropDown, self).add_widget(*largs) 255 256 def remove_widget(self, *largs): 257 if self.container: 258 return self.container.remove_widget(*largs) 259 return super(DropDown, self).remove_widget(*largs) 260 261 def clear_widgets(self): 262 if self.container: 263 return self.container.clear_widgets() 264 return super(DropDown, self).clear_widgets() 265 266 def on_touch_down(self, touch): 267 if super(DropDown, self).on_touch_down(touch): 268 return True 269 if self.collide_point(*touch.pos): 270 return True 271 if self.attach_to and self.attach_to.collide_point(*touch.pos): 272 return True 273 if self.auto_dismiss: 274 self.dismiss() 275 276 def on_touch_up(self, touch): 277 if super(DropDown, self).on_touch_up(touch): 278 return True 279 if 'button' in touch.profile and touch.button.startswith('scroll'): 280 return 281 if self.auto_dismiss: 282 self.dismiss() 283 284 def _reposition(self, *largs): 285 # calculate the coordinate of the attached widget in the window 286 # coordinate system 287 win = self._win 288 widget = self.attach_to 289 if not widget or not win: 290 return 291 wx, wy = widget.to_window(*widget.pos) 292 wright, wtop = widget.to_window(widget.right, widget.top) 293 294 # set width and x 295 if self.auto_width: 296 self.width = wright - wx 297 298 # ensure the dropdown list doesn't get out on the X axis, with a 299 # preference to 0 in case the list is too wide. 300 x = wx 301 if x + self.width > win.width: 302 x = win.width - self.width 303 if x < 0: 304 x = 0 305 self.x = x 306 307 # determine if we display the dropdown upper or lower to the widget 308 h_bottom = wy - self.height 309 h_top = win.height - (wtop + self.height) 310 if h_bottom > 0: 311 self.top = wy 312 elif h_top > 0: 313 self.y = wtop 314 else: 315 # none of both top/bottom have enough place to display the 316 # widget at the current size. Take the best side, and fit to 317 # it. 318 height = max(h_bottom, h_top) 319 if height == h_bottom: 320 self.top = wy 321 self.height = wy 322 else: 323 self.y = wtop 324 self.height = win.height - wtop 325 326 327 if __name__ == '__main__': 328 from kivy.uix.button import Button 329 from kivy.base import runTouchApp 330 331 def show_dropdown(button, *largs): 332 dp = DropDown() 333 dp.bind(on_select=lambda instance, x: setattr(button, 'text', x)) 334 for i in range(10): 335 item = Button(text='hello %d' % i, size_hint_y=None, height=44) 336 item.bind(on_release=lambda btn: dp.select(btn.text)) 337 dp.add_widget(item) 338 dp.open(button) 339 340 def touch_move(instance, touch): 341 instance.center = touch.pos 342 343 btn = Button(text='SHOW', size_hint=(None, None), pos=(300, 200)) 344 btn.bind(on_release=show_dropdown, on_touch_move=touch_move) 345 346 runTouchApp(btn) 347 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/kivy/uix/dropdown.py b/kivy/uix/dropdown.py --- a/kivy/uix/dropdown.py +++ b/kivy/uix/dropdown.py @@ -94,18 +94,12 @@ from kivy.core.window import Window from kivy.lang import Builder -Builder.load_string(''' -<DropDown>: - container: container - do_scroll_x: False - size_hint: None, None - - GridLayout: - id: container - size_hint_y: None - height: self.minimum_size[1] - cols: 1 -''') +_grid_kv = ''' +GridLayout: + size_hint_y: None + height: self.minimum_size[1] + cols: 1 +''' class DropDownException(Exception): @@ -179,7 +173,18 @@ def __init__(self, **kwargs): self._win = None + if 'container' not in kwargs: + c = self.container = Builder.load_string(_grid_kv) + else: + c = None + kwargs.setdefault('do_scroll_x', False) + if 'size_hint' not in kwargs: + kwargs.setdefault('size_hint_x', None) + kwargs.setdefault('size_hint_y', None) super(DropDown, self).__init__(**kwargs) + if c is not None: + super(DropDown, self).add_widget(c) + self.on_container(self, c) Window.bind(on_key_down=self.on_key_down) self.bind(size=self._reposition) @@ -189,7 +194,8 @@ return True def on_container(self, instance, value): - self.container.bind(minimum_size=self._container_minimum_size) + if value is not None: + self.container.bind(minimum_size=self._container_minimum_size) def open(self, widget): '''Open the dropdown list and attach it to a specific widget.
{"golden_diff": "diff --git a/kivy/uix/dropdown.py b/kivy/uix/dropdown.py\n--- a/kivy/uix/dropdown.py\n+++ b/kivy/uix/dropdown.py\n@@ -94,18 +94,12 @@\n from kivy.core.window import Window\n from kivy.lang import Builder\n \n-Builder.load_string('''\n-<DropDown>:\n- container: container\n- do_scroll_x: False\n- size_hint: None, None\n-\n- GridLayout:\n- id: container\n- size_hint_y: None\n- height: self.minimum_size[1]\n- cols: 1\n-''')\n+_grid_kv = '''\n+GridLayout:\n+ size_hint_y: None\n+ height: self.minimum_size[1]\n+ cols: 1\n+'''\n \n \n class DropDownException(Exception):\n@@ -179,7 +173,18 @@\n \n def __init__(self, **kwargs):\n self._win = None\n+ if 'container' not in kwargs:\n+ c = self.container = Builder.load_string(_grid_kv)\n+ else:\n+ c = None\n+ kwargs.setdefault('do_scroll_x', False)\n+ if 'size_hint' not in kwargs:\n+ kwargs.setdefault('size_hint_x', None)\n+ kwargs.setdefault('size_hint_y', None)\n super(DropDown, self).__init__(**kwargs)\n+ if c is not None:\n+ super(DropDown, self).add_widget(c)\n+ self.on_container(self, c)\n Window.bind(on_key_down=self.on_key_down)\n self.bind(size=self._reposition)\n \n@@ -189,7 +194,8 @@\n return True\n \n def on_container(self, instance, value):\n- self.container.bind(minimum_size=self._container_minimum_size)\n+ if value is not None:\n+ self.container.bind(minimum_size=self._container_minimum_size)\n \n def open(self, widget):\n '''Open the dropdown list and attach it to a specific widget.\n", "issue": "DropDown defined as dynamic class in kv file does not work\nTest file and kivy log are here: https://gist.github.com/PARC6502/7e47667cdea5cbd0e6a4\n\nI'm using this branch of kivy, not sure if that's the issue: https://github.com/matham/kivy/tree/focus-text\n\nIf I define a custom DropDown widget in the kv file as a dynamic class, I get an error when I try to open it. If I have the exact same setup but the class is defined in python it works.\n\n", "before_files": [{"content": "'''\nDrop-Down List\n==============\n\n.. versionadded:: 1.4.0\n\nA versatile drop-down list that can be used with custom widgets. It allows you\nto display a list of widgets under a displayed widget. Unlike other toolkits,\nthe list of widgets can contain any type of widget: simple buttons,\nimages etc.\n\nThe positioning of the drop-down list is fully automatic: we will always try to\nplace the dropdown list in a way that the user can select an item in the list.\n\nBasic example\n-------------\n\nA button with a dropdown list of 10 possible values. All the buttons within the\ndropdown list will trigger the dropdown :meth:`DropDown.select` method. After\nbeing called, the main button text will display the selection of the\ndropdown. ::\n\n from kivy.uix.dropdown import DropDown\n from kivy.uix.button import Button\n from kivy.base import runTouchApp\n\n # create a dropdown with 10 buttons\n dropdown = DropDown()\n for index in range(10):\n # when adding widgets, we need to specify the height manually (disabling\n # the size_hint_y) so the dropdown can calculate the area it needs.\n btn = Button(text='Value %d' % index, size_hint_y=None, height=44)\n\n # for each button, attach a callback that will call the select() method\n # on the dropdown. 
We'll pass the text of the button as the data of the\n # selection.\n btn.bind(on_release=lambda btn: dropdown.select(btn.text))\n\n # then add the button inside the dropdown\n dropdown.add_widget(btn)\n\n # create a big main button\n mainbutton = Button(text='Hello', size_hint=(None, None))\n\n # show the dropdown menu when the main button is released\n # note: all the bind() calls pass the instance of the caller (here, the\n # mainbutton instance) as the first argument of the callback (here,\n # dropdown.open.).\n mainbutton.bind(on_release=dropdown.open)\n\n # one last thing, listen for the selection in the dropdown list and\n # assign the data to the button text.\n dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x))\n\n runTouchApp(mainbutton)\n\nExtending dropdown in Kv\n------------------------\n\nYou could create a dropdown directly from your kv::\n\n #:kivy 1.4.0\n <CustomDropDown>:\n Button:\n text: 'My first Item'\n size_hint_y: None\n height: 44\n on_release: root.select('item1')\n Label:\n text: 'Unselectable item'\n size_hint_y: None\n height: 44\n Button:\n text: 'My second Item'\n size_hint_y: None\n height: 44\n on_release: root.select('item2')\n\nAnd then, create the associated python class and use it::\n\n class CustomDropDown(DropDown):\n pass\n\n dropdown = CustomDropDown()\n mainbutton = Button(text='Hello', size_hint=(None, None))\n mainbutton.bind(on_release=dropdown.open)\n dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x))\n'''\n\n__all__ = ('DropDown', )\n\nfrom kivy.uix.scrollview import ScrollView\nfrom kivy.properties import ObjectProperty, NumericProperty, BooleanProperty\nfrom kivy.core.window import Window\nfrom kivy.lang import Builder\n\nBuilder.load_string('''\n<DropDown>:\n container: container\n do_scroll_x: False\n size_hint: None, None\n\n GridLayout:\n id: container\n size_hint_y: None\n height: self.minimum_size[1]\n cols: 1\n''')\n\n\nclass DropDownException(Exception):\n '''DropDownException class.\n '''\n pass\n\n\nclass DropDown(ScrollView):\n '''DropDown class. See module documentation for more information.\n\n :Events:\n `on_select`: data\n Fired when a selection is done. The data of the selection is passed\n in as the first argument and is what you pass in the :meth:`select`\n method as the first argument.\n `on_dismiss`:\n .. versionadded:: 1.8.0\n\n Fired when the DropDown is dismissed, either on selection or on\n touching outside the widget.\n '''\n\n auto_width = BooleanProperty(True)\n '''By default, the width of the dropdown will be the same as the width of\n the attached widget. Set to False if you want to provide your own width.\n '''\n\n max_height = NumericProperty(None, allownone=True)\n '''Indicate the maximum height that the dropdown can take. If None, it will\n take the maximum height available until the top or bottom of the screen\n is reached.\n\n :attr:`max_height` is a :class:`~kivy.properties.NumericProperty` and\n defaults to None.\n '''\n\n dismiss_on_select = BooleanProperty(True)\n '''By default, the dropdown will be automatically dismissed when a\n selection has been done. 
Set to False to prevent the dismiss.\n\n :attr:`dismiss_on_select` is a :class:`~kivy.properties.BooleanProperty`\n and defaults to True.\n '''\n\n auto_dismiss = BooleanProperty(True)\n '''By default, the dropdown will be automatically dismissed when a\n touch happens outside of it, this option allow to disable this\n feature\n\n :attr:`auto_dismiss` is a :class:`~kivy.properties.BooleanProperty`\n and defaults to True.\n\n .. versionadded:: 1.8.0\n '''\n\n attach_to = ObjectProperty(allownone=True)\n '''(internal) Property that will be set to the widget to which the\n drop down list is attached.\n\n The :meth:`open` method will automatically set this property whilst\n :meth:`dismiss` will set it back to None.\n '''\n\n container = ObjectProperty()\n '''(internal) Property that will be set to the container of the dropdown\n list. It is a :class:`~kivy.uix.gridlayout.GridLayout` by default.\n '''\n\n __events__ = ('on_select', 'on_dismiss')\n\n def __init__(self, **kwargs):\n self._win = None\n super(DropDown, self).__init__(**kwargs)\n Window.bind(on_key_down=self.on_key_down)\n self.bind(size=self._reposition)\n\n def on_key_down(self, instance, key, scancode, codepoint, modifiers):\n if key == 27 and self.get_parent_window():\n self.dismiss()\n return True\n\n def on_container(self, instance, value):\n self.container.bind(minimum_size=self._container_minimum_size)\n\n def open(self, widget):\n '''Open the dropdown list and attach it to a specific widget.\n Depending on the position of the widget within the window and\n the height of the dropdown, the dropdown might be above or below\n that widget.\n '''\n # ensure we are not already attached\n if self.attach_to is not None:\n self.dismiss()\n\n # we will attach ourself to the main window, so ensure the\n # widget we are looking for have a window\n self._win = widget.get_parent_window()\n if self._win is None:\n raise DropDownException(\n 'Cannot open a dropdown list on a hidden widget')\n\n self.attach_to = widget\n widget.bind(pos=self._reposition, size=self._reposition)\n self._reposition()\n\n # attach ourself to the main window\n self._win.add_widget(self)\n\n def dismiss(self, *largs):\n '''Remove the dropdown widget from the window and detach it from\n the attached widget.\n '''\n if self.parent:\n self.parent.remove_widget(self)\n if self.attach_to:\n self.attach_to.unbind(pos=self._reposition, size=self._reposition)\n self.attach_to = None\n self.dispatch('on_dismiss')\n\n def on_dismiss(self):\n pass\n\n def select(self, data):\n '''Call this method to trigger the `on_select` event with the `data`\n selection. 
The `data` can be anything you want.\n '''\n self.dispatch('on_select', data)\n if self.dismiss_on_select:\n self.dismiss()\n\n def on_select(self, data):\n pass\n\n def _container_minimum_size(self, instance, size):\n if self.max_height:\n self.height = min(size[1], self.max_height)\n self.do_scroll_y = size[1] > self.max_height\n else:\n self.height = size[1]\n self.do_scroll_y = True\n\n def add_widget(self, *largs):\n if self.container:\n return self.container.add_widget(*largs)\n return super(DropDown, self).add_widget(*largs)\n\n def remove_widget(self, *largs):\n if self.container:\n return self.container.remove_widget(*largs)\n return super(DropDown, self).remove_widget(*largs)\n\n def clear_widgets(self):\n if self.container:\n return self.container.clear_widgets()\n return super(DropDown, self).clear_widgets()\n\n def on_touch_down(self, touch):\n if super(DropDown, self).on_touch_down(touch):\n return True\n if self.collide_point(*touch.pos):\n return True\n if self.attach_to and self.attach_to.collide_point(*touch.pos):\n return True\n if self.auto_dismiss:\n self.dismiss()\n\n def on_touch_up(self, touch):\n if super(DropDown, self).on_touch_up(touch):\n return True\n if 'button' in touch.profile and touch.button.startswith('scroll'):\n return\n if self.auto_dismiss:\n self.dismiss()\n\n def _reposition(self, *largs):\n # calculate the coordinate of the attached widget in the window\n # coordinate system\n win = self._win\n widget = self.attach_to\n if not widget or not win:\n return\n wx, wy = widget.to_window(*widget.pos)\n wright, wtop = widget.to_window(widget.right, widget.top)\n\n # set width and x\n if self.auto_width:\n self.width = wright - wx\n\n # ensure the dropdown list doesn't get out on the X axis, with a\n # preference to 0 in case the list is too wide.\n x = wx\n if x + self.width > win.width:\n x = win.width - self.width\n if x < 0:\n x = 0\n self.x = x\n\n # determine if we display the dropdown upper or lower to the widget\n h_bottom = wy - self.height\n h_top = win.height - (wtop + self.height)\n if h_bottom > 0:\n self.top = wy\n elif h_top > 0:\n self.y = wtop\n else:\n # none of both top/bottom have enough place to display the\n # widget at the current size. Take the best side, and fit to\n # it.\n height = max(h_bottom, h_top)\n if height == h_bottom:\n self.top = wy\n self.height = wy\n else:\n self.y = wtop\n self.height = win.height - wtop\n\n\nif __name__ == '__main__':\n from kivy.uix.button import Button\n from kivy.base import runTouchApp\n\n def show_dropdown(button, *largs):\n dp = DropDown()\n dp.bind(on_select=lambda instance, x: setattr(button, 'text', x))\n for i in range(10):\n item = Button(text='hello %d' % i, size_hint_y=None, height=44)\n item.bind(on_release=lambda btn: dp.select(btn.text))\n dp.add_widget(item)\n dp.open(button)\n\n def touch_move(instance, touch):\n instance.center = touch.pos\n\n btn = Button(text='SHOW', size_hint=(None, None), pos=(300, 200))\n btn.bind(on_release=show_dropdown, on_touch_move=touch_move)\n\n runTouchApp(btn)\n", "path": "kivy/uix/dropdown.py"}], "after_files": [{"content": "'''\nDrop-Down List\n==============\n\n.. versionadded:: 1.4.0\n\nA versatile drop-down list that can be used with custom widgets. It allows you\nto display a list of widgets under a displayed widget. 
Unlike other toolkits,\nthe list of widgets can contain any type of widget: simple buttons,\nimages etc.\n\nThe positioning of the drop-down list is fully automatic: we will always try to\nplace the dropdown list in a way that the user can select an item in the list.\n\nBasic example\n-------------\n\nA button with a dropdown list of 10 possible values. All the buttons within the\ndropdown list will trigger the dropdown :meth:`DropDown.select` method. After\nbeing called, the main button text will display the selection of the\ndropdown. ::\n\n from kivy.uix.dropdown import DropDown\n from kivy.uix.button import Button\n from kivy.base import runTouchApp\n\n # create a dropdown with 10 buttons\n dropdown = DropDown()\n for index in range(10):\n # when adding widgets, we need to specify the height manually (disabling\n # the size_hint_y) so the dropdown can calculate the area it needs.\n btn = Button(text='Value %d' % index, size_hint_y=None, height=44)\n\n # for each button, attach a callback that will call the select() method\n # on the dropdown. We'll pass the text of the button as the data of the\n # selection.\n btn.bind(on_release=lambda btn: dropdown.select(btn.text))\n\n # then add the button inside the dropdown\n dropdown.add_widget(btn)\n\n # create a big main button\n mainbutton = Button(text='Hello', size_hint=(None, None))\n\n # show the dropdown menu when the main button is released\n # note: all the bind() calls pass the instance of the caller (here, the\n # mainbutton instance) as the first argument of the callback (here,\n # dropdown.open.).\n mainbutton.bind(on_release=dropdown.open)\n\n # one last thing, listen for the selection in the dropdown list and\n # assign the data to the button text.\n dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x))\n\n runTouchApp(mainbutton)\n\nExtending dropdown in Kv\n------------------------\n\nYou could create a dropdown directly from your kv::\n\n #:kivy 1.4.0\n <CustomDropDown>:\n Button:\n text: 'My first Item'\n size_hint_y: None\n height: 44\n on_release: root.select('item1')\n Label:\n text: 'Unselectable item'\n size_hint_y: None\n height: 44\n Button:\n text: 'My second Item'\n size_hint_y: None\n height: 44\n on_release: root.select('item2')\n\nAnd then, create the associated python class and use it::\n\n class CustomDropDown(DropDown):\n pass\n\n dropdown = CustomDropDown()\n mainbutton = Button(text='Hello', size_hint=(None, None))\n mainbutton.bind(on_release=dropdown.open)\n dropdown.bind(on_select=lambda instance, x: setattr(mainbutton, 'text', x))\n'''\n\n__all__ = ('DropDown', )\n\nfrom kivy.uix.scrollview import ScrollView\nfrom kivy.properties import ObjectProperty, NumericProperty, BooleanProperty\nfrom kivy.core.window import Window\nfrom kivy.lang import Builder\n\n_grid_kv = '''\nGridLayout:\n size_hint_y: None\n height: self.minimum_size[1]\n cols: 1\n'''\n\n\nclass DropDownException(Exception):\n '''DropDownException class.\n '''\n pass\n\n\nclass DropDown(ScrollView):\n '''DropDown class. See module documentation for more information.\n\n :Events:\n `on_select`: data\n Fired when a selection is done. The data of the selection is passed\n in as the first argument and is what you pass in the :meth:`select`\n method as the first argument.\n `on_dismiss`:\n .. 
versionadded:: 1.8.0\n\n Fired when the DropDown is dismissed, either on selection or on\n touching outside the widget.\n '''\n\n auto_width = BooleanProperty(True)\n '''By default, the width of the dropdown will be the same as the width of\n the attached widget. Set to False if you want to provide your own width.\n '''\n\n max_height = NumericProperty(None, allownone=True)\n '''Indicate the maximum height that the dropdown can take. If None, it will\n take the maximum height available until the top or bottom of the screen\n is reached.\n\n :attr:`max_height` is a :class:`~kivy.properties.NumericProperty` and\n defaults to None.\n '''\n\n dismiss_on_select = BooleanProperty(True)\n '''By default, the dropdown will be automatically dismissed when a\n selection has been done. Set to False to prevent the dismiss.\n\n :attr:`dismiss_on_select` is a :class:`~kivy.properties.BooleanProperty`\n and defaults to True.\n '''\n\n auto_dismiss = BooleanProperty(True)\n '''By default, the dropdown will be automatically dismissed when a\n touch happens outside of it, this option allow to disable this\n feature\n\n :attr:`auto_dismiss` is a :class:`~kivy.properties.BooleanProperty`\n and defaults to True.\n\n .. versionadded:: 1.8.0\n '''\n\n attach_to = ObjectProperty(allownone=True)\n '''(internal) Property that will be set to the widget to which the\n drop down list is attached.\n\n The :meth:`open` method will automatically set this property whilst\n :meth:`dismiss` will set it back to None.\n '''\n\n container = ObjectProperty()\n '''(internal) Property that will be set to the container of the dropdown\n list. It is a :class:`~kivy.uix.gridlayout.GridLayout` by default.\n '''\n\n __events__ = ('on_select', 'on_dismiss')\n\n def __init__(self, **kwargs):\n self._win = None\n if 'container' not in kwargs:\n c = self.container = Builder.load_string(_grid_kv)\n else:\n c = None\n kwargs.setdefault('do_scroll_x', False)\n if 'size_hint' not in kwargs:\n kwargs.setdefault('size_hint_x', None)\n kwargs.setdefault('size_hint_y', None)\n super(DropDown, self).__init__(**kwargs)\n if c is not None:\n super(DropDown, self).add_widget(c)\n self.on_container(self, c)\n Window.bind(on_key_down=self.on_key_down)\n self.bind(size=self._reposition)\n\n def on_key_down(self, instance, key, scancode, codepoint, modifiers):\n if key == 27 and self.get_parent_window():\n self.dismiss()\n return True\n\n def on_container(self, instance, value):\n if value is not None:\n self.container.bind(minimum_size=self._container_minimum_size)\n\n def open(self, widget):\n '''Open the dropdown list and attach it to a specific widget.\n Depending on the position of the widget within the window and\n the height of the dropdown, the dropdown might be above or below\n that widget.\n '''\n # ensure we are not already attached\n if self.attach_to is not None:\n self.dismiss()\n\n # we will attach ourself to the main window, so ensure the\n # widget we are looking for have a window\n self._win = widget.get_parent_window()\n if self._win is None:\n raise DropDownException(\n 'Cannot open a dropdown list on a hidden widget')\n\n self.attach_to = widget\n widget.bind(pos=self._reposition, size=self._reposition)\n self._reposition()\n\n # attach ourself to the main window\n self._win.add_widget(self)\n\n def dismiss(self, *largs):\n '''Remove the dropdown widget from the window and detach it from\n the attached widget.\n '''\n if self.parent:\n self.parent.remove_widget(self)\n if self.attach_to:\n self.attach_to.unbind(pos=self._reposition, 
size=self._reposition)\n self.attach_to = None\n self.dispatch('on_dismiss')\n\n def on_dismiss(self):\n pass\n\n def select(self, data):\n '''Call this method to trigger the `on_select` event with the `data`\n selection. The `data` can be anything you want.\n '''\n self.dispatch('on_select', data)\n if self.dismiss_on_select:\n self.dismiss()\n\n def on_select(self, data):\n pass\n\n def _container_minimum_size(self, instance, size):\n if self.max_height:\n self.height = min(size[1], self.max_height)\n self.do_scroll_y = size[1] > self.max_height\n else:\n self.height = size[1]\n self.do_scroll_y = True\n\n def add_widget(self, *largs):\n if self.container:\n return self.container.add_widget(*largs)\n return super(DropDown, self).add_widget(*largs)\n\n def remove_widget(self, *largs):\n if self.container:\n return self.container.remove_widget(*largs)\n return super(DropDown, self).remove_widget(*largs)\n\n def clear_widgets(self):\n if self.container:\n return self.container.clear_widgets()\n return super(DropDown, self).clear_widgets()\n\n def on_touch_down(self, touch):\n if super(DropDown, self).on_touch_down(touch):\n return True\n if self.collide_point(*touch.pos):\n return True\n if self.attach_to and self.attach_to.collide_point(*touch.pos):\n return True\n if self.auto_dismiss:\n self.dismiss()\n\n def on_touch_up(self, touch):\n if super(DropDown, self).on_touch_up(touch):\n return True\n if 'button' in touch.profile and touch.button.startswith('scroll'):\n return\n if self.auto_dismiss:\n self.dismiss()\n\n def _reposition(self, *largs):\n # calculate the coordinate of the attached widget in the window\n # coordinate system\n win = self._win\n widget = self.attach_to\n if not widget or not win:\n return\n wx, wy = widget.to_window(*widget.pos)\n wright, wtop = widget.to_window(widget.right, widget.top)\n\n # set width and x\n if self.auto_width:\n self.width = wright - wx\n\n # ensure the dropdown list doesn't get out on the X axis, with a\n # preference to 0 in case the list is too wide.\n x = wx\n if x + self.width > win.width:\n x = win.width - self.width\n if x < 0:\n x = 0\n self.x = x\n\n # determine if we display the dropdown upper or lower to the widget\n h_bottom = wy - self.height\n h_top = win.height - (wtop + self.height)\n if h_bottom > 0:\n self.top = wy\n elif h_top > 0:\n self.y = wtop\n else:\n # none of both top/bottom have enough place to display the\n # widget at the current size. Take the best side, and fit to\n # it.\n height = max(h_bottom, h_top)\n if height == h_bottom:\n self.top = wy\n self.height = wy\n else:\n self.y = wtop\n self.height = win.height - wtop\n\n\nif __name__ == '__main__':\n from kivy.uix.button import Button\n from kivy.base import runTouchApp\n\n def show_dropdown(button, *largs):\n dp = DropDown()\n dp.bind(on_select=lambda instance, x: setattr(button, 'text', x))\n for i in range(10):\n item = Button(text='hello %d' % i, size_hint_y=None, height=44)\n item.bind(on_release=lambda btn: dp.select(btn.text))\n dp.add_widget(item)\n dp.open(button)\n\n def touch_move(instance, touch):\n instance.center = touch.pos\n\n btn = Button(text='SHOW', size_hint=(None, None), pos=(300, 200))\n btn.bind(on_release=show_dropdown, on_touch_move=touch_move)\n\n runTouchApp(btn)\n", "path": "kivy/uix/dropdown.py"}]}
3,987
445
gh_patches_debug_37589
rasdani/github-patches
git_diff
ESMCI__cime-2777
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- PR #2759 has caused DAE.f19_f19.A to fail during phase RUN The errors can be reproduced on anlworkstation by running the following cime_developer test: `./create_test DAE.f19_f19.A --machine=anlworkstation --test-root=****` Errors were: ``` Exception during run: ERROR: ERROR: Expected 1 Initial run message(s), 0 found in DAE.f19_f19.A.anlworkstation_gnu.xxxx/run/case2run/da.log.xxxx.gz Traceback (most recent call last): File "scripts/lib/CIME/SystemTests/system_tests_common.py", line 145, in run self.run_phase() File "scripts/lib/CIME/SystemTests/dae.py", line 155, in run_phase "ERROR: Expected {} Initial run message(s), {} found in {}".format(expected_init, found_init, fname)) File "scripts/lib/CIME/utils.py", line 130, in expect raise exc_type(msg) SystemExit: ERROR: ERROR: Expected 1 Initial run message(s), 0 found in DAE.f19_f19.A.anlworkstation_gnu.xxxx/run/case2run/da.log.xxxx.gz ``` Do we need some settings on anlworkstation to make it work? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `scripts/lib/CIME/SystemTests/dae.py` Content: ``` 1 """ 2 Implementation of the CIME data assimilation test: 3 Compares standard run with run broken into two data assimilation cycles. 4 Runs a simple DA script on each cycle which performs checks but does not 5 change any model state (restart files). Compares answers of two runs. 6 7 """ 8 9 import os.path 10 import logging 11 import glob 12 import gzip 13 14 import CIME.XML.standard_module_setup as sms 15 from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo 16 from CIME.utils import expect 17 18 ############################################################################### 19 class DAE(SystemTestsCompareTwo): 20 ############################################################################### 21 """ 22 Implementation of the CIME data assimilation test: 23 Compares standard run with a run broken into two data assimilation cycles. 24 Runs a simple DA script on each cycle which performs checks but does not 25 change any model state (restart files). Compares answers of two runs. 
26 Refers to a faux data assimilation script in the 27 cime/scripts/data_assimilation directory 28 """ 29 30 ########################################################################### 31 def __init__(self, case): 32 ########################################################################### 33 SystemTestsCompareTwo.__init__(self, case, 34 separate_builds=False, 35 run_two_suffix='da', 36 run_one_description='no data assimilation', 37 run_two_description='data assimilation') 38 39 ########################################################################### 40 def _case_one_setup(self): 41 ########################################################################### 42 # Even though there may be test mods turning on data assimilation, 43 # case1 is the control so turn it off 44 self._case.set_value("DATA_ASSIMILATION_SCRIPT", "") 45 self._case.set_value("DATA_ASSIMILATION_CYCLES", 1) 46 47 ########################################################################### 48 def _case_two_setup(self): 49 ########################################################################### 50 # Allow testmods to set an assimilation script 51 if len(self._case.get_value("DATA_ASSIMILATION_SCRIPT")) == 0: 52 # We need to find the scripts/data_assimilation directory 53 # LIB_DIR should be our parent dir 54 da_dir = os.path.join(os.path.dirname(sms.LIB_DIR), "data_assimilation") 55 expect(os.path.isdir(da_dir), "ERROR: da_dir, '{}', does not exist".format(da_dir)) 56 da_file = os.path.join(da_dir, "da_no_data_mod.sh") 57 expect(os.path.isfile(da_file), "ERROR: da_file, '{}', does not exist".format(da_file)) 58 # Set up two data assimilation cycles each half of the full run 59 self._case.set_value("DATA_ASSIMILATION_SCRIPT", da_file) 60 61 # We need at least 2 DA cycles 62 da_cycles = self._case.get_value("DATA_ASSIMILATION_CYCLES") 63 if da_cycles < 2: 64 da_cycles = 2 65 self._case.set_value("DATA_ASSIMILATION_CYCLES", da_cycles) 66 stopn = self._case.get_value("STOP_N") 67 expect((stopn % da_cycles) == 0, "ERROR: DAE test with {0} cycles requires that STOP_N be divisible by {0}".format(da_cycles)) 68 stopn = int(stopn / da_cycles) 69 self._case.set_value("STOP_N", stopn) 70 71 self._case.flush() 72 73 ########################################################################### 74 def run_phase(self): # pylint: disable=arguments-differ 75 ########################################################################### 76 # Clean up any da.log files in case this is a re-run. 77 self._activate_case2() 78 case_root = self._get_caseroot2() 79 rundir2 = self._case.get_value("RUNDIR") 80 da_files = glob.glob(os.path.join(rundir2, 'da.log.*')) 81 for file_ in da_files: 82 os.remove(file_) 83 # End for 84 85 # CONTINUE_RUN ends up TRUE, set it back in case this is a re-run. 
86 self._case.set_value("CONTINUE_RUN", False) 87 # Start normal run here 88 self._activate_case1() 89 SystemTestsCompareTwo.run_phase(self) 90 91 # Do some checks on the data assimilation 'output' from case2 92 self._activate_case2() 93 da_files = glob.glob(os.path.join(rundir2, 'da.log.*')) 94 if da_files is None: 95 logger = logging.getLogger(__name__) 96 path = os.path.join(case_root, 'da.log.*') 97 logger.warning("No DA files in {}".format(path)) 98 99 da_cycles = self._case.get_value("DATA_ASSIMILATION_CYCLES") 100 expect((da_files is not None) and (len(da_files) == da_cycles), 101 "ERROR: There were {:d} DA cycles in run but {:d} DA files were found".format(da_cycles, len(da_files) if da_files is not None else 0)) 102 da_files.sort() 103 cycle_num = 0 104 compset = self._case.get_value("COMPSET") 105 # Special case for DWAV so we can make sure other variables are set 106 is_dwav = '_DWAV' in compset 107 for fname in da_files: 108 found_caseroot = False 109 found_cycle = False 110 found_signal = 0 111 found_init = 0 112 if is_dwav: 113 expected_init = self._case.get_value("NINST_WAV") 114 else: 115 # Expect a signal from every instance of every DA component 116 expected_init = 0 117 for comp in self._case.get_values("COMP_CLASSES"): 118 if self._case.get_value("DATA_ASSIMILATION_{}".format(comp)): 119 expected_init = expected_init + self._case.get_value("NINST_{}".format(comp)) 120 121 # Adjust expected initial run and post-DA numbers 122 if cycle_num == 0: 123 expected_signal = 0 124 else: 125 expected_signal = expected_init 126 expected_init = 0 127 128 with gzip.open(fname, "r") as dfile: 129 for bline in dfile: 130 line = bline.decode("utf-8") 131 expect(not 'ERROR' in line, "ERROR, error line {} found in {}".format(line, fname)) 132 if 'caseroot' in line[0:8]: 133 found_caseroot = True 134 elif 'cycle' in line[0:5]: 135 found_cycle = True 136 expect(int(line[7:]) == cycle_num, 137 "ERROR: Wrong cycle ({:d}) found in {} (expected {:d})".format(int(line[7:]), fname, cycle_num)) 138 elif 'resume signal' in line: 139 found_signal = found_signal + 1 140 expect('Post-DA resume signal found' in line[0:27], 141 "ERROR: bad post-DA message found in {}".format(fname)) 142 elif 'Initial run' in line: 143 found_init = found_init + 1 144 expect('Initial run signal found' in line[0:24], 145 "ERROR: bad Initial run message found in {}".format(fname)) 146 else: 147 expect(False, "ERROR: Unrecognized line ('{}') found in {}".format(line, fname)) 148 149 # End of for loop 150 expect(found_caseroot, "ERROR: No caseroot found in {}".format(fname)) 151 expect(found_cycle, "ERROR: No cycle found in {}".format(fname)) 152 expect(found_signal == expected_signal, 153 "ERROR: Expected {} post-DA resume signal message(s), {} found in {}".format(expected_signal, found_signal, fname)) 154 expect(found_init == expected_init, 155 "ERROR: Expected {} Initial run message(s), {} found in {}".format(expected_init, found_init, fname)) 156 # End of with 157 cycle_num = cycle_num + 1 158 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/scripts/lib/CIME/SystemTests/dae.py b/scripts/lib/CIME/SystemTests/dae.py --- a/scripts/lib/CIME/SystemTests/dae.py +++ b/scripts/lib/CIME/SystemTests/dae.py @@ -84,6 +84,12 @@ # CONTINUE_RUN ends up TRUE, set it back in case this is a re-run. self._case.set_value("CONTINUE_RUN", False) + # Turn off post DA in case this is a re-run + for comp in self._case.get_values("COMP_CLASSES"): + if comp == "ESP": + continue + else: + self._case.set_value("DATA_ASSIMILATION_{}".format(comp), False) # Start normal run here self._activate_case1() SystemTestsCompareTwo.run_phase(self) @@ -115,7 +121,9 @@ # Expect a signal from every instance of every DA component expected_init = 0 for comp in self._case.get_values("COMP_CLASSES"): - if self._case.get_value("DATA_ASSIMILATION_{}".format(comp)): + if comp == "ESP": + continue + elif self._case.get_value("DATA_ASSIMILATION_{}".format(comp)): expected_init = expected_init + self._case.get_value("NINST_{}".format(comp)) # Adjust expected initial run and post-DA numbers @@ -146,12 +154,13 @@ else: expect(False, "ERROR: Unrecognized line ('{}') found in {}".format(line, fname)) - # End of for loop + # End for expect(found_caseroot, "ERROR: No caseroot found in {}".format(fname)) expect(found_cycle, "ERROR: No cycle found in {}".format(fname)) expect(found_signal == expected_signal, "ERROR: Expected {} post-DA resume signal message(s), {} found in {}".format(expected_signal, found_signal, fname)) expect(found_init == expected_init, "ERROR: Expected {} Initial run message(s), {} found in {}".format(expected_init, found_init, fname)) - # End of with + # End with cycle_num = cycle_num + 1 + # End for
{"golden_diff": "diff --git a/scripts/lib/CIME/SystemTests/dae.py b/scripts/lib/CIME/SystemTests/dae.py\n--- a/scripts/lib/CIME/SystemTests/dae.py\n+++ b/scripts/lib/CIME/SystemTests/dae.py\n@@ -84,6 +84,12 @@\n \n # CONTINUE_RUN ends up TRUE, set it back in case this is a re-run.\n self._case.set_value(\"CONTINUE_RUN\", False)\n+ # Turn off post DA in case this is a re-run\n+ for comp in self._case.get_values(\"COMP_CLASSES\"):\n+ if comp == \"ESP\":\n+ continue\n+ else:\n+ self._case.set_value(\"DATA_ASSIMILATION_{}\".format(comp), False)\n # Start normal run here\n self._activate_case1()\n SystemTestsCompareTwo.run_phase(self)\n@@ -115,7 +121,9 @@\n # Expect a signal from every instance of every DA component\n expected_init = 0\n for comp in self._case.get_values(\"COMP_CLASSES\"):\n- if self._case.get_value(\"DATA_ASSIMILATION_{}\".format(comp)):\n+ if comp == \"ESP\":\n+ continue\n+ elif self._case.get_value(\"DATA_ASSIMILATION_{}\".format(comp)):\n expected_init = expected_init + self._case.get_value(\"NINST_{}\".format(comp))\n \n # Adjust expected initial run and post-DA numbers\n@@ -146,12 +154,13 @@\n else:\n expect(False, \"ERROR: Unrecognized line ('{}') found in {}\".format(line, fname))\n \n- # End of for loop\n+ # End for\n expect(found_caseroot, \"ERROR: No caseroot found in {}\".format(fname))\n expect(found_cycle, \"ERROR: No cycle found in {}\".format(fname))\n expect(found_signal == expected_signal,\n \"ERROR: Expected {} post-DA resume signal message(s), {} found in {}\".format(expected_signal, found_signal, fname))\n expect(found_init == expected_init,\n \"ERROR: Expected {} Initial run message(s), {} found in {}\".format(expected_init, found_init, fname))\n- # End of with\n+ # End with\n cycle_num = cycle_num + 1\n+ # End for\n", "issue": "PR #2759 has caused DAE.f19_f19.A to fail during phase RUN\nThe errors can be reproduced on anlworkstation by running the following cime_developer test:\r\n`./create_test DAE.f19_f19.A --machine=anlworkstation --test-root=****`\r\n\r\nErrors were:\r\n```\r\nException during run:\r\nERROR: ERROR: Expected 1 Initial run message(s), 0 found in DAE.f19_f19.A.anlworkstation_gnu.xxxx/run/case2run/da.log.xxxx.gz\r\nTraceback (most recent call last):\r\n File \"scripts/lib/CIME/SystemTests/system_tests_common.py\", line 145, in run\r\n self.run_phase()\r\n File \"scripts/lib/CIME/SystemTests/dae.py\", line 155, in run_phase\r\n \"ERROR: Expected {} Initial run message(s), {} found in {}\".format(expected_init, found_init, fname))\r\n File \"scripts/lib/CIME/utils.py\", line 130, in expect\r\n raise exc_type(msg)\r\nSystemExit: ERROR: ERROR: Expected 1 Initial run message(s), 0 found in DAE.f19_f19.A.anlworkstation_gnu.xxxx/run/case2run/da.log.xxxx.gz\r\n```\r\nDo we need some settings on anlworkstation to make it work?\n", "before_files": [{"content": "\"\"\"\nImplementation of the CIME data assimilation test:\nCompares standard run with run broken into two data assimilation cycles.\nRuns a simple DA script on each cycle which performs checks but does not\nchange any model state (restart files). 
Compares answers of two runs.\n\n\"\"\"\n\nimport os.path\nimport logging\nimport glob\nimport gzip\n\nimport CIME.XML.standard_module_setup as sms\nfrom CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo\nfrom CIME.utils import expect\n\n###############################################################################\nclass DAE(SystemTestsCompareTwo):\n###############################################################################\n \"\"\"\n Implementation of the CIME data assimilation test:\n Compares standard run with a run broken into two data assimilation cycles.\n Runs a simple DA script on each cycle which performs checks but does not\n change any model state (restart files). Compares answers of two runs.\n Refers to a faux data assimilation script in the\n cime/scripts/data_assimilation directory\n \"\"\"\n\n ###########################################################################\n def __init__(self, case):\n ###########################################################################\n SystemTestsCompareTwo.__init__(self, case,\n separate_builds=False,\n run_two_suffix='da',\n run_one_description='no data assimilation',\n run_two_description='data assimilation')\n\n ###########################################################################\n def _case_one_setup(self):\n ###########################################################################\n # Even though there may be test mods turning on data assimilation,\n # case1 is the control so turn it off\n self._case.set_value(\"DATA_ASSIMILATION_SCRIPT\", \"\")\n self._case.set_value(\"DATA_ASSIMILATION_CYCLES\", 1)\n\n ###########################################################################\n def _case_two_setup(self):\n ###########################################################################\n # Allow testmods to set an assimilation script\n if len(self._case.get_value(\"DATA_ASSIMILATION_SCRIPT\")) == 0:\n # We need to find the scripts/data_assimilation directory\n # LIB_DIR should be our parent dir\n da_dir = os.path.join(os.path.dirname(sms.LIB_DIR), \"data_assimilation\")\n expect(os.path.isdir(da_dir), \"ERROR: da_dir, '{}', does not exist\".format(da_dir))\n da_file = os.path.join(da_dir, \"da_no_data_mod.sh\")\n expect(os.path.isfile(da_file), \"ERROR: da_file, '{}', does not exist\".format(da_file))\n # Set up two data assimilation cycles each half of the full run\n self._case.set_value(\"DATA_ASSIMILATION_SCRIPT\", da_file)\n\n # We need at least 2 DA cycles\n da_cycles = self._case.get_value(\"DATA_ASSIMILATION_CYCLES\")\n if da_cycles < 2:\n da_cycles = 2\n self._case.set_value(\"DATA_ASSIMILATION_CYCLES\", da_cycles)\n stopn = self._case.get_value(\"STOP_N\")\n expect((stopn % da_cycles) == 0, \"ERROR: DAE test with {0} cycles requires that STOP_N be divisible by {0}\".format(da_cycles))\n stopn = int(stopn / da_cycles)\n self._case.set_value(\"STOP_N\", stopn)\n\n self._case.flush()\n\n ###########################################################################\n def run_phase(self): # pylint: disable=arguments-differ\n ###########################################################################\n # Clean up any da.log files in case this is a re-run.\n self._activate_case2()\n case_root = self._get_caseroot2()\n rundir2 = self._case.get_value(\"RUNDIR\")\n da_files = glob.glob(os.path.join(rundir2, 'da.log.*'))\n for file_ in da_files:\n os.remove(file_)\n # End for\n\n # CONTINUE_RUN ends up TRUE, set it back in case this is a re-run.\n self._case.set_value(\"CONTINUE_RUN\", False)\n # 
Start normal run here\n self._activate_case1()\n SystemTestsCompareTwo.run_phase(self)\n\n # Do some checks on the data assimilation 'output' from case2\n self._activate_case2()\n da_files = glob.glob(os.path.join(rundir2, 'da.log.*'))\n if da_files is None:\n logger = logging.getLogger(__name__)\n path = os.path.join(case_root, 'da.log.*')\n logger.warning(\"No DA files in {}\".format(path))\n\n da_cycles = self._case.get_value(\"DATA_ASSIMILATION_CYCLES\")\n expect((da_files is not None) and (len(da_files) == da_cycles),\n \"ERROR: There were {:d} DA cycles in run but {:d} DA files were found\".format(da_cycles, len(da_files) if da_files is not None else 0))\n da_files.sort()\n cycle_num = 0\n compset = self._case.get_value(\"COMPSET\")\n # Special case for DWAV so we can make sure other variables are set\n is_dwav = '_DWAV' in compset\n for fname in da_files:\n found_caseroot = False\n found_cycle = False\n found_signal = 0\n found_init = 0\n if is_dwav:\n expected_init = self._case.get_value(\"NINST_WAV\")\n else:\n # Expect a signal from every instance of every DA component\n expected_init = 0\n for comp in self._case.get_values(\"COMP_CLASSES\"):\n if self._case.get_value(\"DATA_ASSIMILATION_{}\".format(comp)):\n expected_init = expected_init + self._case.get_value(\"NINST_{}\".format(comp))\n\n # Adjust expected initial run and post-DA numbers\n if cycle_num == 0:\n expected_signal = 0\n else:\n expected_signal = expected_init\n expected_init = 0\n\n with gzip.open(fname, \"r\") as dfile:\n for bline in dfile:\n line = bline.decode(\"utf-8\")\n expect(not 'ERROR' in line, \"ERROR, error line {} found in {}\".format(line, fname))\n if 'caseroot' in line[0:8]:\n found_caseroot = True\n elif 'cycle' in line[0:5]:\n found_cycle = True\n expect(int(line[7:]) == cycle_num,\n \"ERROR: Wrong cycle ({:d}) found in {} (expected {:d})\".format(int(line[7:]), fname, cycle_num))\n elif 'resume signal' in line:\n found_signal = found_signal + 1\n expect('Post-DA resume signal found' in line[0:27],\n \"ERROR: bad post-DA message found in {}\".format(fname))\n elif 'Initial run' in line:\n found_init = found_init + 1\n expect('Initial run signal found' in line[0:24],\n \"ERROR: bad Initial run message found in {}\".format(fname))\n else:\n expect(False, \"ERROR: Unrecognized line ('{}') found in {}\".format(line, fname))\n\n # End of for loop\n expect(found_caseroot, \"ERROR: No caseroot found in {}\".format(fname))\n expect(found_cycle, \"ERROR: No cycle found in {}\".format(fname))\n expect(found_signal == expected_signal,\n \"ERROR: Expected {} post-DA resume signal message(s), {} found in {}\".format(expected_signal, found_signal, fname))\n expect(found_init == expected_init,\n \"ERROR: Expected {} Initial run message(s), {} found in {}\".format(expected_init, found_init, fname))\n # End of with\n cycle_num = cycle_num + 1\n", "path": "scripts/lib/CIME/SystemTests/dae.py"}], "after_files": [{"content": "\"\"\"\nImplementation of the CIME data assimilation test:\nCompares standard run with run broken into two data assimilation cycles.\nRuns a simple DA script on each cycle which performs checks but does not\nchange any model state (restart files). 
Compares answers of two runs.\n\n\"\"\"\n\nimport os.path\nimport logging\nimport glob\nimport gzip\n\nimport CIME.XML.standard_module_setup as sms\nfrom CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo\nfrom CIME.utils import expect\n\n###############################################################################\nclass DAE(SystemTestsCompareTwo):\n###############################################################################\n \"\"\"\n Implementation of the CIME data assimilation test:\n Compares standard run with a run broken into two data assimilation cycles.\n Runs a simple DA script on each cycle which performs checks but does not\n change any model state (restart files). Compares answers of two runs.\n Refers to a faux data assimilation script in the\n cime/scripts/data_assimilation directory\n \"\"\"\n\n ###########################################################################\n def __init__(self, case):\n ###########################################################################\n SystemTestsCompareTwo.__init__(self, case,\n separate_builds=False,\n run_two_suffix='da',\n run_one_description='no data assimilation',\n run_two_description='data assimilation')\n\n ###########################################################################\n def _case_one_setup(self):\n ###########################################################################\n # Even though there may be test mods turning on data assimilation,\n # case1 is the control so turn it off\n self._case.set_value(\"DATA_ASSIMILATION_SCRIPT\", \"\")\n self._case.set_value(\"DATA_ASSIMILATION_CYCLES\", 1)\n\n ###########################################################################\n def _case_two_setup(self):\n ###########################################################################\n # Allow testmods to set an assimilation script\n if len(self._case.get_value(\"DATA_ASSIMILATION_SCRIPT\")) == 0:\n # We need to find the scripts/data_assimilation directory\n # LIB_DIR should be our parent dir\n da_dir = os.path.join(os.path.dirname(sms.LIB_DIR), \"data_assimilation\")\n expect(os.path.isdir(da_dir), \"ERROR: da_dir, '{}', does not exist\".format(da_dir))\n da_file = os.path.join(da_dir, \"da_no_data_mod.sh\")\n expect(os.path.isfile(da_file), \"ERROR: da_file, '{}', does not exist\".format(da_file))\n # Set up two data assimilation cycles each half of the full run\n self._case.set_value(\"DATA_ASSIMILATION_SCRIPT\", da_file)\n\n # We need at least 2 DA cycles\n da_cycles = self._case.get_value(\"DATA_ASSIMILATION_CYCLES\")\n if da_cycles < 2:\n da_cycles = 2\n self._case.set_value(\"DATA_ASSIMILATION_CYCLES\", da_cycles)\n stopn = self._case.get_value(\"STOP_N\")\n expect((stopn % da_cycles) == 0, \"ERROR: DAE test with {0} cycles requires that STOP_N be divisible by {0}\".format(da_cycles))\n stopn = int(stopn / da_cycles)\n self._case.set_value(\"STOP_N\", stopn)\n\n self._case.flush()\n\n ###########################################################################\n def run_phase(self): # pylint: disable=arguments-differ\n ###########################################################################\n # Clean up any da.log files in case this is a re-run.\n self._activate_case2()\n case_root = self._get_caseroot2()\n rundir2 = self._case.get_value(\"RUNDIR\")\n da_files = glob.glob(os.path.join(rundir2, 'da.log.*'))\n for file_ in da_files:\n os.remove(file_)\n # End for\n\n # CONTINUE_RUN ends up TRUE, set it back in case this is a re-run.\n self._case.set_value(\"CONTINUE_RUN\", False)\n # 
Turn off post DA in case this is a re-run\n for comp in self._case.get_values(\"COMP_CLASSES\"):\n if comp == \"ESP\":\n continue\n else:\n self._case.set_value(\"DATA_ASSIMILATION_{}\".format(comp), False)\n # Start normal run here\n self._activate_case1()\n SystemTestsCompareTwo.run_phase(self)\n\n # Do some checks on the data assimilation 'output' from case2\n self._activate_case2()\n da_files = glob.glob(os.path.join(rundir2, 'da.log.*'))\n if da_files is None:\n logger = logging.getLogger(__name__)\n path = os.path.join(case_root, 'da.log.*')\n logger.warning(\"No DA files in {}\".format(path))\n\n da_cycles = self._case.get_value(\"DATA_ASSIMILATION_CYCLES\")\n expect((da_files is not None) and (len(da_files) == da_cycles),\n \"ERROR: There were {:d} DA cycles in run but {:d} DA files were found\".format(da_cycles, len(da_files) if da_files is not None else 0))\n da_files.sort()\n cycle_num = 0\n compset = self._case.get_value(\"COMPSET\")\n # Special case for DWAV so we can make sure other variables are set\n is_dwav = '_DWAV' in compset\n for fname in da_files:\n found_caseroot = False\n found_cycle = False\n found_signal = 0\n found_init = 0\n if is_dwav:\n expected_init = self._case.get_value(\"NINST_WAV\")\n else:\n # Expect a signal from every instance of every DA component\n expected_init = 0\n for comp in self._case.get_values(\"COMP_CLASSES\"):\n if comp == \"ESP\":\n continue\n elif self._case.get_value(\"DATA_ASSIMILATION_{}\".format(comp)):\n expected_init = expected_init + self._case.get_value(\"NINST_{}\".format(comp))\n\n # Adjust expected initial run and post-DA numbers\n if cycle_num == 0:\n expected_signal = 0\n else:\n expected_signal = expected_init\n expected_init = 0\n\n with gzip.open(fname, \"r\") as dfile:\n for bline in dfile:\n line = bline.decode(\"utf-8\")\n expect(not 'ERROR' in line, \"ERROR, error line {} found in {}\".format(line, fname))\n if 'caseroot' in line[0:8]:\n found_caseroot = True\n elif 'cycle' in line[0:5]:\n found_cycle = True\n expect(int(line[7:]) == cycle_num,\n \"ERROR: Wrong cycle ({:d}) found in {} (expected {:d})\".format(int(line[7:]), fname, cycle_num))\n elif 'resume signal' in line:\n found_signal = found_signal + 1\n expect('Post-DA resume signal found' in line[0:27],\n \"ERROR: bad post-DA message found in {}\".format(fname))\n elif 'Initial run' in line:\n found_init = found_init + 1\n expect('Initial run signal found' in line[0:24],\n \"ERROR: bad Initial run message found in {}\".format(fname))\n else:\n expect(False, \"ERROR: Unrecognized line ('{}') found in {}\".format(line, fname))\n\n # End for\n expect(found_caseroot, \"ERROR: No caseroot found in {}\".format(fname))\n expect(found_cycle, \"ERROR: No cycle found in {}\".format(fname))\n expect(found_signal == expected_signal,\n \"ERROR: Expected {} post-DA resume signal message(s), {} found in {}\".format(expected_signal, found_signal, fname))\n expect(found_init == expected_init,\n \"ERROR: Expected {} Initial run message(s), {} found in {}\".format(expected_init, found_init, fname))\n # End with\n cycle_num = cycle_num + 1\n # End for\n", "path": "scripts/lib/CIME/SystemTests/dae.py"}]}
2,553
500
gh_patches_debug_39639
rasdani/github-patches
git_diff
DataDog__dd-trace-py-959
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Stuck in "Waiting for GIL" Hi, We found a thread that seems blocked forever: ``` gdb python 30107 ``` Here's the single thread: ``` (gdb) info threads Id Target Id Frame * 1 Thread 0x7f1dd852e4c0 (LWP 30107) "/opt/simon/rele" 0x00007f1dd7d2e146 in do_futex_wait.constprop () from /lib64/libpthread.so.0 ``` Here's the backtrace: ``` raceback (most recent call first): Waiting for the GIL File "/opt/simon/releases/b038662ce7d140609b1e4883a3ea0edf77851537/venv/lib/python2.7/site-packages/ddtrace/sampler.py", line 88, in sample with self._lock: File "/opt/simon/releases/b038662ce7d140609b1e4883a3ea0edf77851537/venv/lib/python2.7/site-packages/ddtrace/tracer.py", line 251, in start_span if self.priority_sampler.sample(span): File "/opt/simon/releases/b038662ce7d140609b1e4883a3ea0edf77851537/venv/lib/python2.7/site-packages/ddtrace/tracer.py", line 376, in trace span_type=span_type, File "/opt/simon/releases/b038662ce7d140609b1e4883a3ea0edf77851537/venv/lib/python2.7/site-packages/ddtrace/contrib/django/cache.py", line 56, in wrapped with tracer.trace('django.cache', span_type=TYPE, service=cache_service_name) as span: ``` I'm not sure how to troubleshoot from here. Have you seen this before, or have any suggestions on what I could look at? It's the only thread in process so nothing should be holding the GIL. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ddtrace/sampler.py` Content: ``` 1 """Samplers manage the client-side trace sampling 2 3 Any `sampled = False` trace won't be written, and can be ignored by the instrumentation. 4 """ 5 from threading import Lock 6 7 from .compat import iteritems 8 from .internal.logger import get_logger 9 10 log = get_logger(__name__) 11 12 MAX_TRACE_ID = 2 ** 64 13 14 # Has to be the same factor and key as the Agent to allow chained sampling 15 KNUTH_FACTOR = 1111111111111111111 16 17 18 class AllSampler(object): 19 """Sampler sampling all the traces""" 20 21 def sample(self, span): 22 return True 23 24 25 class RateSampler(object): 26 """Sampler based on a rate 27 28 Keep (100 * `sample_rate`)% of the traces. 29 It samples randomly, its main purpose is to reduce the instrumentation footprint. 30 """ 31 32 def __init__(self, sample_rate=1): 33 if sample_rate <= 0: 34 log.error('sample_rate is negative or null, disable the Sampler') 35 sample_rate = 1 36 elif sample_rate > 1: 37 sample_rate = 1 38 39 self.set_sample_rate(sample_rate) 40 41 log.debug('initialized RateSampler, sample %s%% of traces', 100 * sample_rate) 42 43 def set_sample_rate(self, sample_rate): 44 self.sample_rate = sample_rate 45 self.sampling_id_threshold = sample_rate * MAX_TRACE_ID 46 47 def sample(self, span): 48 sampled = ((span.trace_id * KNUTH_FACTOR) % MAX_TRACE_ID) <= self.sampling_id_threshold 49 50 return sampled 51 52 53 def _key(service=None, env=None): 54 service = service or '' 55 env = env or '' 56 return 'service:' + service + ',env:' + env 57 58 59 _default_key = _key() 60 61 62 class RateByServiceSampler(object): 63 """Sampler based on a rate, by service 64 65 Keep (100 * `sample_rate`)% of the traces. 66 The sample rate is kept independently for each service/env tuple. 
67 """ 68 69 def __init__(self, sample_rate=1): 70 self._lock = Lock() 71 self._by_service_samplers = {} 72 self._by_service_samplers[_default_key] = RateSampler(sample_rate) 73 74 def _set_sample_rate_by_key(self, sample_rate, key): 75 with self._lock: 76 if key in self._by_service_samplers: 77 self._by_service_samplers[key].set_sample_rate(sample_rate) 78 else: 79 self._by_service_samplers[key] = RateSampler(sample_rate) 80 81 def set_sample_rate(self, sample_rate, service='', env=''): 82 self._set_sample_rate_by_key(sample_rate, _key(service, env)) 83 84 def sample(self, span): 85 tags = span.tracer().tags 86 env = tags['env'] if 'env' in tags else None 87 key = _key(span.service, env) 88 with self._lock: 89 if key in self._by_service_samplers: 90 return self._by_service_samplers[key].sample(span) 91 return self._by_service_samplers[_default_key].sample(span) 92 93 def set_sample_rate_by_service(self, rate_by_service): 94 for key, sample_rate in iteritems(rate_by_service): 95 self._set_sample_rate_by_key(sample_rate, key) 96 with self._lock: 97 for key in list(self._by_service_samplers): 98 if key not in rate_by_service and key != _default_key: 99 del self._by_service_samplers[key] 100 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ddtrace/sampler.py b/ddtrace/sampler.py --- a/ddtrace/sampler.py +++ b/ddtrace/sampler.py @@ -2,8 +2,6 @@ Any `sampled = False` trace won't be written, and can be ignored by the instrumentation. """ -from threading import Lock - from .compat import iteritems from .internal.logger import get_logger @@ -50,15 +48,6 @@ return sampled -def _key(service=None, env=None): - service = service or '' - env = env or '' - return 'service:' + service + ',env:' + env - - -_default_key = _key() - - class RateByServiceSampler(object): """Sampler based on a rate, by service @@ -66,34 +55,40 @@ The sample rate is kept independently for each service/env tuple. """ + @staticmethod + def _key(service=None, env=None): + """Compute a key with the same format used by the Datadog agent API.""" + service = service or '' + env = env or '' + return 'service:' + service + ',env:' + env + def __init__(self, sample_rate=1): - self._lock = Lock() - self._by_service_samplers = {} - self._by_service_samplers[_default_key] = RateSampler(sample_rate) + self.sample_rate = sample_rate + self._by_service_samplers = self._get_new_by_service_sampler() - def _set_sample_rate_by_key(self, sample_rate, key): - with self._lock: - if key in self._by_service_samplers: - self._by_service_samplers[key].set_sample_rate(sample_rate) - else: - self._by_service_samplers[key] = RateSampler(sample_rate) + def _get_new_by_service_sampler(self): + return { + self._default_key: RateSampler(self.sample_rate) + } def set_sample_rate(self, sample_rate, service='', env=''): - self._set_sample_rate_by_key(sample_rate, _key(service, env)) + self._by_service_samplers[self._key(service, env)] = RateSampler(sample_rate) def sample(self, span): tags = span.tracer().tags env = tags['env'] if 'env' in tags else None - key = _key(span.service, env) - with self._lock: - if key in self._by_service_samplers: - return self._by_service_samplers[key].sample(span) - return self._by_service_samplers[_default_key].sample(span) + key = self._key(span.service, env) + return self._by_service_samplers.get( + key, self._by_service_samplers[self._default_key] + ).sample(span) def set_sample_rate_by_service(self, rate_by_service): + new_by_service_samplers = self._get_new_by_service_sampler() for key, sample_rate in iteritems(rate_by_service): - self._set_sample_rate_by_key(sample_rate, key) - with self._lock: - for key in list(self._by_service_samplers): - if key not in rate_by_service and key != _default_key: - del self._by_service_samplers[key] + new_by_service_samplers[key] = RateSampler(sample_rate) + + self._by_service_samplers = new_by_service_samplers + + +# Default key for service with no specific rate +RateByServiceSampler._default_key = RateByServiceSampler._key()
{"golden_diff": "diff --git a/ddtrace/sampler.py b/ddtrace/sampler.py\n--- a/ddtrace/sampler.py\n+++ b/ddtrace/sampler.py\n@@ -2,8 +2,6 @@\n \n Any `sampled = False` trace won't be written, and can be ignored by the instrumentation.\n \"\"\"\n-from threading import Lock\n-\n from .compat import iteritems\n from .internal.logger import get_logger\n \n@@ -50,15 +48,6 @@\n return sampled\n \n \n-def _key(service=None, env=None):\n- service = service or ''\n- env = env or ''\n- return 'service:' + service + ',env:' + env\n-\n-\n-_default_key = _key()\n-\n-\n class RateByServiceSampler(object):\n \"\"\"Sampler based on a rate, by service\n \n@@ -66,34 +55,40 @@\n The sample rate is kept independently for each service/env tuple.\n \"\"\"\n \n+ @staticmethod\n+ def _key(service=None, env=None):\n+ \"\"\"Compute a key with the same format used by the Datadog agent API.\"\"\"\n+ service = service or ''\n+ env = env or ''\n+ return 'service:' + service + ',env:' + env\n+\n def __init__(self, sample_rate=1):\n- self._lock = Lock()\n- self._by_service_samplers = {}\n- self._by_service_samplers[_default_key] = RateSampler(sample_rate)\n+ self.sample_rate = sample_rate\n+ self._by_service_samplers = self._get_new_by_service_sampler()\n \n- def _set_sample_rate_by_key(self, sample_rate, key):\n- with self._lock:\n- if key in self._by_service_samplers:\n- self._by_service_samplers[key].set_sample_rate(sample_rate)\n- else:\n- self._by_service_samplers[key] = RateSampler(sample_rate)\n+ def _get_new_by_service_sampler(self):\n+ return {\n+ self._default_key: RateSampler(self.sample_rate)\n+ }\n \n def set_sample_rate(self, sample_rate, service='', env=''):\n- self._set_sample_rate_by_key(sample_rate, _key(service, env))\n+ self._by_service_samplers[self._key(service, env)] = RateSampler(sample_rate)\n \n def sample(self, span):\n tags = span.tracer().tags\n env = tags['env'] if 'env' in tags else None\n- key = _key(span.service, env)\n- with self._lock:\n- if key in self._by_service_samplers:\n- return self._by_service_samplers[key].sample(span)\n- return self._by_service_samplers[_default_key].sample(span)\n+ key = self._key(span.service, env)\n+ return self._by_service_samplers.get(\n+ key, self._by_service_samplers[self._default_key]\n+ ).sample(span)\n \n def set_sample_rate_by_service(self, rate_by_service):\n+ new_by_service_samplers = self._get_new_by_service_sampler()\n for key, sample_rate in iteritems(rate_by_service):\n- self._set_sample_rate_by_key(sample_rate, key)\n- with self._lock:\n- for key in list(self._by_service_samplers):\n- if key not in rate_by_service and key != _default_key:\n- del self._by_service_samplers[key]\n+ new_by_service_samplers[key] = RateSampler(sample_rate)\n+\n+ self._by_service_samplers = new_by_service_samplers\n+\n+\n+# Default key for service with no specific rate\n+RateByServiceSampler._default_key = RateByServiceSampler._key()\n", "issue": "Stuck in \"Waiting for GIL\"\nHi,\r\n\r\nWe found a thread that seems blocked forever:\r\n\r\n```\r\ngdb python 30107\r\n```\r\n\r\nHere's the single thread:\r\n\r\n```\r\n(gdb) info threads\r\n Id Target Id Frame\r\n* 1 Thread 0x7f1dd852e4c0 (LWP 30107) \"/opt/simon/rele\" 0x00007f1dd7d2e146 in do_futex_wait.constprop () from /lib64/libpthread.so.0\r\n```\r\n\r\nHere's the backtrace:\r\n\r\n```\r\nraceback (most recent call first):\r\n Waiting for the GIL\r\n File \"/opt/simon/releases/b038662ce7d140609b1e4883a3ea0edf77851537/venv/lib/python2.7/site-packages/ddtrace/sampler.py\", line 88, in sample\r\n with self._lock:\r\n 
File \"/opt/simon/releases/b038662ce7d140609b1e4883a3ea0edf77851537/venv/lib/python2.7/site-packages/ddtrace/tracer.py\", line 251, in start_span\r\n if self.priority_sampler.sample(span):\r\n File \"/opt/simon/releases/b038662ce7d140609b1e4883a3ea0edf77851537/venv/lib/python2.7/site-packages/ddtrace/tracer.py\", line 376, in trace\r\n span_type=span_type,\r\n File \"/opt/simon/releases/b038662ce7d140609b1e4883a3ea0edf77851537/venv/lib/python2.7/site-packages/ddtrace/contrib/django/cache.py\", line 56, in wrapped\r\n with tracer.trace('django.cache', span_type=TYPE, service=cache_service_name) as span:\r\n```\r\n\r\nI'm not sure how to troubleshoot from here. Have you seen this before, or have any suggestions on what I could look at? It's the only thread in process so nothing should be holding the GIL.\n", "before_files": [{"content": "\"\"\"Samplers manage the client-side trace sampling\n\nAny `sampled = False` trace won't be written, and can be ignored by the instrumentation.\n\"\"\"\nfrom threading import Lock\n\nfrom .compat import iteritems\nfrom .internal.logger import get_logger\n\nlog = get_logger(__name__)\n\nMAX_TRACE_ID = 2 ** 64\n\n# Has to be the same factor and key as the Agent to allow chained sampling\nKNUTH_FACTOR = 1111111111111111111\n\n\nclass AllSampler(object):\n \"\"\"Sampler sampling all the traces\"\"\"\n\n def sample(self, span):\n return True\n\n\nclass RateSampler(object):\n \"\"\"Sampler based on a rate\n\n Keep (100 * `sample_rate`)% of the traces.\n It samples randomly, its main purpose is to reduce the instrumentation footprint.\n \"\"\"\n\n def __init__(self, sample_rate=1):\n if sample_rate <= 0:\n log.error('sample_rate is negative or null, disable the Sampler')\n sample_rate = 1\n elif sample_rate > 1:\n sample_rate = 1\n\n self.set_sample_rate(sample_rate)\n\n log.debug('initialized RateSampler, sample %s%% of traces', 100 * sample_rate)\n\n def set_sample_rate(self, sample_rate):\n self.sample_rate = sample_rate\n self.sampling_id_threshold = sample_rate * MAX_TRACE_ID\n\n def sample(self, span):\n sampled = ((span.trace_id * KNUTH_FACTOR) % MAX_TRACE_ID) <= self.sampling_id_threshold\n\n return sampled\n\n\ndef _key(service=None, env=None):\n service = service or ''\n env = env or ''\n return 'service:' + service + ',env:' + env\n\n\n_default_key = _key()\n\n\nclass RateByServiceSampler(object):\n \"\"\"Sampler based on a rate, by service\n\n Keep (100 * `sample_rate`)% of the traces.\n The sample rate is kept independently for each service/env tuple.\n \"\"\"\n\n def __init__(self, sample_rate=1):\n self._lock = Lock()\n self._by_service_samplers = {}\n self._by_service_samplers[_default_key] = RateSampler(sample_rate)\n\n def _set_sample_rate_by_key(self, sample_rate, key):\n with self._lock:\n if key in self._by_service_samplers:\n self._by_service_samplers[key].set_sample_rate(sample_rate)\n else:\n self._by_service_samplers[key] = RateSampler(sample_rate)\n\n def set_sample_rate(self, sample_rate, service='', env=''):\n self._set_sample_rate_by_key(sample_rate, _key(service, env))\n\n def sample(self, span):\n tags = span.tracer().tags\n env = tags['env'] if 'env' in tags else None\n key = _key(span.service, env)\n with self._lock:\n if key in self._by_service_samplers:\n return self._by_service_samplers[key].sample(span)\n return self._by_service_samplers[_default_key].sample(span)\n\n def set_sample_rate_by_service(self, rate_by_service):\n for key, sample_rate in iteritems(rate_by_service):\n self._set_sample_rate_by_key(sample_rate, key)\n 
with self._lock:\n for key in list(self._by_service_samplers):\n if key not in rate_by_service and key != _default_key:\n del self._by_service_samplers[key]\n", "path": "ddtrace/sampler.py"}], "after_files": [{"content": "\"\"\"Samplers manage the client-side trace sampling\n\nAny `sampled = False` trace won't be written, and can be ignored by the instrumentation.\n\"\"\"\nfrom .compat import iteritems\nfrom .internal.logger import get_logger\n\nlog = get_logger(__name__)\n\nMAX_TRACE_ID = 2 ** 64\n\n# Has to be the same factor and key as the Agent to allow chained sampling\nKNUTH_FACTOR = 1111111111111111111\n\n\nclass AllSampler(object):\n \"\"\"Sampler sampling all the traces\"\"\"\n\n def sample(self, span):\n return True\n\n\nclass RateSampler(object):\n \"\"\"Sampler based on a rate\n\n Keep (100 * `sample_rate`)% of the traces.\n It samples randomly, its main purpose is to reduce the instrumentation footprint.\n \"\"\"\n\n def __init__(self, sample_rate=1):\n if sample_rate <= 0:\n log.error('sample_rate is negative or null, disable the Sampler')\n sample_rate = 1\n elif sample_rate > 1:\n sample_rate = 1\n\n self.set_sample_rate(sample_rate)\n\n log.debug('initialized RateSampler, sample %s%% of traces', 100 * sample_rate)\n\n def set_sample_rate(self, sample_rate):\n self.sample_rate = sample_rate\n self.sampling_id_threshold = sample_rate * MAX_TRACE_ID\n\n def sample(self, span):\n sampled = ((span.trace_id * KNUTH_FACTOR) % MAX_TRACE_ID) <= self.sampling_id_threshold\n\n return sampled\n\n\nclass RateByServiceSampler(object):\n \"\"\"Sampler based on a rate, by service\n\n Keep (100 * `sample_rate`)% of the traces.\n The sample rate is kept independently for each service/env tuple.\n \"\"\"\n\n @staticmethod\n def _key(service=None, env=None):\n \"\"\"Compute a key with the same format used by the Datadog agent API.\"\"\"\n service = service or ''\n env = env or ''\n return 'service:' + service + ',env:' + env\n\n def __init__(self, sample_rate=1):\n self.sample_rate = sample_rate\n self._by_service_samplers = self._get_new_by_service_sampler()\n\n def _get_new_by_service_sampler(self):\n return {\n self._default_key: RateSampler(self.sample_rate)\n }\n\n def set_sample_rate(self, sample_rate, service='', env=''):\n self._by_service_samplers[self._key(service, env)] = RateSampler(sample_rate)\n\n def sample(self, span):\n tags = span.tracer().tags\n env = tags['env'] if 'env' in tags else None\n key = self._key(span.service, env)\n return self._by_service_samplers.get(\n key, self._by_service_samplers[self._default_key]\n ).sample(span)\n\n def set_sample_rate_by_service(self, rate_by_service):\n new_by_service_samplers = self._get_new_by_service_sampler()\n for key, sample_rate in iteritems(rate_by_service):\n new_by_service_samplers[key] = RateSampler(sample_rate)\n\n self._by_service_samplers = new_by_service_samplers\n\n\n# Default key for service with no specific rate\nRateByServiceSampler._default_key = RateByServiceSampler._key()\n", "path": "ddtrace/sampler.py"}]}
1,751
815
gh_patches_debug_10488
rasdani/github-patches
git_diff
PlasmaPy__PlasmaPy-644
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Add binder links to generated example notebooks Stumbled upon at #638 * Binder is a cool tool I'd love to learn more about * https://sphinx-gallery.github.io/configuration.html#binder-links is pretty cool * https://github.com/mne-tools/mne-bids/pull/207/files is an implementation --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `docs/conf.py` Content: ``` 1 #!/usr/bin/env python3.6 2 # -*- coding: utf-8 -*- 3 # 4 # PlasmaPy documentation build configuration file, created by 5 # sphinx-quickstart on Wed May 31 18:16:46 2017. 6 # 7 # This file is execfile()d with the current directory set to its 8 # containing dir. 9 # 10 # Note that not all possible configuration values are present in this 11 # autogenerated file. 12 # 13 # All configuration values have a default; values that are commented out 14 # serve to show the default. 15 16 # If extensions (or modules to document with autodoc) are in another directory, 17 # add these directories to sys.path here. If the directory is relative to the 18 # documentation root, use os.path.abspath to make it absolute, like shown here. 19 # 20 21 import os 22 import sys 23 24 sys.path.insert(0, os.path.abspath('..')) 25 26 # -- General configuration ------------------------------------------------ 27 28 # If your documentation needs a minimal Sphinx version, state it here. 29 # 30 # needs_sphinx = '1.0' 31 32 # Add any Sphinx extension module names here, as strings. They can be 33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 34 # ones. 35 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.mathjax', 36 'sphinx.ext.napoleon', 'sphinx.ext.intersphinx', 37 'sphinx_automodapi.automodapi', 38 'sphinx_automodapi.smart_resolver', 39 'sphinx_gallery.gen_gallery', 40 'sphinx.ext.graphviz'] 41 42 43 intersphinx_mapping = { 44 'python': ('https://docs.python.org/3', None), 45 'numpy': ('https://docs.scipy.org/doc/numpy', None), 46 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None), 47 'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None), 48 'astropy': ('http://docs.astropy.org/en/stable/', None)} 49 # Add any paths that contain templates here, relative to this directory. 50 templates_path = ['_templates'] 51 52 # The suffix(es) of source filenames. 53 # You can specify multiple suffix as a list of string: 54 # 55 # source_suffix = ['.rst', '.md'] 56 source_suffix = '.rst' 57 58 # The master toctree document. 59 master_doc = 'index' 60 61 # General information about the project. 62 project = 'PlasmaPy' 63 copyright = '2015-2019, PlasmaPy Community' 64 author = 'PlasmaPy Community' 65 66 # The version info for the project you're documenting, acts as replacement for 67 # |version| and |release|, also used in various other places throughout the 68 # built documents. 69 # 70 # The short X.Y version. 71 version = '0.2' 72 # The full version, including alpha/beta/rc tags. 73 release = '0.2.0' 74 75 # The language for content autogenerated by Sphinx. Refer to documentation 76 # for a list of supported languages. 77 # 78 # This is also used if you do content translation via gettext catalogs. 79 # Usually you set "language" from the command line for these cases. 80 language = None 81 82 # List of patterns, relative to source directory, that match files and 83 # directories to ignore when looking for source files. 
84 # This patterns also effect to html_static_path and html_extra_path 85 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] 86 87 # The name of the Pygments (syntax highlighting) style to use. 88 pygments_style = 'sphinx' 89 90 # If true, `todo` and `todoList` produce output, else they produce nothing. 91 todo_include_todos = False 92 93 default_role = 'obj' 94 95 # -- Options for HTML output ---------------------------------------------- 96 97 # The theme to use for HTML and HTML Help pages. See the documentation for 98 # a list of builtin themes. 99 # 100 # html_theme = 'alabaster' 101 # html_theme = 'traditional' 102 # html_theme = 'agogo' 103 html_theme = "sphinx_rtd_theme" 104 105 # Theme options are theme-specific and customize the look and feel of a theme 106 # further. For a list of options available for each theme, see the 107 # documentation. 108 # 109 # html_theme_options = {} 110 111 # Add any paths that contain custom static files (such as style sheets) here, 112 # relative to this directory. They are copied after the builtin static files, 113 # so a file named "default.css" will overwrite the builtin "default.css". 114 # html_static_path = ['_static'] 115 116 117 # -- Options for HTMLHelp output ------------------------------------------ 118 119 # Output file base name for HTML help builder. 120 htmlhelp_basename = 'PlasmaPydoc' 121 122 123 # -- Options for LaTeX output --------------------------------------------- 124 125 latex_elements = { 126 # The paper size ('letterpaper' or 'a4paper'). 127 # 128 # 'papersize': 'letterpaper', 129 130 # The font size ('10pt', '11pt' or '12pt'). 131 # 132 # 'pointsize': '10pt', 133 134 # Additional stuff for the LaTeX preamble. 135 # 136 # 'preamble': '', 137 138 # Latex figure (float) alignment 139 # 140 # 'figure_align': 'htbp', 141 } 142 143 # Grouping the document tree into LaTeX files. List of tuples 144 # (source start file, target name, title, 145 # author, documentclass [howto, manual, or own class]). 146 latex_documents = [ 147 (master_doc, 'PlasmaPy.tex', 'PlasmaPy Documentation', 148 'PlasmaPy Community', 'manual'), 149 ] 150 151 152 # -- Options for manual page output --------------------------------------- 153 154 # One entry per manual page. List of tuples 155 # (source start file, name, description, authors, manual section). 156 man_pages = [ 157 (master_doc, 'plasmapy', 'PlasmaPy Documentation', 158 [author], 1) 159 ] 160 161 162 # -- Options for Texinfo output ------------------------------------------- 163 164 # Grouping the document tree into Texinfo files. List of tuples 165 # (source start file, target name, title, author, 166 # dir menu entry, description, category) 167 texinfo_documents = [ 168 (master_doc, 'PlasmaPy', 'PlasmaPy Documentation', 169 author, 'PlasmaPy', 'Python package for plasma physics', 170 'Miscellaneous'), 171 ] 172 173 html_favicon = "./_static/icon.ico" 174 175 # -- Options for Sphinx Gallery ----------------- 176 177 sphinx_gallery_conf = { 178 # path to your examples scripts 179 'examples_dirs': '../plasmapy/examples', 180 # path where to save gallery generated examples 181 'backreferences_dir': 'gen_modules/backreferences', 182 'gallery_dirs': 'auto_examples', 183 } 184 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/docs/conf.py b/docs/conf.py --- a/docs/conf.py +++ b/docs/conf.py @@ -180,4 +180,15 @@ # path where to save gallery generated examples 'backreferences_dir': 'gen_modules/backreferences', 'gallery_dirs': 'auto_examples', + 'binder': { + # Required keys + 'org': 'PlasmaPy', + 'repo': 'PlasmaPy', + 'branch': 'master', # noqa: E501 Can be any branch, tag, or commit hash. Use a branch that hosts your docs. + 'binderhub_url': 'https://mybinder.org', # noqa: E501 Any URL of a binderhub deployment. Must be full URL (e.g. https://mybinder.org). + 'dependencies': [ + '../requirements/environment.yml' + ], + 'use_jupyter_lab': True, + }, }
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -180,4 +180,15 @@\n # path where to save gallery generated examples\n 'backreferences_dir': 'gen_modules/backreferences',\n 'gallery_dirs': 'auto_examples',\n+ 'binder': {\n+ # Required keys\n+ 'org': 'PlasmaPy',\n+ 'repo': 'PlasmaPy',\n+ 'branch': 'master', # noqa: E501 Can be any branch, tag, or commit hash. Use a branch that hosts your docs.\n+ 'binderhub_url': 'https://mybinder.org', # noqa: E501 Any URL of a binderhub deployment. Must be full URL (e.g. https://mybinder.org).\n+ 'dependencies': [\n+ '../requirements/environment.yml'\n+ ],\n+ 'use_jupyter_lab': True,\n+ },\n }\n", "issue": "Add binder links to generated example notebooks\nStumbled upon at #638 \r\n\r\n* Binder is a cool tool I'd love to learn more about\r\n* https://sphinx-gallery.github.io/configuration.html#binder-links is pretty cool\r\n\t* https://github.com/mne-tools/mne-bids/pull/207/files is an implementation\n", "before_files": [{"content": "#!/usr/bin/env python3.6\n# -*- coding: utf-8 -*-\n#\n# PlasmaPy documentation build configuration file, created by\n# sphinx-quickstart on Wed May 31 18:16:46 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath('..'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon', 'sphinx.ext.intersphinx',\n 'sphinx_automodapi.automodapi',\n 'sphinx_automodapi.smart_resolver',\n 'sphinx_gallery.gen_gallery',\n 'sphinx.ext.graphviz']\n\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'numpy': ('https://docs.scipy.org/doc/numpy', None),\n 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),\n 'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None),\n 'astropy': ('http://docs.astropy.org/en/stable/', None)}\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'PlasmaPy'\ncopyright = '2015-2019, PlasmaPy Community'\nauthor = 'PlasmaPy Community'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.2'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.2.0'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\ndefault_role = 'obj'\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\n# html_theme = 'traditional'\n# html_theme = 'agogo'\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'PlasmaPydoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'PlasmaPy.tex', 'PlasmaPy Documentation',\n 'PlasmaPy Community', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'plasmapy', 'PlasmaPy Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'PlasmaPy', 'PlasmaPy Documentation',\n author, 'PlasmaPy', 'Python package for plasma physics',\n 'Miscellaneous'),\n]\n\nhtml_favicon = \"./_static/icon.ico\"\n\n# -- Options for Sphinx Gallery -----------------\n\nsphinx_gallery_conf = {\n # path to your examples scripts\n 'examples_dirs': '../plasmapy/examples',\n # path where to save gallery generated examples\n 'backreferences_dir': 'gen_modules/backreferences',\n 'gallery_dirs': 'auto_examples',\n}\n", "path": "docs/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3.6\n# -*- coding: utf-8 -*-\n#\n# PlasmaPy documentation build configuration file, created by\n# sphinx-quickstart on Wed May 31 18:16:46 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath('..'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon', 'sphinx.ext.intersphinx',\n 'sphinx_automodapi.automodapi',\n 'sphinx_automodapi.smart_resolver',\n 'sphinx_gallery.gen_gallery',\n 'sphinx.ext.graphviz']\n\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'numpy': ('https://docs.scipy.org/doc/numpy', None),\n 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),\n 'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None),\n 'astropy': ('http://docs.astropy.org/en/stable/', None)}\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'PlasmaPy'\ncopyright = '2015-2019, PlasmaPy Community'\nauthor = 'PlasmaPy Community'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.2'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.2.0'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\ndefault_role = 'obj'\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\n# html_theme = 'traditional'\n# html_theme = 'agogo'\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'PlasmaPydoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'PlasmaPy.tex', 'PlasmaPy Documentation',\n 'PlasmaPy Community', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'plasmapy', 'PlasmaPy Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'PlasmaPy', 'PlasmaPy Documentation',\n author, 'PlasmaPy', 'Python package for plasma physics',\n 'Miscellaneous'),\n]\n\nhtml_favicon = \"./_static/icon.ico\"\n\n# -- Options for Sphinx Gallery -----------------\n\nsphinx_gallery_conf = {\n # path to your examples scripts\n 'examples_dirs': '../plasmapy/examples',\n # path where to save gallery generated examples\n 'backreferences_dir': 'gen_modules/backreferences',\n 'gallery_dirs': 'auto_examples',\n 'binder': {\n # Required keys\n 'org': 'PlasmaPy',\n 'repo': 'PlasmaPy',\n 'branch': 'master', # noqa: E501 Can be any branch, tag, or commit hash. 
Use a branch that hosts your docs.\n 'binderhub_url': 'https://mybinder.org', # noqa: E501 Any URL of a binderhub deployment. Must be full URL (e.g. https://mybinder.org).\n 'dependencies': [\n '../requirements/environment.yml'\n ],\n 'use_jupyter_lab': True,\n },\n}\n", "path": "docs/conf.py"}]}
2,206
215
gh_patches_debug_32598
rasdani/github-patches
git_diff
apache__airflow-15680
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- MongoToS3Operator failed when running with a single query (not aggregate pipeline) **Apache Airflow version**: 2.0.2 **What happened**: `MongoToS3Operator` failed when running with a single query (not aggregate pipeline): ```sh Traceback (most recent call last): File "/home/airflow//bin/airflow", line 8, in <module> sys.exit(main()) File "/home/airflow//lib/python3.8/site-packages/airflow/__main__.py", line 40, in main args.func(args) File "/home/airflow//lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command return func(*args, **kwargs) File "/home/airflow//lib/python3.8/site-packages/airflow/utils/cli.py", line 89, in wrapper return f(*args, **kwargs) File "/home/airflow//lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 385, in task_test ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True) File "/home/airflow//lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper return func(*args, session=session, **kwargs) File "/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1413, in run self._run_raw_task( File "/home/airflow//lib/python3.8/site-packages/airflow/utils/session.py", line 67, in wrapper return func(*args, **kwargs) File "/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1138, in _run_raw_task self._prepare_and_execute_task_with_callbacks(context, task) File "/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks result = self._execute_task(context, task_copy) File "/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task result = task_copy.execute(context=context) File "/home/airflow//lib/python3.8/site-packages/airflow/providers/amazon/aws/transfers/mongo_to_s3.py", line 116, in execute results = MongoHook(self.mongo_conn_id).find( File "/home/airflow//lib/python3.8/site-packages/airflow/providers/mongo/hooks/mongo.py", line 144, in find return collection.find(query, **kwargs) File "/home/airflow//lib/python3.8/site-packages/pymongo/collection.py", line 1523, in find return Cursor(self, *args, **kwargs) TypeError: __init__() got an unexpected keyword argument 'allowDiskUse' ``` **What you expected to happen**: I expect the data from MongoDB to be exported to a file in S3 with no errors. **How to reproduce it**: Run the following operator with a single `mongo_query` (no aggregate pipeline): ```python export_to_s3 = MongoToS3Operator( task_id='export_to_s3', mongo_conn_id=Variable.get('mongo_conn_id'), s3_conn_id=Variable.get('aws_conn_id'), mongo_collection='my_mongo_collection', mongo_query={}, s3_bucket=Variable.get('s3_bucket'), s3_key="my_data.json", replace=True, dag=dag, ) ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `airflow/providers/amazon/aws/transfers/mongo_to_s3.py` Content: ``` 1 # 2 # Licensed to the Apache Software Foundation (ASF) under one 3 # or more contributor license agreements. See the NOTICE file 4 # distributed with this work for additional information 5 # regarding copyright ownership. The ASF licenses this file 6 # to you under the Apache License, Version 2.0 (the 7 # "License"); you may not use this file except in compliance 8 # with the License. 
You may obtain a copy of the License at 9 # 10 # http://www.apache.org/licenses/LICENSE-2.0 11 # 12 # Unless required by applicable law or agreed to in writing, 13 # software distributed under the License is distributed on an 14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 # KIND, either express or implied. See the License for the 16 # specific language governing permissions and limitations 17 # under the License. 18 import json 19 import warnings 20 from typing import Any, Iterable, Optional, Union, cast 21 22 from bson import json_util 23 24 from airflow.models import BaseOperator 25 from airflow.providers.amazon.aws.hooks.s3 import S3Hook 26 from airflow.providers.mongo.hooks.mongo import MongoHook 27 28 _DEPRECATION_MSG = ( 29 "The s3_conn_id parameter has been deprecated. You should pass instead the aws_conn_id parameter." 30 ) 31 32 33 class MongoToS3Operator(BaseOperator): 34 """Operator meant to move data from mongo via pymongo to s3 via boto. 35 36 :param mongo_conn_id: reference to a specific mongo connection 37 :type mongo_conn_id: str 38 :param aws_conn_id: reference to a specific S3 connection 39 :type aws_conn_id: str 40 :param mongo_collection: reference to a specific collection in your mongo db 41 :type mongo_collection: str 42 :param mongo_query: query to execute. A list including a dict of the query 43 :type mongo_query: list 44 :param s3_bucket: reference to a specific S3 bucket to store the data 45 :type s3_bucket: str 46 :param s3_key: in which S3 key the file will be stored 47 :type s3_key: str 48 :param mongo_db: reference to a specific mongo database 49 :type mongo_db: str 50 :param replace: whether or not to replace the file in S3 if it previously existed 51 :type replace: bool 52 :param allow_disk_use: in the case you are retrieving a lot of data, you may have 53 to use the disk to save it instead of saving all in the RAM 54 :type allow_disk_use: bool 55 :param compression: type of compression to use for output file in S3. Currently only gzip is supported. 
56 :type compression: str 57 """ 58 59 template_fields = ('s3_bucket', 's3_key', 'mongo_query', 'mongo_collection') 60 ui_color = '#589636' 61 template_fields_renderers = {"mongo_query": "py"} 62 # pylint: disable=too-many-instance-attributes 63 64 def __init__( 65 self, 66 *, 67 s3_conn_id: Optional[str] = None, 68 mongo_conn_id: str = 'mongo_default', 69 aws_conn_id: str = 'aws_default', 70 mongo_collection: str, 71 mongo_query: Union[list, dict], 72 s3_bucket: str, 73 s3_key: str, 74 mongo_db: Optional[str] = None, 75 replace: bool = False, 76 allow_disk_use: bool = False, 77 compression: Optional[str] = None, 78 **kwargs, 79 ) -> None: 80 super().__init__(**kwargs) 81 if s3_conn_id: 82 warnings.warn(_DEPRECATION_MSG, DeprecationWarning, stacklevel=3) 83 aws_conn_id = s3_conn_id 84 85 self.mongo_conn_id = mongo_conn_id 86 self.aws_conn_id = aws_conn_id 87 self.mongo_db = mongo_db 88 self.mongo_collection = mongo_collection 89 90 # Grab query and determine if we need to run an aggregate pipeline 91 self.mongo_query = mongo_query 92 self.is_pipeline = isinstance(self.mongo_query, list) 93 94 self.s3_bucket = s3_bucket 95 self.s3_key = s3_key 96 self.replace = replace 97 self.allow_disk_use = allow_disk_use 98 self.compression = compression 99 100 def execute(self, context) -> bool: 101 """Is written to depend on transform method""" 102 s3_conn = S3Hook(self.aws_conn_id) 103 104 # Grab collection and execute query according to whether or not it is a pipeline 105 if self.is_pipeline: 106 results = MongoHook(self.mongo_conn_id).aggregate( 107 mongo_collection=self.mongo_collection, 108 aggregate_query=cast(list, self.mongo_query), 109 mongo_db=self.mongo_db, 110 allowDiskUse=self.allow_disk_use, 111 ) 112 113 else: 114 results = MongoHook(self.mongo_conn_id).find( 115 mongo_collection=self.mongo_collection, 116 query=cast(dict, self.mongo_query), 117 mongo_db=self.mongo_db, 118 allowDiskUse=self.allow_disk_use, 119 ) 120 121 # Performs transform then stringifies the docs results into json format 122 docs_str = self._stringify(self.transform(results)) 123 124 s3_conn.load_string( 125 string_data=docs_str, 126 key=self.s3_key, 127 bucket_name=self.s3_bucket, 128 replace=self.replace, 129 compression=self.compression, 130 ) 131 132 @staticmethod 133 def _stringify(iterable: Iterable, joinable: str = '\n') -> str: 134 """ 135 Takes an iterable (pymongo Cursor or Array) containing dictionaries and 136 returns a stringified version using python join 137 """ 138 return joinable.join([json.dumps(doc, default=json_util.default) for doc in iterable]) 139 140 @staticmethod 141 def transform(docs: Any) -> Any: 142 """This method is meant to be extended by child classes 143 to perform transformations unique to those operators needs. 144 Processes pyMongo cursor and returns an iterable with each element being 145 a JSON serializable dictionary 146 147 Base transform() assumes no processing is needed 148 ie. docs is a pyMongo cursor of documents and cursor just 149 needs to be passed through 150 151 Override this method for custom transformations 152 """ 153 return docs 154 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/airflow/providers/amazon/aws/transfers/mongo_to_s3.py b/airflow/providers/amazon/aws/transfers/mongo_to_s3.py --- a/airflow/providers/amazon/aws/transfers/mongo_to_s3.py +++ b/airflow/providers/amazon/aws/transfers/mongo_to_s3.py @@ -40,7 +40,7 @@ :param mongo_collection: reference to a specific collection in your mongo db :type mongo_collection: str :param mongo_query: query to execute. A list including a dict of the query - :type mongo_query: list + :type mongo_query: Union[list, dict] :param s3_bucket: reference to a specific S3 bucket to store the data :type s3_bucket: str :param s3_key: in which S3 key the file will be stored @@ -49,8 +49,8 @@ :type mongo_db: str :param replace: whether or not to replace the file in S3 if it previously existed :type replace: bool - :param allow_disk_use: in the case you are retrieving a lot of data, you may have - to use the disk to save it instead of saving all in the RAM + :param allow_disk_use: enables writing to temporary files in the case you are handling large dataset. + This only takes effect when `mongo_query` is a list - running an aggregate pipeline :type allow_disk_use: bool :param compression: type of compression to use for output file in S3. Currently only gzip is supported. :type compression: str @@ -115,7 +115,6 @@ mongo_collection=self.mongo_collection, query=cast(dict, self.mongo_query), mongo_db=self.mongo_db, - allowDiskUse=self.allow_disk_use, ) # Performs transform then stringifies the docs results into json format
{"golden_diff": "diff --git a/airflow/providers/amazon/aws/transfers/mongo_to_s3.py b/airflow/providers/amazon/aws/transfers/mongo_to_s3.py\n--- a/airflow/providers/amazon/aws/transfers/mongo_to_s3.py\n+++ b/airflow/providers/amazon/aws/transfers/mongo_to_s3.py\n@@ -40,7 +40,7 @@\n :param mongo_collection: reference to a specific collection in your mongo db\n :type mongo_collection: str\n :param mongo_query: query to execute. A list including a dict of the query\n- :type mongo_query: list\n+ :type mongo_query: Union[list, dict]\n :param s3_bucket: reference to a specific S3 bucket to store the data\n :type s3_bucket: str\n :param s3_key: in which S3 key the file will be stored\n@@ -49,8 +49,8 @@\n :type mongo_db: str\n :param replace: whether or not to replace the file in S3 if it previously existed\n :type replace: bool\n- :param allow_disk_use: in the case you are retrieving a lot of data, you may have\n- to use the disk to save it instead of saving all in the RAM\n+ :param allow_disk_use: enables writing to temporary files in the case you are handling large dataset.\n+ This only takes effect when `mongo_query` is a list - running an aggregate pipeline\n :type allow_disk_use: bool\n :param compression: type of compression to use for output file in S3. Currently only gzip is supported.\n :type compression: str\n@@ -115,7 +115,6 @@\n mongo_collection=self.mongo_collection,\n query=cast(dict, self.mongo_query),\n mongo_db=self.mongo_db,\n- allowDiskUse=self.allow_disk_use,\n )\n \n # Performs transform then stringifies the docs results into json format\n", "issue": "MongoToS3Operator failed when running with a single query (not aggregate pipeline)\n**Apache Airflow version**: 2.0.2\r\n\r\n**What happened**:\r\n\r\n`MongoToS3Operator` failed when running with a single query (not aggregate pipeline):\r\n\r\n```sh\r\nTraceback (most recent call last):\r\n File \"/home/airflow//bin/airflow\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/__main__.py\", line 40, in main\r\n args.func(args)\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/cli/cli_parser.py\", line 48, in command\r\n return func(*args, **kwargs)\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/utils/cli.py\", line 89, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/cli/commands/task_command.py\", line 385, in task_test\r\n ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/utils/session.py\", line 70, in wrapper\r\n return func(*args, session=session, **kwargs)\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py\", line 1413, in run\r\n self._run_raw_task(\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/utils/session.py\", line 67, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py\", line 1138, in _run_raw_task\r\n self._prepare_and_execute_task_with_callbacks(context, task)\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py\", line 1311, in _prepare_and_execute_task_with_callbacks\r\n result = self._execute_task(context, task_copy)\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py\", line 1341, in _execute_task\r\n result = task_copy.execute(context=context)\r\n File 
\"/home/airflow//lib/python3.8/site-packages/airflow/providers/amazon/aws/transfers/mongo_to_s3.py\", line 116, in execute\r\n results = MongoHook(self.mongo_conn_id).find(\r\n File \"/home/airflow//lib/python3.8/site-packages/airflow/providers/mongo/hooks/mongo.py\", line 144, in find\r\n return collection.find(query, **kwargs)\r\n File \"/home/airflow//lib/python3.8/site-packages/pymongo/collection.py\", line 1523, in find\r\n return Cursor(self, *args, **kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'allowDiskUse'\r\n```\r\n\r\n**What you expected to happen**:\r\n\r\nI expect the data from MongoDB to be exported to a file in S3 with no errors.\r\n\r\n**How to reproduce it**:\r\n\r\nRun the following operator with a single `mongo_query` (no aggregate pipeline):\r\n\r\n```python\r\nexport_to_s3 = MongoToS3Operator(\r\n task_id='export_to_s3',\r\n mongo_conn_id=Variable.get('mongo_conn_id'),\r\n s3_conn_id=Variable.get('aws_conn_id'),\r\n mongo_collection='my_mongo_collection',\r\n mongo_query={},\r\n s3_bucket=Variable.get('s3_bucket'),\r\n s3_key=\"my_data.json\",\r\n replace=True,\r\n dag=dag,\r\n)\r\n```\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\nimport json\nimport warnings\nfrom typing import Any, Iterable, Optional, Union, cast\n\nfrom bson import json_util\n\nfrom airflow.models import BaseOperator\nfrom airflow.providers.amazon.aws.hooks.s3 import S3Hook\nfrom airflow.providers.mongo.hooks.mongo import MongoHook\n\n_DEPRECATION_MSG = (\n \"The s3_conn_id parameter has been deprecated. You should pass instead the aws_conn_id parameter.\"\n)\n\n\nclass MongoToS3Operator(BaseOperator):\n \"\"\"Operator meant to move data from mongo via pymongo to s3 via boto.\n\n :param mongo_conn_id: reference to a specific mongo connection\n :type mongo_conn_id: str\n :param aws_conn_id: reference to a specific S3 connection\n :type aws_conn_id: str\n :param mongo_collection: reference to a specific collection in your mongo db\n :type mongo_collection: str\n :param mongo_query: query to execute. A list including a dict of the query\n :type mongo_query: list\n :param s3_bucket: reference to a specific S3 bucket to store the data\n :type s3_bucket: str\n :param s3_key: in which S3 key the file will be stored\n :type s3_key: str\n :param mongo_db: reference to a specific mongo database\n :type mongo_db: str\n :param replace: whether or not to replace the file in S3 if it previously existed\n :type replace: bool\n :param allow_disk_use: in the case you are retrieving a lot of data, you may have\n to use the disk to save it instead of saving all in the RAM\n :type allow_disk_use: bool\n :param compression: type of compression to use for output file in S3. 
Currently only gzip is supported.\n :type compression: str\n \"\"\"\n\n template_fields = ('s3_bucket', 's3_key', 'mongo_query', 'mongo_collection')\n ui_color = '#589636'\n template_fields_renderers = {\"mongo_query\": \"py\"}\n # pylint: disable=too-many-instance-attributes\n\n def __init__(\n self,\n *,\n s3_conn_id: Optional[str] = None,\n mongo_conn_id: str = 'mongo_default',\n aws_conn_id: str = 'aws_default',\n mongo_collection: str,\n mongo_query: Union[list, dict],\n s3_bucket: str,\n s3_key: str,\n mongo_db: Optional[str] = None,\n replace: bool = False,\n allow_disk_use: bool = False,\n compression: Optional[str] = None,\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n if s3_conn_id:\n warnings.warn(_DEPRECATION_MSG, DeprecationWarning, stacklevel=3)\n aws_conn_id = s3_conn_id\n\n self.mongo_conn_id = mongo_conn_id\n self.aws_conn_id = aws_conn_id\n self.mongo_db = mongo_db\n self.mongo_collection = mongo_collection\n\n # Grab query and determine if we need to run an aggregate pipeline\n self.mongo_query = mongo_query\n self.is_pipeline = isinstance(self.mongo_query, list)\n\n self.s3_bucket = s3_bucket\n self.s3_key = s3_key\n self.replace = replace\n self.allow_disk_use = allow_disk_use\n self.compression = compression\n\n def execute(self, context) -> bool:\n \"\"\"Is written to depend on transform method\"\"\"\n s3_conn = S3Hook(self.aws_conn_id)\n\n # Grab collection and execute query according to whether or not it is a pipeline\n if self.is_pipeline:\n results = MongoHook(self.mongo_conn_id).aggregate(\n mongo_collection=self.mongo_collection,\n aggregate_query=cast(list, self.mongo_query),\n mongo_db=self.mongo_db,\n allowDiskUse=self.allow_disk_use,\n )\n\n else:\n results = MongoHook(self.mongo_conn_id).find(\n mongo_collection=self.mongo_collection,\n query=cast(dict, self.mongo_query),\n mongo_db=self.mongo_db,\n allowDiskUse=self.allow_disk_use,\n )\n\n # Performs transform then stringifies the docs results into json format\n docs_str = self._stringify(self.transform(results))\n\n s3_conn.load_string(\n string_data=docs_str,\n key=self.s3_key,\n bucket_name=self.s3_bucket,\n replace=self.replace,\n compression=self.compression,\n )\n\n @staticmethod\n def _stringify(iterable: Iterable, joinable: str = '\\n') -> str:\n \"\"\"\n Takes an iterable (pymongo Cursor or Array) containing dictionaries and\n returns a stringified version using python join\n \"\"\"\n return joinable.join([json.dumps(doc, default=json_util.default) for doc in iterable])\n\n @staticmethod\n def transform(docs: Any) -> Any:\n \"\"\"This method is meant to be extended by child classes\n to perform transformations unique to those operators needs.\n Processes pyMongo cursor and returns an iterable with each element being\n a JSON serializable dictionary\n\n Base transform() assumes no processing is needed\n ie. docs is a pyMongo cursor of documents and cursor just\n needs to be passed through\n\n Override this method for custom transformations\n \"\"\"\n return docs\n", "path": "airflow/providers/amazon/aws/transfers/mongo_to_s3.py"}], "after_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\nimport json\nimport warnings\nfrom typing import Any, Iterable, Optional, Union, cast\n\nfrom bson import json_util\n\nfrom airflow.models import BaseOperator\nfrom airflow.providers.amazon.aws.hooks.s3 import S3Hook\nfrom airflow.providers.mongo.hooks.mongo import MongoHook\n\n_DEPRECATION_MSG = (\n \"The s3_conn_id parameter has been deprecated. You should pass instead the aws_conn_id parameter.\"\n)\n\n\nclass MongoToS3Operator(BaseOperator):\n \"\"\"Operator meant to move data from mongo via pymongo to s3 via boto.\n\n :param mongo_conn_id: reference to a specific mongo connection\n :type mongo_conn_id: str\n :param aws_conn_id: reference to a specific S3 connection\n :type aws_conn_id: str\n :param mongo_collection: reference to a specific collection in your mongo db\n :type mongo_collection: str\n :param mongo_query: query to execute. A list including a dict of the query\n :type mongo_query: Union[list, dict]\n :param s3_bucket: reference to a specific S3 bucket to store the data\n :type s3_bucket: str\n :param s3_key: in which S3 key the file will be stored\n :type s3_key: str\n :param mongo_db: reference to a specific mongo database\n :type mongo_db: str\n :param replace: whether or not to replace the file in S3 if it previously existed\n :type replace: bool\n :param allow_disk_use: enables writing to temporary files in the case you are handling large dataset.\n This only takes effect when `mongo_query` is a list - running an aggregate pipeline\n :type allow_disk_use: bool\n :param compression: type of compression to use for output file in S3. 
Currently only gzip is supported.\n :type compression: str\n \"\"\"\n\n template_fields = ('s3_bucket', 's3_key', 'mongo_query', 'mongo_collection')\n ui_color = '#589636'\n template_fields_renderers = {\"mongo_query\": \"py\"}\n # pylint: disable=too-many-instance-attributes\n\n def __init__(\n self,\n *,\n s3_conn_id: Optional[str] = None,\n mongo_conn_id: str = 'mongo_default',\n aws_conn_id: str = 'aws_default',\n mongo_collection: str,\n mongo_query: Union[list, dict],\n s3_bucket: str,\n s3_key: str,\n mongo_db: Optional[str] = None,\n replace: bool = False,\n allow_disk_use: bool = False,\n compression: Optional[str] = None,\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n if s3_conn_id:\n warnings.warn(_DEPRECATION_MSG, DeprecationWarning, stacklevel=3)\n aws_conn_id = s3_conn_id\n\n self.mongo_conn_id = mongo_conn_id\n self.aws_conn_id = aws_conn_id\n self.mongo_db = mongo_db\n self.mongo_collection = mongo_collection\n\n # Grab query and determine if we need to run an aggregate pipeline\n self.mongo_query = mongo_query\n self.is_pipeline = isinstance(self.mongo_query, list)\n\n self.s3_bucket = s3_bucket\n self.s3_key = s3_key\n self.replace = replace\n self.allow_disk_use = allow_disk_use\n self.compression = compression\n\n def execute(self, context) -> bool:\n \"\"\"Is written to depend on transform method\"\"\"\n s3_conn = S3Hook(self.aws_conn_id)\n\n # Grab collection and execute query according to whether or not it is a pipeline\n if self.is_pipeline:\n results = MongoHook(self.mongo_conn_id).aggregate(\n mongo_collection=self.mongo_collection,\n aggregate_query=cast(list, self.mongo_query),\n mongo_db=self.mongo_db,\n allowDiskUse=self.allow_disk_use,\n )\n\n else:\n results = MongoHook(self.mongo_conn_id).find(\n mongo_collection=self.mongo_collection,\n query=cast(dict, self.mongo_query),\n mongo_db=self.mongo_db,\n )\n\n # Performs transform then stringifies the docs results into json format\n docs_str = self._stringify(self.transform(results))\n\n s3_conn.load_string(\n string_data=docs_str,\n key=self.s3_key,\n bucket_name=self.s3_bucket,\n replace=self.replace,\n compression=self.compression,\n )\n\n @staticmethod\n def _stringify(iterable: Iterable, joinable: str = '\\n') -> str:\n \"\"\"\n Takes an iterable (pymongo Cursor or Array) containing dictionaries and\n returns a stringified version using python join\n \"\"\"\n return joinable.join([json.dumps(doc, default=json_util.default) for doc in iterable])\n\n @staticmethod\n def transform(docs: Any) -> Any:\n \"\"\"This method is meant to be extended by child classes\n to perform transformations unique to those operators needs.\n Processes pyMongo cursor and returns an iterable with each element being\n a JSON serializable dictionary\n\n Base transform() assumes no processing is needed\n ie. docs is a pyMongo cursor of documents and cursor just\n needs to be passed through\n\n Override this method for custom transformations\n \"\"\"\n return docs\n", "path": "airflow/providers/amazon/aws/transfers/mongo_to_s3.py"}]}
2,770
427
gh_patches_debug_27527
rasdani/github-patches
git_diff
getredash__redash-2503
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Athena query runner doesn't paginate schemas <!-- ##################################################################### # # Need support? USE THE FORUM! https://discuss.redash.io/c/support. # # Don't have steps to reproduce and actually not sure it's a bug? # Use the forum! https://discuss.redash.io/c/support. # ##################################################################### **Got an idea for a new feature?** Check if it isn't on the roadmap already: http://bit.ly/redash-roadmap and start a new discussion in the features category: https://discuss.redash.io/c/feature-requests 🌟. Found a bug? Please fill out the sections below... thank you 👍 --> ### Issue Summary Hi, AWS Athena query runner paginates only tables but not schemas. We have ~170 schemas, however, only the first 100 are shown in schema browser. ### Steps to Reproduce 1. Connect to Athena with Glue metastore which has more than 100 schemas 2. In the new query window, only 100 schemas are shown. I've looked into the code and looks like you paginate only on tables but not on schemas (https://github.com/getredash/redash/blob/v4.0.0/redash/query_runner/athena.py#L130). ### Technical details: * Redash Version: 4.0.0 * Browser/OS: * How did you install Redash: docker Kind regards, Ievgen. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `redash/query_runner/athena.py` Content: ``` 1 import logging 2 import os 3 4 from redash.query_runner import * 5 from redash.settings import parse_boolean 6 from redash.utils import json_dumps, json_loads 7 8 logger = logging.getLogger(__name__) 9 ANNOTATE_QUERY = parse_boolean(os.environ.get('ATHENA_ANNOTATE_QUERY', 'true')) 10 SHOW_EXTRA_SETTINGS = parse_boolean(os.environ.get('ATHENA_SHOW_EXTRA_SETTINGS', 'true')) 11 OPTIONAL_CREDENTIALS = parse_boolean(os.environ.get('ATHENA_OPTIONAL_CREDENTIALS', 'true')) 12 13 try: 14 import pyathena 15 import boto3 16 enabled = True 17 except ImportError: 18 enabled = False 19 20 21 _TYPE_MAPPINGS = { 22 'boolean': TYPE_BOOLEAN, 23 'tinyint': TYPE_INTEGER, 24 'smallint': TYPE_INTEGER, 25 'integer': TYPE_INTEGER, 26 'bigint': TYPE_INTEGER, 27 'double': TYPE_FLOAT, 28 'varchar': TYPE_STRING, 29 'timestamp': TYPE_DATETIME, 30 'date': TYPE_DATE, 31 'varbinary': TYPE_STRING, 32 'array': TYPE_STRING, 33 'map': TYPE_STRING, 34 'row': TYPE_STRING, 35 'decimal': TYPE_FLOAT, 36 } 37 38 39 class SimpleFormatter(object): 40 def format(self, operation, parameters=None): 41 return operation 42 43 44 class Athena(BaseQueryRunner): 45 noop_query = 'SELECT 1' 46 47 @classmethod 48 def name(cls): 49 return "Amazon Athena" 50 51 @classmethod 52 def configuration_schema(cls): 53 schema = { 54 'type': 'object', 55 'properties': { 56 'region': { 57 'type': 'string', 58 'title': 'AWS Region' 59 }, 60 'aws_access_key': { 61 'type': 'string', 62 'title': 'AWS Access Key' 63 }, 64 'aws_secret_key': { 65 'type': 'string', 66 'title': 'AWS Secret Key' 67 }, 68 's3_staging_dir': { 69 'type': 'string', 70 'title': 'S3 Staging (Query Results) Bucket Path' 71 }, 72 'schema': { 73 'type': 'string', 74 'title': 'Schema Name', 75 'default': 'default' 76 }, 77 'glue': { 78 'type': 'boolean', 79 'title': 'Use Glue Data Catalog', 80 }, 81 }, 82 'required': ['region', 's3_staging_dir'], 83 'order': ['region', 'aws_access_key', 'aws_secret_key', 's3_staging_dir', 'schema'], 84 'secret': 
['aws_secret_key'] 85 } 86 87 if SHOW_EXTRA_SETTINGS: 88 schema['properties'].update({ 89 'encryption_option': { 90 'type': 'string', 91 'title': 'Encryption Option', 92 }, 93 'kms_key': { 94 'type': 'string', 95 'title': 'KMS Key', 96 }, 97 }) 98 99 if not OPTIONAL_CREDENTIALS: 100 schema['required'] += ['aws_access_key', 'aws_secret_key'] 101 102 return schema 103 104 @classmethod 105 def enabled(cls): 106 return enabled 107 108 @classmethod 109 def annotate_query(cls): 110 return ANNOTATE_QUERY 111 112 @classmethod 113 def type(cls): 114 return "athena" 115 116 def __get_schema_from_glue(self): 117 client = boto3.client( 118 'glue', 119 aws_access_key_id=self.configuration.get('aws_access_key', None), 120 aws_secret_access_key=self.configuration.get('aws_secret_key', None), 121 region_name=self.configuration['region'] 122 ) 123 schema = {} 124 paginator = client.get_paginator('get_tables') 125 126 for database in client.get_databases()['DatabaseList']: 127 iterator = paginator.paginate(DatabaseName=database['Name']) 128 for table in iterator.search('TableList[]'): 129 table_name = '%s.%s' % (database['Name'], table['Name']) 130 if table_name not in schema: 131 column = [columns['Name'] for columns in table['StorageDescriptor']['Columns']] 132 schema[table_name] = {'name': table_name, 'columns': column} 133 for partition in table.get('PartitionKeys', []): 134 schema[table_name]['columns'].append(partition['Name']) 135 136 return schema.values() 137 138 def get_schema(self, get_stats=False): 139 if self.configuration.get('glue', False): 140 return self.__get_schema_from_glue() 141 142 schema = {} 143 query = """ 144 SELECT table_schema, table_name, column_name 145 FROM information_schema.columns 146 WHERE table_schema NOT IN ('information_schema') 147 """ 148 149 results, error = self.run_query(query, None) 150 if error is not None: 151 raise Exception("Failed getting schema.") 152 153 results = json_loads(results) 154 for row in results['rows']: 155 table_name = '{0}.{1}'.format(row['table_schema'], row['table_name']) 156 if table_name not in schema: 157 schema[table_name] = {'name': table_name, 'columns': []} 158 schema[table_name]['columns'].append(row['column_name']) 159 160 return schema.values() 161 162 def run_query(self, query, user): 163 cursor = pyathena.connect( 164 s3_staging_dir=self.configuration['s3_staging_dir'], 165 region_name=self.configuration['region'], 166 aws_access_key_id=self.configuration.get('aws_access_key', None), 167 aws_secret_access_key=self.configuration.get('aws_secret_key', None), 168 schema_name=self.configuration.get('schema', 'default'), 169 encryption_option=self.configuration.get('encryption_option', None), 170 kms_key=self.configuration.get('kms_key', None), 171 formatter=SimpleFormatter()).cursor() 172 173 try: 174 cursor.execute(query) 175 column_tuples = [(i[0], _TYPE_MAPPINGS.get(i[1], None)) for i in cursor.description] 176 columns = self.fetch_columns(column_tuples) 177 rows = [dict(zip(([c['name'] for c in columns]), r)) for i, r in enumerate(cursor.fetchall())] 178 qbytes = None 179 athena_query_id = None 180 try: 181 qbytes = cursor.data_scanned_in_bytes 182 except AttributeError as e: 183 logger.debug("Athena Upstream can't get data_scanned_in_bytes: %s", e) 184 try: 185 athena_query_id = cursor.query_id 186 except AttributeError as e: 187 logger.debug("Athena Upstream can't get query_id: %s", e) 188 data = { 189 'columns': columns, 190 'rows': rows, 191 'metadata': { 192 'data_scanned': qbytes, 193 'athena_query_id': athena_query_id 194 
} 195 } 196 json_data = json_dumps(data, ignore_nan=True) 197 error = None 198 except KeyboardInterrupt: 199 if cursor.query_id: 200 cursor.cancel() 201 error = "Query cancelled by user." 202 json_data = None 203 except Exception as ex: 204 if cursor.query_id: 205 cursor.cancel() 206 error = ex.message 207 json_data = None 208 209 return json_data, error 210 211 212 register(Athena) 213 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/redash/query_runner/athena.py b/redash/query_runner/athena.py --- a/redash/query_runner/athena.py +++ b/redash/query_runner/athena.py @@ -121,18 +121,20 @@ region_name=self.configuration['region'] ) schema = {} - paginator = client.get_paginator('get_tables') - - for database in client.get_databases()['DatabaseList']: - iterator = paginator.paginate(DatabaseName=database['Name']) - for table in iterator.search('TableList[]'): - table_name = '%s.%s' % (database['Name'], table['Name']) - if table_name not in schema: - column = [columns['Name'] for columns in table['StorageDescriptor']['Columns']] - schema[table_name] = {'name': table_name, 'columns': column} - for partition in table.get('PartitionKeys', []): - schema[table_name]['columns'].append(partition['Name']) + database_paginator = client.get_paginator('get_databases') + table_paginator = client.get_paginator('get_tables') + + for databases in database_paginator.paginate(): + for database in databases['DatabaseList']: + iterator = table_paginator.paginate(DatabaseName=database['Name']) + for table in iterator.search('TableList[]'): + table_name = '%s.%s' % (database['Name'], table['Name']) + if table_name not in schema: + column = [columns['Name'] for columns in table['StorageDescriptor']['Columns']] + schema[table_name] = {'name': table_name, 'columns': column} + for partition in table.get('PartitionKeys', []): + schema[table_name]['columns'].append(partition['Name']) return schema.values() def get_schema(self, get_stats=False):
{"golden_diff": "diff --git a/redash/query_runner/athena.py b/redash/query_runner/athena.py\n--- a/redash/query_runner/athena.py\n+++ b/redash/query_runner/athena.py\n@@ -121,18 +121,20 @@\n region_name=self.configuration['region']\n )\n schema = {}\n- paginator = client.get_paginator('get_tables')\n-\n- for database in client.get_databases()['DatabaseList']:\n- iterator = paginator.paginate(DatabaseName=database['Name'])\n- for table in iterator.search('TableList[]'):\n- table_name = '%s.%s' % (database['Name'], table['Name'])\n- if table_name not in schema:\n- column = [columns['Name'] for columns in table['StorageDescriptor']['Columns']]\n- schema[table_name] = {'name': table_name, 'columns': column}\n- for partition in table.get('PartitionKeys', []):\n- schema[table_name]['columns'].append(partition['Name'])\n \n+ database_paginator = client.get_paginator('get_databases')\n+ table_paginator = client.get_paginator('get_tables')\n+\n+ for databases in database_paginator.paginate():\n+ for database in databases['DatabaseList']:\n+ iterator = table_paginator.paginate(DatabaseName=database['Name'])\n+ for table in iterator.search('TableList[]'):\n+ table_name = '%s.%s' % (database['Name'], table['Name'])\n+ if table_name not in schema:\n+ column = [columns['Name'] for columns in table['StorageDescriptor']['Columns']]\n+ schema[table_name] = {'name': table_name, 'columns': column}\n+ for partition in table.get('PartitionKeys', []):\n+ schema[table_name]['columns'].append(partition['Name'])\n return schema.values()\n \n def get_schema(self, get_stats=False):\n", "issue": "Athena query runner doesn't paginate schemas\n<!--\r\n#####################################################################\r\n#\r\n# Need support? USE THE FORUM! https://discuss.redash.io/c/support.\r\n#\r\n# Don't have steps to reproduce and actually not sure it's a bug?\r\n# Use the forum! https://discuss.redash.io/c/support.\r\n#\r\n#####################################################################\r\n\r\n**Got an idea for a new feature?** Check if it isn't on the roadmap already: http://bit.ly/redash-roadmap and start a new discussion in the features category: https://discuss.redash.io/c/feature-requests \ud83c\udf1f.\r\n\r\nFound a bug? Please fill out the sections below... thank you \ud83d\udc4d\r\n-->\r\n\r\n### Issue Summary\r\nHi,\r\n\r\nAWS Athena query runner paginates only tables but not schemas.\r\nWe have ~170 schemas, however, only the first 100 are shown in schema browser.\r\n \r\n### Steps to Reproduce\r\n\r\n1. Connect to Athena with Glue metastore which has more than 100 schemas\r\n2. 
In the new query window, only 100 schemas are shown.\r\n\r\nI've looked into the code and looks like you paginate only on tables but not on schemas (https://github.com/getredash/redash/blob/v4.0.0/redash/query_runner/athena.py#L130).\r\n### Technical details:\r\n\r\n* Redash Version: 4.0.0\r\n* Browser/OS:\r\n* How did you install Redash: docker\r\n\r\n\r\nKind regards,\r\nIevgen.\n", "before_files": [{"content": "import logging\nimport os\n\nfrom redash.query_runner import *\nfrom redash.settings import parse_boolean\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\nANNOTATE_QUERY = parse_boolean(os.environ.get('ATHENA_ANNOTATE_QUERY', 'true'))\nSHOW_EXTRA_SETTINGS = parse_boolean(os.environ.get('ATHENA_SHOW_EXTRA_SETTINGS', 'true'))\nOPTIONAL_CREDENTIALS = parse_boolean(os.environ.get('ATHENA_OPTIONAL_CREDENTIALS', 'true'))\n\ntry:\n import pyathena\n import boto3\n enabled = True\nexcept ImportError:\n enabled = False\n\n\n_TYPE_MAPPINGS = {\n 'boolean': TYPE_BOOLEAN,\n 'tinyint': TYPE_INTEGER,\n 'smallint': TYPE_INTEGER,\n 'integer': TYPE_INTEGER,\n 'bigint': TYPE_INTEGER,\n 'double': TYPE_FLOAT,\n 'varchar': TYPE_STRING,\n 'timestamp': TYPE_DATETIME,\n 'date': TYPE_DATE,\n 'varbinary': TYPE_STRING,\n 'array': TYPE_STRING,\n 'map': TYPE_STRING,\n 'row': TYPE_STRING,\n 'decimal': TYPE_FLOAT,\n}\n\n\nclass SimpleFormatter(object):\n def format(self, operation, parameters=None):\n return operation\n\n\nclass Athena(BaseQueryRunner):\n noop_query = 'SELECT 1'\n\n @classmethod\n def name(cls):\n return \"Amazon Athena\"\n\n @classmethod\n def configuration_schema(cls):\n schema = {\n 'type': 'object',\n 'properties': {\n 'region': {\n 'type': 'string',\n 'title': 'AWS Region'\n },\n 'aws_access_key': {\n 'type': 'string',\n 'title': 'AWS Access Key'\n },\n 'aws_secret_key': {\n 'type': 'string',\n 'title': 'AWS Secret Key'\n },\n 's3_staging_dir': {\n 'type': 'string',\n 'title': 'S3 Staging (Query Results) Bucket Path'\n },\n 'schema': {\n 'type': 'string',\n 'title': 'Schema Name',\n 'default': 'default'\n },\n 'glue': {\n 'type': 'boolean',\n 'title': 'Use Glue Data Catalog',\n },\n },\n 'required': ['region', 's3_staging_dir'],\n 'order': ['region', 'aws_access_key', 'aws_secret_key', 's3_staging_dir', 'schema'],\n 'secret': ['aws_secret_key']\n }\n\n if SHOW_EXTRA_SETTINGS:\n schema['properties'].update({\n 'encryption_option': {\n 'type': 'string',\n 'title': 'Encryption Option',\n },\n 'kms_key': {\n 'type': 'string',\n 'title': 'KMS Key',\n },\n })\n\n if not OPTIONAL_CREDENTIALS:\n schema['required'] += ['aws_access_key', 'aws_secret_key']\n\n return schema\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def annotate_query(cls):\n return ANNOTATE_QUERY\n\n @classmethod\n def type(cls):\n return \"athena\"\n\n def __get_schema_from_glue(self):\n client = boto3.client(\n 'glue',\n aws_access_key_id=self.configuration.get('aws_access_key', None),\n aws_secret_access_key=self.configuration.get('aws_secret_key', None),\n region_name=self.configuration['region']\n )\n schema = {}\n paginator = client.get_paginator('get_tables')\n\n for database in client.get_databases()['DatabaseList']:\n iterator = paginator.paginate(DatabaseName=database['Name'])\n for table in iterator.search('TableList[]'):\n table_name = '%s.%s' % (database['Name'], table['Name'])\n if table_name not in schema:\n column = [columns['Name'] for columns in table['StorageDescriptor']['Columns']]\n schema[table_name] = {'name': table_name, 'columns': 
column}\n for partition in table.get('PartitionKeys', []):\n schema[table_name]['columns'].append(partition['Name'])\n\n return schema.values()\n\n def get_schema(self, get_stats=False):\n if self.configuration.get('glue', False):\n return self.__get_schema_from_glue()\n\n schema = {}\n query = \"\"\"\n SELECT table_schema, table_name, column_name\n FROM information_schema.columns\n WHERE table_schema NOT IN ('information_schema')\n \"\"\"\n\n results, error = self.run_query(query, None)\n if error is not None:\n raise Exception(\"Failed getting schema.\")\n\n results = json_loads(results)\n for row in results['rows']:\n table_name = '{0}.{1}'.format(row['table_schema'], row['table_name'])\n if table_name not in schema:\n schema[table_name] = {'name': table_name, 'columns': []}\n schema[table_name]['columns'].append(row['column_name'])\n\n return schema.values()\n\n def run_query(self, query, user):\n cursor = pyathena.connect(\n s3_staging_dir=self.configuration['s3_staging_dir'],\n region_name=self.configuration['region'],\n aws_access_key_id=self.configuration.get('aws_access_key', None),\n aws_secret_access_key=self.configuration.get('aws_secret_key', None),\n schema_name=self.configuration.get('schema', 'default'),\n encryption_option=self.configuration.get('encryption_option', None),\n kms_key=self.configuration.get('kms_key', None),\n formatter=SimpleFormatter()).cursor()\n\n try:\n cursor.execute(query)\n column_tuples = [(i[0], _TYPE_MAPPINGS.get(i[1], None)) for i in cursor.description]\n columns = self.fetch_columns(column_tuples)\n rows = [dict(zip(([c['name'] for c in columns]), r)) for i, r in enumerate(cursor.fetchall())]\n qbytes = None\n athena_query_id = None\n try:\n qbytes = cursor.data_scanned_in_bytes\n except AttributeError as e:\n logger.debug(\"Athena Upstream can't get data_scanned_in_bytes: %s\", e)\n try:\n athena_query_id = cursor.query_id\n except AttributeError as e:\n logger.debug(\"Athena Upstream can't get query_id: %s\", e)\n data = {\n 'columns': columns,\n 'rows': rows,\n 'metadata': {\n 'data_scanned': qbytes,\n 'athena_query_id': athena_query_id\n }\n }\n json_data = json_dumps(data, ignore_nan=True)\n error = None\n except KeyboardInterrupt:\n if cursor.query_id:\n cursor.cancel()\n error = \"Query cancelled by user.\"\n json_data = None\n except Exception as ex:\n if cursor.query_id:\n cursor.cancel()\n error = ex.message\n json_data = None\n\n return json_data, error\n\n\nregister(Athena)\n", "path": "redash/query_runner/athena.py"}], "after_files": [{"content": "import logging\nimport os\n\nfrom redash.query_runner import *\nfrom redash.settings import parse_boolean\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\nANNOTATE_QUERY = parse_boolean(os.environ.get('ATHENA_ANNOTATE_QUERY', 'true'))\nSHOW_EXTRA_SETTINGS = parse_boolean(os.environ.get('ATHENA_SHOW_EXTRA_SETTINGS', 'true'))\nOPTIONAL_CREDENTIALS = parse_boolean(os.environ.get('ATHENA_OPTIONAL_CREDENTIALS', 'true'))\n\ntry:\n import pyathena\n import boto3\n enabled = True\nexcept ImportError:\n enabled = False\n\n\n_TYPE_MAPPINGS = {\n 'boolean': TYPE_BOOLEAN,\n 'tinyint': TYPE_INTEGER,\n 'smallint': TYPE_INTEGER,\n 'integer': TYPE_INTEGER,\n 'bigint': TYPE_INTEGER,\n 'double': TYPE_FLOAT,\n 'varchar': TYPE_STRING,\n 'timestamp': TYPE_DATETIME,\n 'date': TYPE_DATE,\n 'varbinary': TYPE_STRING,\n 'array': TYPE_STRING,\n 'map': TYPE_STRING,\n 'row': TYPE_STRING,\n 'decimal': TYPE_FLOAT,\n}\n\n\nclass SimpleFormatter(object):\n def format(self, 
operation, parameters=None):\n return operation\n\n\nclass Athena(BaseQueryRunner):\n noop_query = 'SELECT 1'\n\n @classmethod\n def name(cls):\n return \"Amazon Athena\"\n\n @classmethod\n def configuration_schema(cls):\n schema = {\n 'type': 'object',\n 'properties': {\n 'region': {\n 'type': 'string',\n 'title': 'AWS Region'\n },\n 'aws_access_key': {\n 'type': 'string',\n 'title': 'AWS Access Key'\n },\n 'aws_secret_key': {\n 'type': 'string',\n 'title': 'AWS Secret Key'\n },\n 's3_staging_dir': {\n 'type': 'string',\n 'title': 'S3 Staging (Query Results) Bucket Path'\n },\n 'schema': {\n 'type': 'string',\n 'title': 'Schema Name',\n 'default': 'default'\n },\n 'glue': {\n 'type': 'boolean',\n 'title': 'Use Glue Data Catalog',\n },\n },\n 'required': ['region', 's3_staging_dir'],\n 'order': ['region', 'aws_access_key', 'aws_secret_key', 's3_staging_dir', 'schema'],\n 'secret': ['aws_secret_key']\n }\n\n if SHOW_EXTRA_SETTINGS:\n schema['properties'].update({\n 'encryption_option': {\n 'type': 'string',\n 'title': 'Encryption Option',\n },\n 'kms_key': {\n 'type': 'string',\n 'title': 'KMS Key',\n },\n })\n\n if not OPTIONAL_CREDENTIALS:\n schema['required'] += ['aws_access_key', 'aws_secret_key']\n\n return schema\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def annotate_query(cls):\n return ANNOTATE_QUERY\n\n @classmethod\n def type(cls):\n return \"athena\"\n\n def __get_schema_from_glue(self):\n client = boto3.client(\n 'glue',\n aws_access_key_id=self.configuration.get('aws_access_key', None),\n aws_secret_access_key=self.configuration.get('aws_secret_key', None),\n region_name=self.configuration['region']\n )\n schema = {}\n\n database_paginator = client.get_paginator('get_databases')\n table_paginator = client.get_paginator('get_tables')\n\n for databases in database_paginator.paginate():\n for database in databases['DatabaseList']:\n iterator = table_paginator.paginate(DatabaseName=database['Name'])\n for table in iterator.search('TableList[]'):\n table_name = '%s.%s' % (database['Name'], table['Name'])\n if table_name not in schema:\n column = [columns['Name'] for columns in table['StorageDescriptor']['Columns']]\n schema[table_name] = {'name': table_name, 'columns': column}\n for partition in table.get('PartitionKeys', []):\n schema[table_name]['columns'].append(partition['Name'])\n return schema.values()\n\n def get_schema(self, get_stats=False):\n if self.configuration.get('glue', False):\n return self.__get_schema_from_glue()\n\n schema = {}\n query = \"\"\"\n SELECT table_schema, table_name, column_name\n FROM information_schema.columns\n WHERE table_schema NOT IN ('information_schema')\n \"\"\"\n\n results, error = self.run_query(query, None)\n if error is not None:\n raise Exception(\"Failed getting schema.\")\n\n results = json_loads(results)\n for row in results['rows']:\n table_name = '{0}.{1}'.format(row['table_schema'], row['table_name'])\n if table_name not in schema:\n schema[table_name] = {'name': table_name, 'columns': []}\n schema[table_name]['columns'].append(row['column_name'])\n\n return schema.values()\n\n def run_query(self, query, user):\n cursor = pyathena.connect(\n s3_staging_dir=self.configuration['s3_staging_dir'],\n region_name=self.configuration['region'],\n aws_access_key_id=self.configuration.get('aws_access_key', None),\n aws_secret_access_key=self.configuration.get('aws_secret_key', None),\n schema_name=self.configuration.get('schema', 'default'),\n encryption_option=self.configuration.get('encryption_option', 
None),\n kms_key=self.configuration.get('kms_key', None),\n formatter=SimpleFormatter()).cursor()\n\n try:\n cursor.execute(query)\n column_tuples = [(i[0], _TYPE_MAPPINGS.get(i[1], None)) for i in cursor.description]\n columns = self.fetch_columns(column_tuples)\n rows = [dict(zip(([c['name'] for c in columns]), r)) for i, r in enumerate(cursor.fetchall())]\n qbytes = None\n athena_query_id = None\n try:\n qbytes = cursor.data_scanned_in_bytes\n except AttributeError as e:\n logger.debug(\"Athena Upstream can't get data_scanned_in_bytes: %s\", e)\n try:\n athena_query_id = cursor.query_id\n except AttributeError as e:\n logger.debug(\"Athena Upstream can't get query_id: %s\", e)\n data = {\n 'columns': columns,\n 'rows': rows,\n 'metadata': {\n 'data_scanned': qbytes,\n 'athena_query_id': athena_query_id\n }\n }\n json_data = json_dumps(data, ignore_nan=True)\n error = None\n except KeyboardInterrupt:\n if cursor.query_id:\n cursor.cancel()\n error = \"Query cancelled by user.\"\n json_data = None\n except Exception as ex:\n if cursor.query_id:\n cursor.cancel()\n error = ex.message\n json_data = None\n\n return json_data, error\n\n\nregister(Athena)\n", "path": "redash/query_runner/athena.py"}]}
2,603
408
gh_patches_debug_49038
rasdani/github-patches
git_diff
facebookresearch__hydra-907
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [Bug] Hydra's override grammar allows quoted resolver args, but OmegaConf does not recognize them # 🐛 Bug ## Description Best explained with an example (assume a basic `my_app.py` that prints the resolved config): ```bash python my_app.py +port='${env:PORT}' # works, may crash if PORT is not defined python my_app.py +port='${env:PORT,80}' # crashes: not recognized by Hydra's grammar python my_app.py +port='${env:PORT,"80"}' # accepted by Hydra but OmegaConf does not resolve it ``` This is because in the current version of OmegaConf, quotes are not allowed inside interpolations. ## Checklist - [x] I checked on the latest version of Hydra - [x] I created a minimal repro ## To reproduce ** Minimal Code/Config snippet to reproduce ** ```python from omegaconf import DictConfig, OmegaConf import hydra @hydra.main() def my_app(cfg: DictConfig) -> None: print(OmegaConf.to_yaml(cfg, resolve=True)) if __name__ == "__main__": my_app() ``` Then try the examples above. ** Stack trace/error message ** Just showing the last example: ``` port: ${env:PORT,"80"} ``` ## Expected Behavior Hydra should allow the same characters as OmegaConf. ## System information - **Hydra Version** : current master (7afee097) - **Python version** : 3.8.3 - **Virtual environment type and version** : conda 4.8.3 - **Operating system** : MacOS --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `hydra/core/override_parser/overrides_parser.py` Content: ``` 1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved 2 import sys 3 from typing import Any, List, Optional 4 5 from antlr4.error.Errors import LexerNoViableAltException, RecognitionException 6 7 from hydra._internal.grammar import grammar_functions 8 from hydra._internal.grammar.functions import Functions 9 from hydra.core.config_loader import ConfigLoader 10 from hydra.core.override_parser.overrides_visitor import ( 11 HydraErrorListener, 12 HydraOverrideVisitor, 13 ) 14 from hydra.core.override_parser.types import Override 15 from hydra.errors import HydraException, OverrideParseException 16 17 try: 18 from hydra.grammar.gen.OverrideLexer import ( 19 CommonTokenStream, 20 InputStream, 21 OverrideLexer, 22 ) 23 from hydra.grammar.gen.OverrideParser import OverrideParser 24 25 except ModuleNotFoundError: 26 print( 27 "Error importing generated parsers, run `python setup.py antlr` to regenerate." 28 ) 29 sys.exit(1) 30 31 # The set of parser rules that require the lexer to be in lexical mode `KEY`. 32 KEY_RULES = {"key", "override", "package", "packageOrGroup"} 33 34 35 class OverridesParser: 36 functions: Functions 37 38 @classmethod 39 def create(cls, config_loader: Optional[ConfigLoader] = None) -> "OverridesParser": 40 functions = create_functions() 41 return cls(functions=functions, config_loader=config_loader) 42 43 def __init__( 44 self, functions: Functions, config_loader: Optional[ConfigLoader] = None 45 ): 46 self.functions = functions 47 self.config_loader = config_loader 48 49 def parse_rule(self, s: str, rule_name: str) -> Any: 50 error_listener = HydraErrorListener() 51 istream = InputStream(s) 52 lexer = OverrideLexer(istream) 53 lexer.removeErrorListeners() 54 lexer.addErrorListener(error_listener) 55 56 # Set the lexer in the correct mode to parse the desired rule. 
57 lexer_mode = "KEY" if rule_name in KEY_RULES else "VALUE" 58 lexer.mode(getattr(OverrideLexer, lexer_mode)) 59 60 stream = CommonTokenStream(lexer) 61 parser = OverrideParser(stream) 62 parser.removeErrorListeners() 63 parser.addErrorListener(error_listener) 64 visitor = HydraOverrideVisitor(self.functions) 65 rule = getattr(parser, rule_name) 66 tree = rule() 67 ret = visitor.visit(tree) 68 if isinstance(ret, Override): 69 ret.input_line = s 70 return ret 71 72 def parse_override(self, s: str) -> Override: 73 ret = self.parse_rule(s, "override") 74 assert isinstance(ret, Override) 75 return ret 76 77 def parse_overrides(self, overrides: List[str]) -> List[Override]: 78 ret: List[Override] = [] 79 for override in overrides: 80 try: 81 parsed = self.parse_rule(override, "override") 82 except HydraException as e: 83 cause = e.__cause__ 84 if isinstance(cause, LexerNoViableAltException): 85 prefix = "LexerNoViableAltException: " 86 start = len(prefix) + cause.startIndex + 1 87 msg = f"{prefix}{override}" f"\n{'^'.rjust(start)}" 88 e.__cause__ = None 89 elif isinstance(cause, RecognitionException): 90 prefix = f"{e}" 91 msg = f"{prefix}" 92 e.__cause__ = None 93 else: 94 msg = f"Error parsing override '{override}'" f"\n{e}" 95 raise OverrideParseException( 96 override=override, 97 message=f"{msg}" 98 f"\nSee https://hydra.cc/docs/next/advanced/override_grammar/basic for details", 99 ) from e.__cause__ 100 assert isinstance(parsed, Override) 101 parsed.config_loader = self.config_loader 102 ret.append(parsed) 103 return ret 104 105 106 def create_functions() -> Functions: 107 functions = Functions() 108 # casts 109 functions.register(name="int", func=grammar_functions.cast_int) 110 functions.register(name="str", func=grammar_functions.cast_str) 111 functions.register(name="bool", func=grammar_functions.cast_bool) 112 functions.register(name="float", func=grammar_functions.cast_float) 113 # sweeps 114 functions.register(name="choice", func=grammar_functions.choice) 115 functions.register(name="range", func=grammar_functions.range) 116 functions.register(name="interval", func=grammar_functions.interval) 117 # misc 118 functions.register(name="tag", func=grammar_functions.tag) 119 functions.register(name="sort", func=grammar_functions.sort) 120 functions.register(name="shuffle", func=grammar_functions.shuffle) 121 functions.register(name="glob", func=grammar_functions.glob) 122 return functions 123 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/hydra/core/override_parser/overrides_parser.py b/hydra/core/override_parser/overrides_parser.py --- a/hydra/core/override_parser/overrides_parser.py +++ b/hydra/core/override_parser/overrides_parser.py @@ -54,8 +54,8 @@ lexer.addErrorListener(error_listener) # Set the lexer in the correct mode to parse the desired rule. - lexer_mode = "KEY" if rule_name in KEY_RULES else "VALUE" - lexer.mode(getattr(OverrideLexer, lexer_mode)) + if rule_name not in KEY_RULES: + lexer.mode(OverrideLexer.VALUE_MODE) stream = CommonTokenStream(lexer) parser = OverrideParser(stream)
{"golden_diff": "diff --git a/hydra/core/override_parser/overrides_parser.py b/hydra/core/override_parser/overrides_parser.py\n--- a/hydra/core/override_parser/overrides_parser.py\n+++ b/hydra/core/override_parser/overrides_parser.py\n@@ -54,8 +54,8 @@\n lexer.addErrorListener(error_listener)\n \n # Set the lexer in the correct mode to parse the desired rule.\n- lexer_mode = \"KEY\" if rule_name in KEY_RULES else \"VALUE\"\n- lexer.mode(getattr(OverrideLexer, lexer_mode))\n+ if rule_name not in KEY_RULES:\n+ lexer.mode(OverrideLexer.VALUE_MODE)\n \n stream = CommonTokenStream(lexer)\n parser = OverrideParser(stream)\n", "issue": "[Bug] Hydra's override grammar allows quoted resolver args, but OmegaConf does not recognize them\n# \ud83d\udc1b Bug\r\n\r\n## Description\r\n\r\nBest explained with an example (assume a basic `my_app.py` that prints the resolved config):\r\n\r\n```bash\r\npython my_app.py +port='${env:PORT}' # works, may crash if PORT is not defined\r\npython my_app.py +port='${env:PORT,80}' # crashes: not recognized by Hydra's grammar\r\npython my_app.py +port='${env:PORT,\"80\"}' # accepted by Hydra but OmegaConf does not resolve it\r\n```\r\n\r\nThis is because in the current version of OmegaConf, quotes are not allowed inside interpolations.\r\n\r\n## Checklist\r\n- [x] I checked on the latest version of Hydra\r\n- [x] I created a minimal repro\r\n\r\n## To reproduce\r\n\r\n** Minimal Code/Config snippet to reproduce **\r\n\r\n```python\r\nfrom omegaconf import DictConfig, OmegaConf\r\nimport hydra\r\n\r\[email protected]()\r\ndef my_app(cfg: DictConfig) -> None:\r\n print(OmegaConf.to_yaml(cfg, resolve=True))\r\n\r\nif __name__ == \"__main__\":\r\n my_app()\r\n```\r\n\r\nThen try the examples above.\r\n\r\n** Stack trace/error message **\r\n\r\nJust showing the last example:\r\n```\r\nport: ${env:PORT,\"80\"}\r\n```\r\n\r\n## Expected Behavior\r\nHydra should allow the same characters as OmegaConf.\r\n\r\n## System information\r\n- **Hydra Version** : current master (7afee097)\r\n- **Python version** : 3.8.3\r\n- **Virtual environment type and version** : conda 4.8.3\r\n- **Operating system** : MacOS\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport sys\nfrom typing import Any, List, Optional\n\nfrom antlr4.error.Errors import LexerNoViableAltException, RecognitionException\n\nfrom hydra._internal.grammar import grammar_functions\nfrom hydra._internal.grammar.functions import Functions\nfrom hydra.core.config_loader import ConfigLoader\nfrom hydra.core.override_parser.overrides_visitor import (\n HydraErrorListener,\n HydraOverrideVisitor,\n)\nfrom hydra.core.override_parser.types import Override\nfrom hydra.errors import HydraException, OverrideParseException\n\ntry:\n from hydra.grammar.gen.OverrideLexer import (\n CommonTokenStream,\n InputStream,\n OverrideLexer,\n )\n from hydra.grammar.gen.OverrideParser import OverrideParser\n\nexcept ModuleNotFoundError:\n print(\n \"Error importing generated parsers, run `python setup.py antlr` to regenerate.\"\n )\n sys.exit(1)\n\n# The set of parser rules that require the lexer to be in lexical mode `KEY`.\nKEY_RULES = {\"key\", \"override\", \"package\", \"packageOrGroup\"}\n\n\nclass OverridesParser:\n functions: Functions\n\n @classmethod\n def create(cls, config_loader: Optional[ConfigLoader] = None) -> \"OverridesParser\":\n functions = create_functions()\n return cls(functions=functions, config_loader=config_loader)\n\n def __init__(\n self, functions: Functions, config_loader: Optional[ConfigLoader] = None\n ):\n self.functions = functions\n self.config_loader = config_loader\n\n def parse_rule(self, s: str, rule_name: str) -> Any:\n error_listener = HydraErrorListener()\n istream = InputStream(s)\n lexer = OverrideLexer(istream)\n lexer.removeErrorListeners()\n lexer.addErrorListener(error_listener)\n\n # Set the lexer in the correct mode to parse the desired rule.\n lexer_mode = \"KEY\" if rule_name in KEY_RULES else \"VALUE\"\n lexer.mode(getattr(OverrideLexer, lexer_mode))\n\n stream = CommonTokenStream(lexer)\n parser = OverrideParser(stream)\n parser.removeErrorListeners()\n parser.addErrorListener(error_listener)\n visitor = HydraOverrideVisitor(self.functions)\n rule = getattr(parser, rule_name)\n tree = rule()\n ret = visitor.visit(tree)\n if isinstance(ret, Override):\n ret.input_line = s\n return ret\n\n def parse_override(self, s: str) -> Override:\n ret = self.parse_rule(s, \"override\")\n assert isinstance(ret, Override)\n return ret\n\n def parse_overrides(self, overrides: List[str]) -> List[Override]:\n ret: List[Override] = []\n for override in overrides:\n try:\n parsed = self.parse_rule(override, \"override\")\n except HydraException as e:\n cause = e.__cause__\n if isinstance(cause, LexerNoViableAltException):\n prefix = \"LexerNoViableAltException: \"\n start = len(prefix) + cause.startIndex + 1\n msg = f\"{prefix}{override}\" f\"\\n{'^'.rjust(start)}\"\n e.__cause__ = None\n elif isinstance(cause, RecognitionException):\n prefix = f\"{e}\"\n msg = f\"{prefix}\"\n e.__cause__ = None\n else:\n msg = f\"Error parsing override '{override}'\" f\"\\n{e}\"\n raise OverrideParseException(\n override=override,\n message=f\"{msg}\"\n f\"\\nSee https://hydra.cc/docs/next/advanced/override_grammar/basic for details\",\n ) from e.__cause__\n assert isinstance(parsed, Override)\n parsed.config_loader = self.config_loader\n ret.append(parsed)\n return ret\n\n\ndef create_functions() -> Functions:\n functions = Functions()\n # casts\n functions.register(name=\"int\", func=grammar_functions.cast_int)\n functions.register(name=\"str\", func=grammar_functions.cast_str)\n functions.register(name=\"bool\", func=grammar_functions.cast_bool)\n 
functions.register(name=\"float\", func=grammar_functions.cast_float)\n # sweeps\n functions.register(name=\"choice\", func=grammar_functions.choice)\n functions.register(name=\"range\", func=grammar_functions.range)\n functions.register(name=\"interval\", func=grammar_functions.interval)\n # misc\n functions.register(name=\"tag\", func=grammar_functions.tag)\n functions.register(name=\"sort\", func=grammar_functions.sort)\n functions.register(name=\"shuffle\", func=grammar_functions.shuffle)\n functions.register(name=\"glob\", func=grammar_functions.glob)\n return functions\n", "path": "hydra/core/override_parser/overrides_parser.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport sys\nfrom typing import Any, List, Optional\n\nfrom antlr4.error.Errors import LexerNoViableAltException, RecognitionException\n\nfrom hydra._internal.grammar import grammar_functions\nfrom hydra._internal.grammar.functions import Functions\nfrom hydra.core.config_loader import ConfigLoader\nfrom hydra.core.override_parser.overrides_visitor import (\n HydraErrorListener,\n HydraOverrideVisitor,\n)\nfrom hydra.core.override_parser.types import Override\nfrom hydra.errors import HydraException, OverrideParseException\n\ntry:\n from hydra.grammar.gen.OverrideLexer import (\n CommonTokenStream,\n InputStream,\n OverrideLexer,\n )\n from hydra.grammar.gen.OverrideParser import OverrideParser\n\nexcept ModuleNotFoundError:\n print(\n \"Error importing generated parsers, run `python setup.py antlr` to regenerate.\"\n )\n sys.exit(1)\n\n# The set of parser rules that require the lexer to be in lexical mode `KEY`.\nKEY_RULES = {\"key\", \"override\", \"package\", \"packageOrGroup\"}\n\n\nclass OverridesParser:\n functions: Functions\n\n @classmethod\n def create(cls, config_loader: Optional[ConfigLoader] = None) -> \"OverridesParser\":\n functions = create_functions()\n return cls(functions=functions, config_loader=config_loader)\n\n def __init__(\n self, functions: Functions, config_loader: Optional[ConfigLoader] = None\n ):\n self.functions = functions\n self.config_loader = config_loader\n\n def parse_rule(self, s: str, rule_name: str) -> Any:\n error_listener = HydraErrorListener()\n istream = InputStream(s)\n lexer = OverrideLexer(istream)\n lexer.removeErrorListeners()\n lexer.addErrorListener(error_listener)\n\n # Set the lexer in the correct mode to parse the desired rule.\n if rule_name not in KEY_RULES:\n lexer.mode(OverrideLexer.VALUE_MODE)\n\n stream = CommonTokenStream(lexer)\n parser = OverrideParser(stream)\n parser.removeErrorListeners()\n parser.addErrorListener(error_listener)\n visitor = HydraOverrideVisitor(self.functions)\n rule = getattr(parser, rule_name)\n tree = rule()\n ret = visitor.visit(tree)\n if isinstance(ret, Override):\n ret.input_line = s\n return ret\n\n def parse_override(self, s: str) -> Override:\n ret = self.parse_rule(s, \"override\")\n assert isinstance(ret, Override)\n return ret\n\n def parse_overrides(self, overrides: List[str]) -> List[Override]:\n ret: List[Override] = []\n for override in overrides:\n try:\n parsed = self.parse_rule(override, \"override\")\n except HydraException as e:\n cause = e.__cause__\n if isinstance(cause, LexerNoViableAltException):\n prefix = \"LexerNoViableAltException: \"\n start = len(prefix) + cause.startIndex + 1\n msg = f\"{prefix}{override}\" f\"\\n{'^'.rjust(start)}\"\n e.__cause__ = None\n elif isinstance(cause, RecognitionException):\n prefix = f\"{e}\"\n msg = f\"{prefix}\"\n 
e.__cause__ = None\n else:\n msg = f\"Error parsing override '{override}'\" f\"\\n{e}\"\n raise OverrideParseException(\n override=override,\n message=f\"{msg}\"\n f\"\\nSee https://hydra.cc/docs/next/advanced/override_grammar/basic for details\",\n ) from e.__cause__\n assert isinstance(parsed, Override)\n parsed.config_loader = self.config_loader\n ret.append(parsed)\n return ret\n\n\ndef create_functions() -> Functions:\n functions = Functions()\n # casts\n functions.register(name=\"int\", func=grammar_functions.cast_int)\n functions.register(name=\"str\", func=grammar_functions.cast_str)\n functions.register(name=\"bool\", func=grammar_functions.cast_bool)\n functions.register(name=\"float\", func=grammar_functions.cast_float)\n # sweeps\n functions.register(name=\"choice\", func=grammar_functions.choice)\n functions.register(name=\"range\", func=grammar_functions.range)\n functions.register(name=\"interval\", func=grammar_functions.interval)\n # misc\n functions.register(name=\"tag\", func=grammar_functions.tag)\n functions.register(name=\"sort\", func=grammar_functions.sort)\n functions.register(name=\"shuffle\", func=grammar_functions.shuffle)\n functions.register(name=\"glob\", func=grammar_functions.glob)\n return functions\n", "path": "hydra/core/override_parser/overrides_parser.py"}]}
1,857
162
gh_patches_debug_28643
rasdani/github-patches
git_diff
pyjanitor-devs__pyjanitor-550
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [ENH] Decorator utility for handling NAs and errors (for transform_column) # Brief Description If say `df["x"]` is a string column with NAs, then the routine `df["x"].apply(lambda x : x+"1")` doesn't work due to the NAs. It would be useful to have a decorator such that ```python df.transform_column("x", skipna(lambda x : x + "1")) ``` would work # Example API ```python from janitor.func_utils import skipna, skiperror df.transform_column("x", skipna(lambda x : x + "1")) df.transform_column("y", skiperror(func)) @skiperror def foo(x): ... pass ``` # Example implementation ```python def skipna(f): def _wrapped(x): if type(x) is float and np.isnan(x) or x is None: return np.nan else: return f(x) return _wrapped def skiperror(f, return_val=None, return_x=False): def _wrapped(x): try: return f(x) except: if return_x: return x return return_val return _wrapped ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `janitor/utils.py` Content: ``` 1 """ Miscellaneous internal PyJanitor helper functions. """ 2 3 import functools 4 import warnings 5 from typing import Callable, Dict, List, Union 6 7 import pandas as pd 8 9 from .errors import JanitorError 10 11 12 def check(varname: str, value, expected_types: list): 13 """ 14 One-liner syntactic sugar for checking types. 15 16 Should be used like this:: 17 18 check('x', x, [int, float]) 19 20 :param varname: The name of the variable. 21 :param value: The value of the varname. 22 :param expected_types: The types we expect the item to be. 23 :returns: TypeError if data is not the expected type. 24 """ 25 is_expected_type = False 26 for t in expected_types: 27 if isinstance(value, t): 28 is_expected_type = True 29 break 30 31 if not is_expected_type: 32 raise TypeError( 33 "{varname} should be one of {expected_types}".format( 34 varname=varname, expected_types=expected_types 35 ) 36 ) 37 38 39 def _clean_accounting_column(x: str) -> float: 40 """ 41 Perform the logic for the `cleaning_style == "accounting"` attribute. 42 43 This is a private function, not intended to be used outside of 44 ``currency_column_to_numeric``. 45 46 It is intended to be used in a pandas `apply` method. 47 48 :returns: An object with a cleaned column. 49 """ 50 y = x.strip() 51 y = y.replace(",", "") 52 y = y.replace(")", "") 53 y = y.replace("(", "-") 54 if y == "-": 55 return 0.00 56 return float(y) 57 58 59 def _currency_column_to_numeric(x, cast_non_numeric=None) -> str: 60 """ 61 Perform logic for changing cell values. 62 63 This is a private function intended to be used only in 64 ``currency_column_to_numeric``. 65 66 It is intended to be used in a pandas `apply` method, after being passed 67 through `partial`. 
68 """ 69 acceptable_currency_characters = { 70 "-", 71 ".", 72 "1", 73 "2", 74 "3", 75 "4", 76 "5", 77 "6", 78 "7", 79 "8", 80 "9", 81 "0", 82 } 83 if len(x) == 0: 84 return "ORIGINAL_NA" 85 86 if cast_non_numeric: 87 if x in cast_non_numeric.keys(): 88 check( 89 "{%r: %r}" % (x, str(cast_non_numeric[x])), 90 cast_non_numeric[x], 91 [int, float], 92 ) 93 return cast_non_numeric[x] 94 else: 95 return "".join(i for i in x if i in acceptable_currency_characters) 96 else: 97 return "".join(i for i in x if i in acceptable_currency_characters) 98 99 100 def _replace_empty_string_with_none(x): 101 if isinstance(x, int): 102 return x 103 104 elif isinstance(x, float): 105 return x 106 107 elif len(x): 108 return x 109 110 111 def _replace_original_empty_string_with_none(x): 112 if x != "ORIGINAL_NA": 113 return x 114 115 116 def _strip_underscores( 117 df: pd.DataFrame, strip_underscores: Union[str, bool] = None 118 ) -> pd.DataFrame: 119 """ 120 Strip underscores from DataFrames column names. 121 122 Underscores can be stripped from the beginning, end or both. 123 124 .. code-block:: python 125 126 df = _strip_underscores(df, strip_underscores='left') 127 128 :param df: The pandas DataFrame object. 129 :param strip_underscores: (optional) Removes the outer underscores from all 130 column names. Default None keeps outer underscores. Values can be 131 either 'left', 'right' or 'both' or the respective shorthand 'l', 'r' 132 and True. 133 :returns: A pandas DataFrame with underscores removed. 134 """ 135 136 df = df.rename( 137 columns=lambda x: _strip_underscores_func(x, strip_underscores) 138 ) 139 return df 140 141 142 def _strip_underscores_func( 143 col: str, strip_underscores: Union[str, bool] = None 144 ) -> pd.DataFrame: 145 """Strip underscores from a string.""" 146 underscore_options = [None, "left", "right", "both", "l", "r", True] 147 if strip_underscores not in underscore_options: 148 raise JanitorError( 149 f"strip_underscores must be one of: {underscore_options}" 150 ) 151 152 if strip_underscores in ["left", "l"]: 153 col = col.lstrip("_") 154 elif strip_underscores in ["right", "r"]: 155 col = col.rstrip("_") 156 elif strip_underscores == "both" or strip_underscores is True: 157 col = col.strip("_") 158 return col 159 160 161 def import_message(submodule: str, package: str, installation: str): 162 """ 163 Generic message for indicating to the user when a function relies on an 164 optional module / package that is not currently installed. Includes 165 installation instructions. Used in `chemistry.py` and `biology.py`. 166 167 :param submodule: pyjanitor submodule that needs an external dependency. 168 :param package: External package this submodule relies on. 169 :param installation: Command to execute in the environment to install 170 the package. 171 """ 172 173 print( 174 f"To use the janitor submodule {submodule}, you need to install " 175 f"{package}." 176 ) 177 print() 178 print(f"To do so, use the following command:") 179 print() 180 print(f" {installation}") 181 182 183 def idempotent(func: Callable, df: pd.DataFrame, *args, **kwargs): 184 """ 185 Raises error if a function operating on a `DataFrame` is not idempotent, 186 that is, `func(func(df)) = func(df)` is not true for all `df`. 187 188 :param func: A python method. 189 :param df: A pandas `DataFrame`. 190 :param args: Positional arguments supplied to the method. 191 :param kwargs: Keyword arguments supplied to the method. 
192 :raises ValueError: If `func` is found to not be idempotent for the given 193 `DataFrame` `df`. 194 """ 195 if not func(df, *args, **kwargs) == func( 196 func(df, *args, **kwargs), *args, **kwargs 197 ): 198 raise ValueError( 199 "Supplied function is not idempotent for the given " "DataFrame." 200 ) 201 202 203 def deprecated_alias(**aliases) -> Callable: 204 """ 205 Used as a decorator when deprecating old function argument names, while 206 keeping backwards compatibility. 207 208 Implementation is inspired from `StackOverflow`_. 209 210 .. _StackOverflow: https://stackoverflow.com/questions/49802412/how-to-implement-deprecation-in-python-with-argument-alias # noqa: E501 211 212 Functional usage example: 213 214 .. code-block:: python 215 216 @deprecated_alias(a='alpha', b='beta') 217 def simple_sum(alpha, beta): 218 return alpha + beta 219 220 :param aliases: Dictionary of aliases for a function's arguments. 221 :return: Your original function wrapped with the kwarg redirection 222 function. 223 """ 224 225 def decorator(func): 226 @functools.wraps(func) 227 def wrapper(*args, **kwargs): 228 rename_kwargs(func.__name__, kwargs, aliases) 229 return func(*args, **kwargs) 230 231 return wrapper 232 233 return decorator 234 235 236 def rename_kwargs(func_name: str, kwargs: Dict, aliases: Dict): 237 """ 238 Used to update deprecated argument names with new names. Throws a 239 TypeError if both arguments are provided, and warns if old alias is used. 240 241 Implementation is inspired from `StackOverflow`_. 242 243 .. _StackOverflow: https://stackoverflow.com/questions/49802412/how-to-implement-deprecation-in-python-with-argument-alias # noqa: E501 244 245 :param func_name: name of decorated function. 246 :param kwargs: Arguments supplied to the method. 247 :param aliases: Dictionary of aliases for a function's arguments. 248 :return: Nothing; the passed `kwargs` are modified directly. 249 """ 250 for old_alias, new_alias in aliases.items(): 251 if old_alias in kwargs: 252 if new_alias in kwargs: 253 raise TypeError( 254 f"{func_name} received both {old_alias} and {new_alias}" 255 ) 256 warnings.warn( 257 f"{old_alias} is deprecated; use {new_alias}", 258 DeprecationWarning, 259 ) 260 kwargs[new_alias] = kwargs.pop(old_alias) 261 262 263 def check_column( 264 df: pd.DataFrame, old_column_names: List, present: bool = True 265 ): 266 """ 267 One-liner syntactic sugar for checking the presence or absence of a column. 268 269 Should be used like this:: 270 271 check(df, ['a', 'b'], present=True) 272 273 :param df: The name of the variable. 274 :param old_column_names: A list of column names we want to check to see if 275 present (or absent) in df. 276 :param present: If True (default), checks to see if all of old_column_names 277 are in df.columns. If False, checks that none of old_column_names are 278 in df.columns. 279 :returns: ValueError if data is not the expected type. 280 """ 281 for column_name in old_column_names: 282 if present: 283 if column_name not in df.columns: 284 raise ValueError( 285 f"{column_name} not present in dataframe columns!" 286 ) 287 else: # Tests for exclusion 288 if column_name in df.columns: 289 raise ValueError( 290 f"{column_name} already present in dataframe columns!" 291 ) 292 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/janitor/utils.py b/janitor/utils.py --- a/janitor/utils.py +++ b/janitor/utils.py @@ -4,6 +4,7 @@ import warnings from typing import Callable, Dict, List, Union +import numpy as np import pandas as pd from .errors import JanitorError @@ -289,3 +290,66 @@ raise ValueError( f"{column_name} already present in dataframe columns!" ) + + +def skipna(f: Callable) -> Callable: + """ + Decorator for escaping np.nan and None in a function + + Should be used like this:: + + df[column].apply(skipna(transform)) + + or:: + + @skipna + def transform(x): + pass + + :param f: the function to be wrapped + :returns: _wrapped, the wrapped function + """ + + def _wrapped(x, *args, **kwargs): + if (type(x) is float and np.isnan(x)) or x is None: + return np.nan + else: + return f(x, *args, **kwargs) + + return _wrapped + + +def skiperror( + f: Callable, return_x: bool = False, return_val=np.nan +) -> Callable: + """ + Decorator for escaping errors in a function + + Should be used like this:: + + df[column].apply( + skiperror(transform, return_val=3, return_x=False)) + + or:: + + @skiperror(return_val=3, return_x=False) + def transform(x): + pass + + :param f: the function to be wrapped + :param return_x: whether or not the original value that caused error + should be returned + :param return_val: the value to be returned when an error hits. + Ignored if return_x is True + :returns: _wrapped, the wrapped function + """ + + def _wrapped(x, *args, **kwargs): + try: + return f(x, *args, **kwargs) + except Exception: + if return_x: + return x + return return_val + + return _wrapped
{"golden_diff": "diff --git a/janitor/utils.py b/janitor/utils.py\n--- a/janitor/utils.py\n+++ b/janitor/utils.py\n@@ -4,6 +4,7 @@\n import warnings\n from typing import Callable, Dict, List, Union\n \n+import numpy as np\n import pandas as pd\n \n from .errors import JanitorError\n@@ -289,3 +290,66 @@\n raise ValueError(\n f\"{column_name} already present in dataframe columns!\"\n )\n+\n+\n+def skipna(f: Callable) -> Callable:\n+ \"\"\"\n+ Decorator for escaping np.nan and None in a function\n+\n+ Should be used like this::\n+\n+ df[column].apply(skipna(transform))\n+\n+ or::\n+\n+ @skipna\n+ def transform(x):\n+ pass\n+\n+ :param f: the function to be wrapped\n+ :returns: _wrapped, the wrapped function\n+ \"\"\"\n+\n+ def _wrapped(x, *args, **kwargs):\n+ if (type(x) is float and np.isnan(x)) or x is None:\n+ return np.nan\n+ else:\n+ return f(x, *args, **kwargs)\n+\n+ return _wrapped\n+\n+\n+def skiperror(\n+ f: Callable, return_x: bool = False, return_val=np.nan\n+) -> Callable:\n+ \"\"\"\n+ Decorator for escaping errors in a function\n+\n+ Should be used like this::\n+\n+ df[column].apply(\n+ skiperror(transform, return_val=3, return_x=False))\n+\n+ or::\n+\n+ @skiperror(return_val=3, return_x=False)\n+ def transform(x):\n+ pass\n+\n+ :param f: the function to be wrapped\n+ :param return_x: whether or not the original value that caused error\n+ should be returned\n+ :param return_val: the value to be returned when an error hits.\n+ Ignored if return_x is True\n+ :returns: _wrapped, the wrapped function\n+ \"\"\"\n+\n+ def _wrapped(x, *args, **kwargs):\n+ try:\n+ return f(x, *args, **kwargs)\n+ except Exception:\n+ if return_x:\n+ return x\n+ return return_val\n+\n+ return _wrapped\n", "issue": "[ENH] Decorator utility for handling NAs and errors (for transform_column)\n# Brief Description\r\n\r\nIf say `df[\"x\"]` is a string column with NAs, then the routine `df[\"x\"].apply(lambda x : x+\"1\")` doesn't work due to the NAs. It would be useful to have a decorator such that \r\n```python\r\ndf.transform_column(\"x\", skipna(lambda x : x + \"1\"))\r\n``` \r\nwould work\r\n\r\n# Example API\r\n\r\n```python\r\nfrom janitor.func_utils import skipna, skiperror\r\n\r\ndf.transform_column(\"x\", skipna(lambda x : x + \"1\"))\r\ndf.transform_column(\"y\", skiperror(func))\r\n\r\n@skiperror\r\ndef foo(x):\r\n ...\r\n pass\r\n```\r\n\r\n# Example implementation\r\n\r\n```python\r\n\r\ndef skipna(f):\r\n def _wrapped(x):\r\n if type(x) is float and np.isnan(x) or x is None:\r\n return np.nan\r\n else:\r\n return f(x)\r\n\r\n return _wrapped\r\n\r\n\r\ndef skiperror(f, return_val=None, return_x=False):\r\n def _wrapped(x):\r\n try:\r\n return f(x)\r\n except:\r\n if return_x:\r\n return x\r\n return return_val\r\n\r\n return _wrapped\r\n```\r\n\n", "before_files": [{"content": "\"\"\" Miscellaneous internal PyJanitor helper functions. 
\"\"\"\n\nimport functools\nimport warnings\nfrom typing import Callable, Dict, List, Union\n\nimport pandas as pd\n\nfrom .errors import JanitorError\n\n\ndef check(varname: str, value, expected_types: list):\n \"\"\"\n One-liner syntactic sugar for checking types.\n\n Should be used like this::\n\n check('x', x, [int, float])\n\n :param varname: The name of the variable.\n :param value: The value of the varname.\n :param expected_types: The types we expect the item to be.\n :returns: TypeError if data is not the expected type.\n \"\"\"\n is_expected_type = False\n for t in expected_types:\n if isinstance(value, t):\n is_expected_type = True\n break\n\n if not is_expected_type:\n raise TypeError(\n \"{varname} should be one of {expected_types}\".format(\n varname=varname, expected_types=expected_types\n )\n )\n\n\ndef _clean_accounting_column(x: str) -> float:\n \"\"\"\n Perform the logic for the `cleaning_style == \"accounting\"` attribute.\n\n This is a private function, not intended to be used outside of\n ``currency_column_to_numeric``.\n\n It is intended to be used in a pandas `apply` method.\n\n :returns: An object with a cleaned column.\n \"\"\"\n y = x.strip()\n y = y.replace(\",\", \"\")\n y = y.replace(\")\", \"\")\n y = y.replace(\"(\", \"-\")\n if y == \"-\":\n return 0.00\n return float(y)\n\n\ndef _currency_column_to_numeric(x, cast_non_numeric=None) -> str:\n \"\"\"\n Perform logic for changing cell values.\n\n This is a private function intended to be used only in\n ``currency_column_to_numeric``.\n\n It is intended to be used in a pandas `apply` method, after being passed\n through `partial`.\n \"\"\"\n acceptable_currency_characters = {\n \"-\",\n \".\",\n \"1\",\n \"2\",\n \"3\",\n \"4\",\n \"5\",\n \"6\",\n \"7\",\n \"8\",\n \"9\",\n \"0\",\n }\n if len(x) == 0:\n return \"ORIGINAL_NA\"\n\n if cast_non_numeric:\n if x in cast_non_numeric.keys():\n check(\n \"{%r: %r}\" % (x, str(cast_non_numeric[x])),\n cast_non_numeric[x],\n [int, float],\n )\n return cast_non_numeric[x]\n else:\n return \"\".join(i for i in x if i in acceptable_currency_characters)\n else:\n return \"\".join(i for i in x if i in acceptable_currency_characters)\n\n\ndef _replace_empty_string_with_none(x):\n if isinstance(x, int):\n return x\n\n elif isinstance(x, float):\n return x\n\n elif len(x):\n return x\n\n\ndef _replace_original_empty_string_with_none(x):\n if x != \"ORIGINAL_NA\":\n return x\n\n\ndef _strip_underscores(\n df: pd.DataFrame, strip_underscores: Union[str, bool] = None\n) -> pd.DataFrame:\n \"\"\"\n Strip underscores from DataFrames column names.\n\n Underscores can be stripped from the beginning, end or both.\n\n .. code-block:: python\n\n df = _strip_underscores(df, strip_underscores='left')\n\n :param df: The pandas DataFrame object.\n :param strip_underscores: (optional) Removes the outer underscores from all\n column names. Default None keeps outer underscores. 
Values can be\n either 'left', 'right' or 'both' or the respective shorthand 'l', 'r'\n and True.\n :returns: A pandas DataFrame with underscores removed.\n \"\"\"\n\n df = df.rename(\n columns=lambda x: _strip_underscores_func(x, strip_underscores)\n )\n return df\n\n\ndef _strip_underscores_func(\n col: str, strip_underscores: Union[str, bool] = None\n) -> pd.DataFrame:\n \"\"\"Strip underscores from a string.\"\"\"\n underscore_options = [None, \"left\", \"right\", \"both\", \"l\", \"r\", True]\n if strip_underscores not in underscore_options:\n raise JanitorError(\n f\"strip_underscores must be one of: {underscore_options}\"\n )\n\n if strip_underscores in [\"left\", \"l\"]:\n col = col.lstrip(\"_\")\n elif strip_underscores in [\"right\", \"r\"]:\n col = col.rstrip(\"_\")\n elif strip_underscores == \"both\" or strip_underscores is True:\n col = col.strip(\"_\")\n return col\n\n\ndef import_message(submodule: str, package: str, installation: str):\n \"\"\"\n Generic message for indicating to the user when a function relies on an\n optional module / package that is not currently installed. Includes\n installation instructions. Used in `chemistry.py` and `biology.py`.\n\n :param submodule: pyjanitor submodule that needs an external dependency.\n :param package: External package this submodule relies on.\n :param installation: Command to execute in the environment to install\n the package.\n \"\"\"\n\n print(\n f\"To use the janitor submodule {submodule}, you need to install \"\n f\"{package}.\"\n )\n print()\n print(f\"To do so, use the following command:\")\n print()\n print(f\" {installation}\")\n\n\ndef idempotent(func: Callable, df: pd.DataFrame, *args, **kwargs):\n \"\"\"\n Raises error if a function operating on a `DataFrame` is not idempotent,\n that is, `func(func(df)) = func(df)` is not true for all `df`.\n\n :param func: A python method.\n :param df: A pandas `DataFrame`.\n :param args: Positional arguments supplied to the method.\n :param kwargs: Keyword arguments supplied to the method.\n :raises ValueError: If `func` is found to not be idempotent for the given\n `DataFrame` `df`.\n \"\"\"\n if not func(df, *args, **kwargs) == func(\n func(df, *args, **kwargs), *args, **kwargs\n ):\n raise ValueError(\n \"Supplied function is not idempotent for the given \" \"DataFrame.\"\n )\n\n\ndef deprecated_alias(**aliases) -> Callable:\n \"\"\"\n Used as a decorator when deprecating old function argument names, while\n keeping backwards compatibility.\n\n Implementation is inspired from `StackOverflow`_.\n\n .. _StackOverflow: https://stackoverflow.com/questions/49802412/how-to-implement-deprecation-in-python-with-argument-alias # noqa: E501\n\n Functional usage example:\n\n .. code-block:: python\n\n @deprecated_alias(a='alpha', b='beta')\n def simple_sum(alpha, beta):\n return alpha + beta\n\n :param aliases: Dictionary of aliases for a function's arguments.\n :return: Your original function wrapped with the kwarg redirection\n function.\n \"\"\"\n\n def decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n rename_kwargs(func.__name__, kwargs, aliases)\n return func(*args, **kwargs)\n\n return wrapper\n\n return decorator\n\n\ndef rename_kwargs(func_name: str, kwargs: Dict, aliases: Dict):\n \"\"\"\n Used to update deprecated argument names with new names. Throws a\n TypeError if both arguments are provided, and warns if old alias is used.\n\n Implementation is inspired from `StackOverflow`_.\n\n .. 
_StackOverflow: https://stackoverflow.com/questions/49802412/how-to-implement-deprecation-in-python-with-argument-alias # noqa: E501\n\n :param func_name: name of decorated function.\n :param kwargs: Arguments supplied to the method.\n :param aliases: Dictionary of aliases for a function's arguments.\n :return: Nothing; the passed `kwargs` are modified directly.\n \"\"\"\n for old_alias, new_alias in aliases.items():\n if old_alias in kwargs:\n if new_alias in kwargs:\n raise TypeError(\n f\"{func_name} received both {old_alias} and {new_alias}\"\n )\n warnings.warn(\n f\"{old_alias} is deprecated; use {new_alias}\",\n DeprecationWarning,\n )\n kwargs[new_alias] = kwargs.pop(old_alias)\n\n\ndef check_column(\n df: pd.DataFrame, old_column_names: List, present: bool = True\n):\n \"\"\"\n One-liner syntactic sugar for checking the presence or absence of a column.\n\n Should be used like this::\n\n check(df, ['a', 'b'], present=True)\n\n :param df: The name of the variable.\n :param old_column_names: A list of column names we want to check to see if\n present (or absent) in df.\n :param present: If True (default), checks to see if all of old_column_names\n are in df.columns. If False, checks that none of old_column_names are\n in df.columns.\n :returns: ValueError if data is not the expected type.\n \"\"\"\n for column_name in old_column_names:\n if present:\n if column_name not in df.columns:\n raise ValueError(\n f\"{column_name} not present in dataframe columns!\"\n )\n else: # Tests for exclusion\n if column_name in df.columns:\n raise ValueError(\n f\"{column_name} already present in dataframe columns!\"\n )\n", "path": "janitor/utils.py"}], "after_files": [{"content": "\"\"\" Miscellaneous internal PyJanitor helper functions. \"\"\"\n\nimport functools\nimport warnings\nfrom typing import Callable, Dict, List, Union\n\nimport numpy as np\nimport pandas as pd\n\nfrom .errors import JanitorError\n\n\ndef check(varname: str, value, expected_types: list):\n \"\"\"\n One-liner syntactic sugar for checking types.\n\n Should be used like this::\n\n check('x', x, [int, float])\n\n :param varname: The name of the variable.\n :param value: The value of the varname.\n :param expected_types: The types we expect the item to be.\n :returns: TypeError if data is not the expected type.\n \"\"\"\n is_expected_type = False\n for t in expected_types:\n if isinstance(value, t):\n is_expected_type = True\n break\n\n if not is_expected_type:\n raise TypeError(\n \"{varname} should be one of {expected_types}\".format(\n varname=varname, expected_types=expected_types\n )\n )\n\n\ndef _clean_accounting_column(x: str) -> float:\n \"\"\"\n Perform the logic for the `cleaning_style == \"accounting\"` attribute.\n\n This is a private function, not intended to be used outside of\n ``currency_column_to_numeric``.\n\n It is intended to be used in a pandas `apply` method.\n\n :returns: An object with a cleaned column.\n \"\"\"\n y = x.strip()\n y = y.replace(\",\", \"\")\n y = y.replace(\")\", \"\")\n y = y.replace(\"(\", \"-\")\n if y == \"-\":\n return 0.00\n return float(y)\n\n\ndef _currency_column_to_numeric(x, cast_non_numeric=None) -> str:\n \"\"\"\n Perform logic for changing cell values.\n\n This is a private function intended to be used only in\n ``currency_column_to_numeric``.\n\n It is intended to be used in a pandas `apply` method, after being passed\n through `partial`.\n \"\"\"\n acceptable_currency_characters = {\n \"-\",\n \".\",\n \"1\",\n \"2\",\n \"3\",\n \"4\",\n \"5\",\n \"6\",\n \"7\",\n \"8\",\n 
\"9\",\n \"0\",\n }\n if len(x) == 0:\n return \"ORIGINAL_NA\"\n\n if cast_non_numeric:\n if x in cast_non_numeric.keys():\n check(\n \"{%r: %r}\" % (x, str(cast_non_numeric[x])),\n cast_non_numeric[x],\n [int, float],\n )\n return cast_non_numeric[x]\n else:\n return \"\".join(i for i in x if i in acceptable_currency_characters)\n else:\n return \"\".join(i for i in x if i in acceptable_currency_characters)\n\n\ndef _replace_empty_string_with_none(x):\n if isinstance(x, int):\n return x\n\n elif isinstance(x, float):\n return x\n\n elif len(x):\n return x\n\n\ndef _replace_original_empty_string_with_none(x):\n if x != \"ORIGINAL_NA\":\n return x\n\n\ndef _strip_underscores(\n df: pd.DataFrame, strip_underscores: Union[str, bool] = None\n) -> pd.DataFrame:\n \"\"\"\n Strip underscores from DataFrames column names.\n\n Underscores can be stripped from the beginning, end or both.\n\n .. code-block:: python\n\n df = _strip_underscores(df, strip_underscores='left')\n\n :param df: The pandas DataFrame object.\n :param strip_underscores: (optional) Removes the outer underscores from all\n column names. Default None keeps outer underscores. Values can be\n either 'left', 'right' or 'both' or the respective shorthand 'l', 'r'\n and True.\n :returns: A pandas DataFrame with underscores removed.\n \"\"\"\n\n df = df.rename(\n columns=lambda x: _strip_underscores_func(x, strip_underscores)\n )\n return df\n\n\ndef _strip_underscores_func(\n col: str, strip_underscores: Union[str, bool] = None\n) -> pd.DataFrame:\n \"\"\"Strip underscores from a string.\"\"\"\n underscore_options = [None, \"left\", \"right\", \"both\", \"l\", \"r\", True]\n if strip_underscores not in underscore_options:\n raise JanitorError(\n f\"strip_underscores must be one of: {underscore_options}\"\n )\n\n if strip_underscores in [\"left\", \"l\"]:\n col = col.lstrip(\"_\")\n elif strip_underscores in [\"right\", \"r\"]:\n col = col.rstrip(\"_\")\n elif strip_underscores == \"both\" or strip_underscores is True:\n col = col.strip(\"_\")\n return col\n\n\ndef import_message(submodule: str, package: str, installation: str):\n \"\"\"\n Generic message for indicating to the user when a function relies on an\n optional module / package that is not currently installed. Includes\n installation instructions. 
Used in `chemistry.py` and `biology.py`.\n\n :param submodule: pyjanitor submodule that needs an external dependency.\n :param package: External package this submodule relies on.\n :param installation: Command to execute in the environment to install\n the package.\n \"\"\"\n\n print(\n f\"To use the janitor submodule {submodule}, you need to install \"\n f\"{package}.\"\n )\n print()\n print(f\"To do so, use the following command:\")\n print()\n print(f\" {installation}\")\n\n\ndef idempotent(func: Callable, df: pd.DataFrame, *args, **kwargs):\n \"\"\"\n Raises error if a function operating on a `DataFrame` is not idempotent,\n that is, `func(func(df)) = func(df)` is not true for all `df`.\n\n :param func: A python method.\n :param df: A pandas `DataFrame`.\n :param args: Positional arguments supplied to the method.\n :param kwargs: Keyword arguments supplied to the method.\n :raises ValueError: If `func` is found to not be idempotent for the given\n `DataFrame` `df`.\n \"\"\"\n if not func(df, *args, **kwargs) == func(\n func(df, *args, **kwargs), *args, **kwargs\n ):\n raise ValueError(\n \"Supplied function is not idempotent for the given \" \"DataFrame.\"\n )\n\n\ndef deprecated_alias(**aliases) -> Callable:\n \"\"\"\n Used as a decorator when deprecating old function argument names, while\n keeping backwards compatibility.\n\n Implementation is inspired from `StackOverflow`_.\n\n .. _StackOverflow: https://stackoverflow.com/questions/49802412/how-to-implement-deprecation-in-python-with-argument-alias # noqa: E501\n\n Functional usage example:\n\n .. code-block:: python\n\n @deprecated_alias(a='alpha', b='beta')\n def simple_sum(alpha, beta):\n return alpha + beta\n\n :param aliases: Dictionary of aliases for a function's arguments.\n :return: Your original function wrapped with the kwarg redirection\n function.\n \"\"\"\n\n def decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n rename_kwargs(func.__name__, kwargs, aliases)\n return func(*args, **kwargs)\n\n return wrapper\n\n return decorator\n\n\ndef rename_kwargs(func_name: str, kwargs: Dict, aliases: Dict):\n \"\"\"\n Used to update deprecated argument names with new names. Throws a\n TypeError if both arguments are provided, and warns if old alias is used.\n\n Implementation is inspired from `StackOverflow`_.\n\n .. _StackOverflow: https://stackoverflow.com/questions/49802412/how-to-implement-deprecation-in-python-with-argument-alias # noqa: E501\n\n :param func_name: name of decorated function.\n :param kwargs: Arguments supplied to the method.\n :param aliases: Dictionary of aliases for a function's arguments.\n :return: Nothing; the passed `kwargs` are modified directly.\n \"\"\"\n for old_alias, new_alias in aliases.items():\n if old_alias in kwargs:\n if new_alias in kwargs:\n raise TypeError(\n f\"{func_name} received both {old_alias} and {new_alias}\"\n )\n warnings.warn(\n f\"{old_alias} is deprecated; use {new_alias}\",\n DeprecationWarning,\n )\n kwargs[new_alias] = kwargs.pop(old_alias)\n\n\ndef check_column(\n df: pd.DataFrame, old_column_names: List, present: bool = True\n):\n \"\"\"\n One-liner syntactic sugar for checking the presence or absence of a column.\n\n Should be used like this::\n\n check(df, ['a', 'b'], present=True)\n\n :param df: The name of the variable.\n :param old_column_names: A list of column names we want to check to see if\n present (or absent) in df.\n :param present: If True (default), checks to see if all of old_column_names\n are in df.columns. 
If False, checks that none of old_column_names are\n in df.columns.\n :returns: ValueError if data is not the expected type.\n \"\"\"\n for column_name in old_column_names:\n if present:\n if column_name not in df.columns:\n raise ValueError(\n f\"{column_name} not present in dataframe columns!\"\n )\n else: # Tests for exclusion\n if column_name in df.columns:\n raise ValueError(\n f\"{column_name} already present in dataframe columns!\"\n )\n\n\ndef skipna(f: Callable) -> Callable:\n \"\"\"\n Decorator for escaping np.nan and None in a function\n\n Should be used like this::\n\n df[column].apply(skipna(transform))\n\n or::\n\n @skipna\n def transform(x):\n pass\n\n :param f: the function to be wrapped\n :returns: _wrapped, the wrapped function\n \"\"\"\n\n def _wrapped(x, *args, **kwargs):\n if (type(x) is float and np.isnan(x)) or x is None:\n return np.nan\n else:\n return f(x, *args, **kwargs)\n\n return _wrapped\n\n\ndef skiperror(\n f: Callable, return_x: bool = False, return_val=np.nan\n) -> Callable:\n \"\"\"\n Decorator for escaping errors in a function\n\n Should be used like this::\n\n df[column].apply(\n skiperror(transform, return_val=3, return_x=False))\n\n or::\n\n @skiperror(return_val=3, return_x=False)\n def transform(x):\n pass\n\n :param f: the function to be wrapped\n :param return_x: whether or not the original value that caused error\n should be returned\n :param return_val: the value to be returned when an error hits.\n Ignored if return_x is True\n :returns: _wrapped, the wrapped function\n \"\"\"\n\n def _wrapped(x, *args, **kwargs):\n try:\n return f(x, *args, **kwargs)\n except Exception:\n if return_x:\n return x\n return return_val\n\n return _wrapped\n", "path": "janitor/utils.py"}]}
3,394
512
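The janitor/utils.py listing in the verification_info above centers on the `skipna` and `skiperror` decorators. Below is a minimal, self-contained sketch of how those two wrappers behave when applied element-wise to a pandas column; it mirrors the decorator bodies shown in the listing but is an illustration, not the pyjanitor source, and the sample Series values are made up:

```python
import numpy as np
import pandas as pd


def skipna(f):
    # Mirror of the decorator shown above: NaN/None bypasses f entirely.
    def _wrapped(x, *args, **kwargs):
        if (type(x) is float and np.isnan(x)) or x is None:
            return np.nan
        return f(x, *args, **kwargs)
    return _wrapped


def skiperror(f, return_x=False, return_val=np.nan):
    # Mirror of the decorator shown above: exceptions from f are swallowed.
    def _wrapped(x, *args, **kwargs):
        try:
            return f(x, *args, **kwargs)
        except Exception:
            return x if return_x else return_val
    return _wrapped


s = pd.Series(["1", None, "oops"])
print(s.apply(skipna(str.upper)))              # None -> NaN, strings uppercased
print(s.apply(skiperror(int, return_val=-1)))  # unparseable "oops" -> -1
```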
gh_patches_debug_18493
rasdani/github-patches
git_diff
getsentry__sentry-python-1868
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- "ImportError: cannot import name 'StarletteIntegration' from 'sentry_sdk.integrations.starlette'" after sentry version update ### How do you use Sentry? Sentry Saas (sentry.io) ### Version 1.9.7 ### Steps to Reproduce It only happens every once in a while so it is not easy to reproduce. Here is my `sentry.py` (see exception trace below): ``` import functools import inspect import os from typing import Callable import sentry_sdk from archimedes_data_fetcher.config import config def initialize(): """Initializes sentry""" sentry_sdk.init( config.get("SENTRY_DSN"), environment=config.get("ENVIRONMENT"), ) ... ... ... ``` Here is the output of `pip list` on the failing container: ``` Package Version ----------------------- --------- affine 2.3.1 archimedes-data-fetcher 0.5.19 archimedes-flow-utils 1.0.2 attrs 22.1.0 azure-core 1.25.1 azure-storage-blob 12.13.1 cachetools 5.2.0 certifi 2022.6.15 cffi 1.15.1 cftime 1.6.1 charset-normalizer 2.1.1 click 8.1.3 click-plugins 1.1.1 cligj 0.7.2 cloudpickle 2.1.0 croniter 1.3.5 cryptography 37.0.4 dask 2022.8.1 datadog 0.44.0 datadog-api-client 2.3.0 distributed 2022.8.1 docker 6.0.0 fsspec 2022.8.2 gitdb 4.0.9 GitPython 3.1.27 HeapDict 1.0.1 idna 3.3 importlib-resources 5.9.0 investpy 1.0.8 isodate 0.6.1 Jinja2 3.1.2 locket 1.0.0 lxml 4.9.1 MarkupSafe 2.1.1 marshmallow 3.17.1 marshmallow-oneofschema 3.0.1 minio 7.1.11 msgpack 1.0.4 msrest 0.7.1 mypy-extensions 0.4.3 netCDF4 1.6.0 numpy 1.23.2 oauthlib 3.2.0 packaging 21.3 pandas 1.4.4 partd 1.3.0 pendulum 2.1.2 pip 22.2.2 prefect 1.3.0 psutil 5.9.1 psycopg 3.1 pycparser 2.21 pyparsing 3.0.9 pyproj 3.3.1 python-box 6.0.2 python-dateutil 2.8.2 python-dotenv 0.20.0 python-slugify 6.1.2 pytz 2022.2.1 pytzdata 2020.1 PyYAML 6.0 rasterio 1.3.2 requests 2.28.1 requests-oauthlib 1.3.1 rioxarray 0.10.3 sentry-sdk 1.9.7 setuptools 65.3.0 six 1.16.0 smmap 5.0.0 snuggs 1.4.7 sortedcontainers 2.4.0 tabulate 0.8.10 tblib 1.7.0 text-unidecode 1.3 toml 0.10.2 toolz 0.12.0 tornado 6.1 typing_extensions 4.3.0 Unidecode 1.3.4 urllib3 1.26.12 websocket-client 1.4.0 wheel 0.37.1 xarray 2022.6.0 zict 2.2.0 zipp 3.8.1 ``` This has not happened when I pin `sentry-sdk` version to `1.9.5`, so I think it must have been introduced in either `1.9.6` or `1.9.7`. ### Expected Result No crash on sentry initialization. ### Actual Result ``` Task 'store[1]': Exception encountered during task execution! 
Traceback (most recent call last): File "/app/.venv/lib/python3.9/site-packages/prefect/engine/task_runner.py", line 880, in get_task_run_state value = prefect.utilities.executors.run_task_with_timeout( File "/app/.venv/lib/python3.9/site-packages/prefect/utilities/executors.py", line 468, in run_task_with_timeout return task.run(*args, **kwargs) # type: ignore File "/home/runner/work/archimedes-data-fetcher/archimedes-data-fetcher/archimedes_data_fetcher/sentry.py", line 36, in wrapper File "/app/archimedes_data_fetcher/sentry.py", line 13, in initialize sentry_sdk.init( File "/app/.venv/lib/python3.9/site-packages/sentry_sdk/hub.py", line 119, in _init client = Client(*args, **kwargs) # type: ignore File "/app/.venv/lib/python3.9/site-packages/sentry_sdk/client.py", line 89, in __init__ self._init_impl() File "/app/.venv/lib/python3.9/site-packages/sentry_sdk/client.py", line 123, in _init_impl self.integrations = setup_integrations( File "/app/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/__init__.py", line 109, in setup_integrations for integration_cls in iter_default_integrations( File "/app/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/__init__.py", line 41, in iter_default_integrations yield getattr(import_module(module), cls) File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/app/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/fastapi.py", line 4, in <module> from sentry_sdk.integrations.starlette import ( ImportError: cannot import name 'StarletteIntegration' from 'sentry_sdk.integrations.starlette' (/app/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/starlette.py) ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `sentry_sdk/integrations/fastapi.py` Content: ``` 1 import asyncio 2 3 from sentry_sdk._types import MYPY 4 from sentry_sdk.hub import Hub, _should_send_default_pii 5 from sentry_sdk.integrations import DidNotEnable 6 from sentry_sdk.integrations.starlette import ( 7 StarletteIntegration, 8 StarletteRequestExtractor, 9 ) 10 from sentry_sdk.tracing import SOURCE_FOR_STYLE, TRANSACTION_SOURCE_ROUTE 11 from sentry_sdk.utils import transaction_from_function 12 13 if MYPY: 14 from typing import Any, Callable, Dict 15 16 from sentry_sdk.scope import Scope 17 18 try: 19 import fastapi # type: ignore 20 except ImportError: 21 raise DidNotEnable("FastAPI is not installed") 22 23 24 _DEFAULT_TRANSACTION_NAME = "generic FastAPI request" 25 26 27 class FastApiIntegration(StarletteIntegration): 28 identifier = "fastapi" 29 30 @staticmethod 31 def setup_once(): 32 # type: () -> None 33 patch_get_request_handler() 34 35 36 def _set_transaction_name_and_source(scope, transaction_style, request): 37 # type: (Scope, str, Any) -> None 38 name = "" 39 40 if transaction_style == "endpoint": 41 endpoint = request.scope.get("endpoint") 42 if endpoint: 43 name = transaction_from_function(endpoint) or "" 44 45 elif transaction_style == "url": 46 route = request.scope.get("route") 47 if route: 48 path = getattr(route, "path", None) 49 if path is not None: 50 name = path 51 52 if not name: 53 name = _DEFAULT_TRANSACTION_NAME 54 source = TRANSACTION_SOURCE_ROUTE 55 else: 56 source = SOURCE_FOR_STYLE[transaction_style] 57 58 scope.set_transaction_name(name, source=source) 59 60 61 def patch_get_request_handler(): 62 # type: () -> None 63 old_get_request_handler = fastapi.routing.get_request_handler 64 65 def _sentry_get_request_handler(*args, **kwargs): 66 # type: (*Any, **Any) -> Any 67 dependant = kwargs.get("dependant") 68 if ( 69 dependant 70 and dependant.call is not None 71 and not asyncio.iscoroutinefunction(dependant.call) 72 ): 73 old_call = dependant.call 74 75 def _sentry_call(*args, **kwargs): 76 # type: (*Any, **Any) -> Any 77 hub = Hub.current 78 with hub.configure_scope() as sentry_scope: 79 if sentry_scope.profile is not None: 80 sentry_scope.profile.update_active_thread_id() 81 return old_call(*args, **kwargs) 82 83 dependant.call = _sentry_call 84 85 old_app = old_get_request_handler(*args, **kwargs) 86 87 async def _sentry_app(*args, **kwargs): 88 # type: (*Any, **Any) -> Any 89 hub = Hub.current 90 integration = hub.get_integration(FastApiIntegration) 91 if integration is None: 92 return await old_app(*args, **kwargs) 93 94 with hub.configure_scope() as sentry_scope: 95 request = args[0] 96 97 _set_transaction_name_and_source( 98 sentry_scope, integration.transaction_style, request 99 ) 100 101 extractor = StarletteRequestExtractor(request) 102 info = await extractor.extract_request_info() 103 104 def _make_request_event_processor(req, integration): 105 # type: (Any, Any) -> Callable[[Dict[str, Any], Dict[str, Any]], Dict[str, Any]] 106 def event_processor(event, hint): 107 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any] 108 109 # Extract information from request 110 request_info = event.get("request", {}) 111 if info: 112 if "cookies" in info and _should_send_default_pii(): 113 request_info["cookies"] = info["cookies"] 114 if "data" in info: 115 request_info["data"] = info["data"] 116 event["request"] = request_info 117 118 return event 119 120 return event_processor 121 122 sentry_scope._name = FastApiIntegration.identifier 123 
sentry_scope.add_event_processor( 124 _make_request_event_processor(request, integration) 125 ) 126 127 return await old_app(*args, **kwargs) 128 129 return _sentry_app 130 131 fastapi.routing.get_request_handler = _sentry_get_request_handler 132 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/sentry_sdk/integrations/fastapi.py b/sentry_sdk/integrations/fastapi.py --- a/sentry_sdk/integrations/fastapi.py +++ b/sentry_sdk/integrations/fastapi.py @@ -3,18 +3,21 @@ from sentry_sdk._types import MYPY from sentry_sdk.hub import Hub, _should_send_default_pii from sentry_sdk.integrations import DidNotEnable -from sentry_sdk.integrations.starlette import ( - StarletteIntegration, - StarletteRequestExtractor, -) from sentry_sdk.tracing import SOURCE_FOR_STYLE, TRANSACTION_SOURCE_ROUTE from sentry_sdk.utils import transaction_from_function if MYPY: from typing import Any, Callable, Dict - from sentry_sdk.scope import Scope +try: + from sentry_sdk.integrations.starlette import ( + StarletteIntegration, + StarletteRequestExtractor, + ) +except DidNotEnable: + raise DidNotEnable("Starlette is not installed") + try: import fastapi # type: ignore except ImportError:
{"golden_diff": "diff --git a/sentry_sdk/integrations/fastapi.py b/sentry_sdk/integrations/fastapi.py\n--- a/sentry_sdk/integrations/fastapi.py\n+++ b/sentry_sdk/integrations/fastapi.py\n@@ -3,18 +3,21 @@\n from sentry_sdk._types import MYPY\n from sentry_sdk.hub import Hub, _should_send_default_pii\n from sentry_sdk.integrations import DidNotEnable\n-from sentry_sdk.integrations.starlette import (\n- StarletteIntegration,\n- StarletteRequestExtractor,\n-)\n from sentry_sdk.tracing import SOURCE_FOR_STYLE, TRANSACTION_SOURCE_ROUTE\n from sentry_sdk.utils import transaction_from_function\n \n if MYPY:\n from typing import Any, Callable, Dict\n-\n from sentry_sdk.scope import Scope\n \n+try:\n+ from sentry_sdk.integrations.starlette import (\n+ StarletteIntegration,\n+ StarletteRequestExtractor,\n+ )\n+except DidNotEnable:\n+ raise DidNotEnable(\"Starlette is not installed\")\n+\n try:\n import fastapi # type: ignore\n except ImportError:\n", "issue": "\"ImportError: cannot import name 'StarletteIntegration' from 'sentry_sdk.integrations.starlette'\" after sentry version update\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n1.9.7\r\n\r\n### Steps to Reproduce\r\n\r\nIt only happens every once in a while so it is not easy to reproduce.\r\n\r\nHere is my `sentry.py` (see exception trace below):\r\n\r\n```\r\nimport functools\r\nimport inspect\r\nimport os\r\nfrom typing import Callable\r\n\r\nimport sentry_sdk\r\n\r\nfrom archimedes_data_fetcher.config import config\r\n\r\n\r\ndef initialize():\r\n \"\"\"Initializes sentry\"\"\"\r\n sentry_sdk.init(\r\n config.get(\"SENTRY_DSN\"),\r\n environment=config.get(\"ENVIRONMENT\"),\r\n )\r\n...\r\n...\r\n...\r\n```\r\n\r\nHere is the output of `pip list` on the failing container:\r\n\r\n```\r\nPackage Version\r\n----------------------- ---------\r\naffine 2.3.1\r\narchimedes-data-fetcher 0.5.19\r\narchimedes-flow-utils 1.0.2\r\nattrs 22.1.0\r\nazure-core 1.25.1\r\nazure-storage-blob 12.13.1\r\ncachetools 5.2.0\r\ncertifi 2022.6.15\r\ncffi 1.15.1\r\ncftime 1.6.1\r\ncharset-normalizer 2.1.1\r\nclick 8.1.3\r\nclick-plugins 1.1.1\r\ncligj 0.7.2\r\ncloudpickle 2.1.0\r\ncroniter 1.3.5\r\ncryptography 37.0.4\r\ndask 2022.8.1\r\ndatadog 0.44.0\r\ndatadog-api-client 2.3.0\r\ndistributed 2022.8.1\r\ndocker 6.0.0\r\nfsspec 2022.8.2\r\ngitdb 4.0.9\r\nGitPython 3.1.27\r\nHeapDict 1.0.1\r\nidna 3.3\r\nimportlib-resources 5.9.0\r\ninvestpy 1.0.8\r\nisodate 0.6.1\r\nJinja2 3.1.2\r\nlocket 1.0.0\r\nlxml 4.9.1\r\nMarkupSafe 2.1.1\r\nmarshmallow 3.17.1\r\nmarshmallow-oneofschema 3.0.1\r\nminio 7.1.11\r\nmsgpack 1.0.4\r\nmsrest 0.7.1\r\nmypy-extensions 0.4.3\r\nnetCDF4 1.6.0\r\nnumpy 1.23.2\r\noauthlib 3.2.0\r\npackaging 21.3\r\npandas 1.4.4\r\npartd 1.3.0\r\npendulum 2.1.2\r\npip 22.2.2\r\nprefect 1.3.0\r\npsutil 5.9.1\r\npsycopg 3.1\r\npycparser 2.21\r\npyparsing 3.0.9\r\npyproj 3.3.1\r\npython-box 6.0.2\r\npython-dateutil 2.8.2\r\npython-dotenv 0.20.0\r\npython-slugify 6.1.2\r\npytz 2022.2.1\r\npytzdata 2020.1\r\nPyYAML 6.0\r\nrasterio 1.3.2\r\nrequests 2.28.1\r\nrequests-oauthlib 1.3.1\r\nrioxarray 0.10.3\r\nsentry-sdk 1.9.7\r\nsetuptools 65.3.0\r\nsix 1.16.0\r\nsmmap 5.0.0\r\nsnuggs 1.4.7\r\nsortedcontainers 2.4.0\r\ntabulate 0.8.10\r\ntblib 1.7.0\r\ntext-unidecode 1.3\r\ntoml 0.10.2\r\ntoolz 0.12.0\r\ntornado 6.1\r\ntyping_extensions 4.3.0\r\nUnidecode 1.3.4\r\nurllib3 1.26.12\r\nwebsocket-client 1.4.0\r\nwheel 0.37.1\r\nxarray 2022.6.0\r\nzict 2.2.0\r\nzipp 3.8.1\r\n```\r\n\r\nThis has not happened when I pin `sentry-sdk` 
version to `1.9.5`, so I think it must have been introduced in either `1.9.6` or `1.9.7`.\r\n\r\n### Expected Result\r\n\r\nNo crash on sentry initialization.\r\n\r\n### Actual Result\r\n\r\n```\r\nTask 'store[1]': Exception encountered during task execution!\r\nTraceback (most recent call last):\r\n File \"/app/.venv/lib/python3.9/site-packages/prefect/engine/task_runner.py\", line 880, in get_task_run_state\r\n value = prefect.utilities.executors.run_task_with_timeout(\r\n File \"/app/.venv/lib/python3.9/site-packages/prefect/utilities/executors.py\", line 468, in run_task_with_timeout\r\n return task.run(*args, **kwargs) # type: ignore\r\n File \"/home/runner/work/archimedes-data-fetcher/archimedes-data-fetcher/archimedes_data_fetcher/sentry.py\", line 36, in wrapper\r\n File \"/app/archimedes_data_fetcher/sentry.py\", line 13, in initialize\r\n sentry_sdk.init(\r\n File \"/app/.venv/lib/python3.9/site-packages/sentry_sdk/hub.py\", line 119, in _init\r\n client = Client(*args, **kwargs) # type: ignore\r\n File \"/app/.venv/lib/python3.9/site-packages/sentry_sdk/client.py\", line 89, in __init__\r\n self._init_impl()\r\n File \"/app/.venv/lib/python3.9/site-packages/sentry_sdk/client.py\", line 123, in _init_impl\r\n self.integrations = setup_integrations(\r\n File \"/app/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/__init__.py\", line 109, in setup_integrations\r\n for integration_cls in iter_default_integrations(\r\n File \"/app/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/__init__.py\", line 41, in iter_default_integrations\r\n yield getattr(import_module(module), cls)\r\n File \"/usr/local/lib/python3.9/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\n File \"/app/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/fastapi.py\", line 4, in <module>\r\n from sentry_sdk.integrations.starlette import (\r\nImportError: cannot import name 'StarletteIntegration' from 'sentry_sdk.integrations.starlette' (/app/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/starlette.py)\r\n```\r\n\n", "before_files": [{"content": "import asyncio\n\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.integrations import DidNotEnable\nfrom sentry_sdk.integrations.starlette import (\n StarletteIntegration,\n StarletteRequestExtractor,\n)\nfrom sentry_sdk.tracing import SOURCE_FOR_STYLE, TRANSACTION_SOURCE_ROUTE\nfrom sentry_sdk.utils import transaction_from_function\n\nif MYPY:\n from typing import Any, Callable, Dict\n\n from sentry_sdk.scope import Scope\n\ntry:\n import fastapi # type: ignore\nexcept ImportError:\n raise DidNotEnable(\"FastAPI is not installed\")\n\n\n_DEFAULT_TRANSACTION_NAME = \"generic FastAPI request\"\n\n\nclass FastApiIntegration(StarletteIntegration):\n identifier = \"fastapi\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_get_request_handler()\n\n\ndef _set_transaction_name_and_source(scope, transaction_style, request):\n # type: (Scope, str, Any) 
-> None\n name = \"\"\n\n if transaction_style == \"endpoint\":\n endpoint = request.scope.get(\"endpoint\")\n if endpoint:\n name = transaction_from_function(endpoint) or \"\"\n\n elif transaction_style == \"url\":\n route = request.scope.get(\"route\")\n if route:\n path = getattr(route, \"path\", None)\n if path is not None:\n name = path\n\n if not name:\n name = _DEFAULT_TRANSACTION_NAME\n source = TRANSACTION_SOURCE_ROUTE\n else:\n source = SOURCE_FOR_STYLE[transaction_style]\n\n scope.set_transaction_name(name, source=source)\n\n\ndef patch_get_request_handler():\n # type: () -> None\n old_get_request_handler = fastapi.routing.get_request_handler\n\n def _sentry_get_request_handler(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n dependant = kwargs.get(\"dependant\")\n if (\n dependant\n and dependant.call is not None\n and not asyncio.iscoroutinefunction(dependant.call)\n ):\n old_call = dependant.call\n\n def _sentry_call(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n hub = Hub.current\n with hub.configure_scope() as sentry_scope:\n if sentry_scope.profile is not None:\n sentry_scope.profile.update_active_thread_id()\n return old_call(*args, **kwargs)\n\n dependant.call = _sentry_call\n\n old_app = old_get_request_handler(*args, **kwargs)\n\n async def _sentry_app(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n hub = Hub.current\n integration = hub.get_integration(FastApiIntegration)\n if integration is None:\n return await old_app(*args, **kwargs)\n\n with hub.configure_scope() as sentry_scope:\n request = args[0]\n\n _set_transaction_name_and_source(\n sentry_scope, integration.transaction_style, request\n )\n\n extractor = StarletteRequestExtractor(request)\n info = await extractor.extract_request_info()\n\n def _make_request_event_processor(req, integration):\n # type: (Any, Any) -> Callable[[Dict[str, Any], Dict[str, Any]], Dict[str, Any]]\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n\n # Extract information from request\n request_info = event.get(\"request\", {})\n if info:\n if \"cookies\" in info and _should_send_default_pii():\n request_info[\"cookies\"] = info[\"cookies\"]\n if \"data\" in info:\n request_info[\"data\"] = info[\"data\"]\n event[\"request\"] = request_info\n\n return event\n\n return event_processor\n\n sentry_scope._name = FastApiIntegration.identifier\n sentry_scope.add_event_processor(\n _make_request_event_processor(request, integration)\n )\n\n return await old_app(*args, **kwargs)\n\n return _sentry_app\n\n fastapi.routing.get_request_handler = _sentry_get_request_handler\n", "path": "sentry_sdk/integrations/fastapi.py"}], "after_files": [{"content": "import asyncio\n\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.integrations import DidNotEnable\nfrom sentry_sdk.tracing import SOURCE_FOR_STYLE, TRANSACTION_SOURCE_ROUTE\nfrom sentry_sdk.utils import transaction_from_function\n\nif MYPY:\n from typing import Any, Callable, Dict\n from sentry_sdk.scope import Scope\n\ntry:\n from sentry_sdk.integrations.starlette import (\n StarletteIntegration,\n StarletteRequestExtractor,\n )\nexcept DidNotEnable:\n raise DidNotEnable(\"Starlette is not installed\")\n\ntry:\n import fastapi # type: ignore\nexcept ImportError:\n raise DidNotEnable(\"FastAPI is not installed\")\n\n\n_DEFAULT_TRANSACTION_NAME = \"generic FastAPI request\"\n\n\nclass FastApiIntegration(StarletteIntegration):\n identifier = \"fastapi\"\n\n @staticmethod\n def 
setup_once():\n # type: () -> None\n patch_get_request_handler()\n\n\ndef _set_transaction_name_and_source(scope, transaction_style, request):\n # type: (Scope, str, Any) -> None\n name = \"\"\n\n if transaction_style == \"endpoint\":\n endpoint = request.scope.get(\"endpoint\")\n if endpoint:\n name = transaction_from_function(endpoint) or \"\"\n\n elif transaction_style == \"url\":\n route = request.scope.get(\"route\")\n if route:\n path = getattr(route, \"path\", None)\n if path is not None:\n name = path\n\n if not name:\n name = _DEFAULT_TRANSACTION_NAME\n source = TRANSACTION_SOURCE_ROUTE\n else:\n source = SOURCE_FOR_STYLE[transaction_style]\n\n scope.set_transaction_name(name, source=source)\n\n\ndef patch_get_request_handler():\n # type: () -> None\n old_get_request_handler = fastapi.routing.get_request_handler\n\n def _sentry_get_request_handler(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n dependant = kwargs.get(\"dependant\")\n if (\n dependant\n and dependant.call is not None\n and not asyncio.iscoroutinefunction(dependant.call)\n ):\n old_call = dependant.call\n\n def _sentry_call(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n hub = Hub.current\n with hub.configure_scope() as sentry_scope:\n if sentry_scope.profile is not None:\n sentry_scope.profile.update_active_thread_id()\n return old_call(*args, **kwargs)\n\n dependant.call = _sentry_call\n\n old_app = old_get_request_handler(*args, **kwargs)\n\n async def _sentry_app(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n hub = Hub.current\n integration = hub.get_integration(FastApiIntegration)\n if integration is None:\n return await old_app(*args, **kwargs)\n\n with hub.configure_scope() as sentry_scope:\n request = args[0]\n\n _set_transaction_name_and_source(\n sentry_scope, integration.transaction_style, request\n )\n\n extractor = StarletteRequestExtractor(request)\n info = await extractor.extract_request_info()\n\n def _make_request_event_processor(req, integration):\n # type: (Any, Any) -> Callable[[Dict[str, Any], Dict[str, Any]], Dict[str, Any]]\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n\n # Extract information from request\n request_info = event.get(\"request\", {})\n if info:\n if \"cookies\" in info and _should_send_default_pii():\n request_info[\"cookies\"] = info[\"cookies\"]\n if \"data\" in info:\n request_info[\"data\"] = info[\"data\"]\n event[\"request\"] = request_info\n\n return event\n\n return event_processor\n\n sentry_scope._name = FastApiIntegration.identifier\n sentry_scope.add_event_processor(\n _make_request_event_processor(request, integration)\n )\n\n return await old_app(*args, **kwargs)\n\n return _sentry_app\n\n fastapi.routing.get_request_handler = _sentry_get_request_handler\n", "path": "sentry_sdk/integrations/fastapi.py"}]}
3,293
242
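The golden_diff above resolves the ImportError by moving the Starlette import in sentry_sdk/integrations/fastapi.py inside a try/except that re-raises DidNotEnable, the exception the integration loader already tolerates for missing optional dependencies. The following is a minimal, self-contained sketch of that guarded-import pattern; the class and function names are hypothetical stand-ins rather than the actual sentry_sdk API:

```python
class DidNotEnable(Exception):
    """Signals that an integration should be skipped rather than crash init."""


def load_fastapi_like_integration():
    try:
        import starlette  # noqa: F401 -- optional dependency
    except ImportError:
        raise DidNotEnable("Starlette is not installed")
    return "integration enabled"


# The loader catches DidNotEnable and simply skips the integration.
try:
    print(load_fastapi_like_integration())
except DidNotEnable as exc:
    print(f"skipping integration: {exc}")
```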
gh_patches_debug_30535
rasdani/github-patches
git_diff
pantsbuild__pants-13940
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- `log_level_by_target` doesn't seem to work well for Python targets **Describe the bug** Using the following: ``` level = "warn" log_levels_by_target = {"pants.goal.stats_aggregator" = "info"} ``` when running `./pants --stats-log ...` I don't see the stats being logged. **Pants version** `main` **OS** Ubuntu **Additional info** I suspect this is because `log_levels_by_target` isn't used to configure the Python logger, and therefore the python logger for any Python module is configured to use the level set by `level`. This can be seen by inspecting the logger in `src/python/pants/goal/stats_aggregator.py` which is set to the level `WARN`. Therefore I assume the log call never gets forwarded to the Rust-implemented handler, and therefore `log_levels_by_target` isn't considered. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/python/pants/init/logging.py` Content: ``` 1 # Copyright 2018 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 from __future__ import annotations 5 6 import http.client 7 import locale 8 import logging 9 import sys 10 from contextlib import contextmanager 11 from io import BufferedReader, TextIOWrapper 12 from logging import Formatter, Handler, LogRecord 13 from pathlib import PurePath 14 from typing import Iterator 15 16 import pants.util.logging as pants_logging 17 from pants.engine.internals import native_engine 18 from pants.option.option_value_container import OptionValueContainer 19 from pants.util.dirutil import safe_mkdir_for 20 from pants.util.docutil import doc_url 21 from pants.util.logging import LogLevel 22 from pants.util.strutil import strip_prefix 23 24 # Although logging supports the WARN level, its not documented and could conceivably be yanked. 25 # Since pants has supported 'warn' since inception, leave the 'warn' choice as-is but explicitly 26 # setup a 'WARN' logging level name that maps to 'WARNING'. 27 logging.addLevelName(logging.WARNING, "WARN") 28 logging.addLevelName(pants_logging.TRACE, "TRACE") 29 30 31 class _NativeHandler(Handler): 32 """This class is installed as a Python logging module handler (using the logging.addHandler 33 method) and proxies logs to the Rust logging infrastructure.""" 34 35 def emit(self, record: LogRecord) -> None: 36 native_engine.write_log(self.format(record), record.levelno, record.name) 37 38 def flush(self) -> None: 39 native_engine.flush_log() 40 41 42 class _ExceptionFormatter(Formatter): 43 """Possibly render the stacktrace and possibly give debug hints, based on global options.""" 44 45 def __init__(self, level: LogLevel, *, print_stacktrace: bool) -> None: 46 super().__init__(None) 47 self.level = level 48 self.print_stacktrace = print_stacktrace 49 50 def formatException(self, exc_info): 51 stacktrace = super().formatException(exc_info) if self.print_stacktrace else "" 52 53 debug_instructions = [] 54 if not self.print_stacktrace: 55 debug_instructions.append("--print-stacktrace for more error details") 56 if self.level not in {LogLevel.DEBUG, LogLevel.TRACE}: 57 debug_instructions.append("-ldebug for more logs") 58 debug_instructions = ( 59 f"Use {' and/or '.join(debug_instructions)}. 
" if debug_instructions else "" 60 ) 61 62 return ( 63 f"{stacktrace}\n\n{debug_instructions}\nSee {doc_url('troubleshooting')} for common " 64 f"issues.\nConsider reaching out for help: {doc_url('getting-help')}\n" 65 ) 66 67 68 @contextmanager 69 def stdio_destination(stdin_fileno: int, stdout_fileno: int, stderr_fileno: int) -> Iterator[None]: 70 """Sets a destination for both logging and stdio: must be called after `initialize_stdio`. 71 72 After `initialize_stdio` and outside of this contextmanager, the default stdio destination is 73 the pants.log. But inside of this block, all engine "tasks"/@rules that are spawned will have 74 thread/task-local state that directs their IO to the given destination. When the contextmanager 75 exits all tasks will be restored to the default destination (regardless of whether they have 76 completed). 77 """ 78 if not logging.getLogger(None).handlers: 79 raise AssertionError("stdio_destination should only be called after initialize_stdio.") 80 81 native_engine.stdio_thread_console_set(stdin_fileno, stdout_fileno, stderr_fileno) 82 try: 83 yield 84 finally: 85 native_engine.stdio_thread_console_clear() 86 87 88 def stdio_destination_use_color(use_color: bool) -> None: 89 """Sets a color mode for the current thread's destination. 90 91 True or false force color to be used or not used: None causes TTY detection to decide whether 92 color will be used. 93 94 NB: This method is independent from either `stdio_destination` or `initialize_stdio` because 95 we cannot decide whether to use color for a particular destination until it is open AND we have 96 parsed options for the relevant connection. 97 """ 98 native_engine.stdio_thread_console_color_mode_set(use_color) 99 100 101 @contextmanager 102 def _python_logging_setup(level: LogLevel, *, print_stacktrace: bool) -> Iterator[None]: 103 """Installs a root Python logger that routes all logging through a Rust logger.""" 104 105 def trace_fn(self, message, *args, **kwargs): 106 if self.isEnabledFor(LogLevel.TRACE.level): 107 self._log(LogLevel.TRACE.level, message, *args, **kwargs) 108 109 logging.Logger.trace = trace_fn # type: ignore[attr-defined] 110 logger = logging.getLogger(None) 111 112 def clear_logging_handlers(): 113 handlers = tuple(logger.handlers) 114 for handler in handlers: 115 logger.removeHandler(handler) 116 return handlers 117 118 def set_logging_handlers(handlers): 119 for handler in handlers: 120 logger.addHandler(handler) 121 122 # Remove existing handlers, and restore them afterward. 123 handlers = clear_logging_handlers() 124 try: 125 # This routes warnings through our loggers instead of straight to raw stderr. 126 logging.captureWarnings(True) 127 handler = _NativeHandler() 128 exc_formatter = _ExceptionFormatter(level, print_stacktrace=print_stacktrace) 129 handler.setFormatter(exc_formatter) 130 logger.addHandler(handler) 131 level.set_level_for(logger) 132 133 if logger.isEnabledFor(LogLevel.TRACE.level): 134 http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined] 135 requests_logger = logging.getLogger("requests.packages.urllib3") 136 LogLevel.TRACE.set_level_for(requests_logger) 137 requests_logger.propagate = True 138 139 yield 140 finally: 141 clear_logging_handlers() 142 set_logging_handlers(handlers) 143 144 145 @contextmanager 146 def initialize_stdio(global_bootstrap_options: OptionValueContainer) -> Iterator[None]: 147 """Mutates sys.std* and logging to route stdio for a Pants process to thread local destinations. 
148 149 In this context, `sys.std*` and logging handlers will route through Rust code that uses 150 thread-local information to decide whether to write to a file, or to stdio file handles. 151 152 To control the stdio destination set by this method, use the `stdio_destination` context manager. 153 154 This is called in two different processes: 155 * PantsRunner, after it has determined that LocalPantsRunner will be running in process, and 156 immediately before setting a `stdio_destination` for the remainder of the run. 157 * PantsDaemon, immediately on startup. The process will then default to sending stdio to the log 158 until client connections arrive, at which point `stdio_destination` is used per-connection. 159 """ 160 with initialize_stdio_raw( 161 global_bootstrap_options.level, 162 global_bootstrap_options.log_show_rust_3rdparty, 163 global_bootstrap_options.show_log_target, 164 _get_log_levels_by_target(global_bootstrap_options), 165 global_bootstrap_options.print_stacktrace, 166 global_bootstrap_options.ignore_warnings, 167 global_bootstrap_options.pants_workdir, 168 ): 169 yield 170 171 172 @contextmanager 173 def initialize_stdio_raw( 174 global_level: LogLevel, 175 log_show_rust_3rdparty: bool, 176 show_target: bool, 177 log_levels_by_target: dict[str, LogLevel], 178 print_stacktrace: bool, 179 ignore_warnings: list[str], 180 pants_workdir: str, 181 ) -> Iterator[None]: 182 literal_filters = [] 183 regex_filters = [] 184 for filt in ignore_warnings: 185 if filt.startswith("$regex$"): 186 regex_filters.append(strip_prefix(filt, "$regex$")) 187 else: 188 literal_filters.append(filt) 189 190 # Set the pants log destination. 191 log_path = str(pants_log_path(PurePath(pants_workdir))) 192 safe_mkdir_for(log_path) 193 194 # Initialize thread-local stdio, and replace sys.std* with proxies. 195 original_stdin, original_stdout, original_stderr = sys.stdin, sys.stdout, sys.stderr 196 try: 197 raw_stdin, sys.stdout, sys.stderr = native_engine.stdio_initialize( 198 global_level.level, 199 log_show_rust_3rdparty, 200 show_target, 201 {k: v.level for k, v in log_levels_by_target.items()}, 202 tuple(literal_filters), 203 tuple(regex_filters), 204 log_path, 205 ) 206 sys.stdin = TextIOWrapper( 207 BufferedReader(raw_stdin), 208 # NB: We set the default encoding explicitly to bypass logic in the TextIOWrapper 209 # constructor that would poke the underlying file (which is not valid until a 210 # `stdio_destination` is set). 211 encoding=locale.getpreferredencoding(False), 212 ) 213 214 sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr 215 # Install a Python logger that will route through the Rust logger. 216 with _python_logging_setup(global_level, print_stacktrace=print_stacktrace): 217 yield 218 finally: 219 sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr 220 sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr 221 222 223 def pants_log_path(workdir: PurePath) -> PurePath: 224 """Given the path of the workdir, returns the `pants.log` path.""" 225 return workdir / "pants.log" 226 227 228 def _get_log_levels_by_target( 229 global_bootstrap_options: OptionValueContainer, 230 ) -> dict[str, LogLevel]: 231 raw_levels = global_bootstrap_options.log_levels_by_target 232 levels: dict[str, LogLevel] = {} 233 for key, value in raw_levels.items(): 234 if not isinstance(key, str): 235 raise ValueError( 236 "Keys for log_domain_levels must be strings, but was given the key: {key} with type {type(key)}." 
237 ) 238 if not isinstance(value, str): 239 raise ValueError( 240 "Values for log_domain_levels must be strings, but was given the value: {value} with type {type(value)}." 241 ) 242 log_level = LogLevel[value.upper()] 243 levels[key] = log_level 244 return levels 245 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/python/pants/init/logging.py b/src/python/pants/init/logging.py --- a/src/python/pants/init/logging.py +++ b/src/python/pants/init/logging.py @@ -99,7 +99,9 @@ @contextmanager -def _python_logging_setup(level: LogLevel, *, print_stacktrace: bool) -> Iterator[None]: +def _python_logging_setup( + level: LogLevel, log_levels_by_target: dict[str, LogLevel], *, print_stacktrace: bool +) -> Iterator[None]: """Installs a root Python logger that routes all logging through a Rust logger.""" def trace_fn(self, message, *args, **kwargs): @@ -130,6 +132,9 @@ logger.addHandler(handler) level.set_level_for(logger) + for key, level in log_levels_by_target.items(): + level.set_level_for(logging.getLogger(key)) + if logger.isEnabledFor(LogLevel.TRACE.level): http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined] requests_logger = logging.getLogger("requests.packages.urllib3") @@ -213,7 +218,9 @@ sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr # Install a Python logger that will route through the Rust logger. - with _python_logging_setup(global_level, print_stacktrace=print_stacktrace): + with _python_logging_setup( + global_level, log_levels_by_target, print_stacktrace=print_stacktrace + ): yield finally: sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr
{"golden_diff": "diff --git a/src/python/pants/init/logging.py b/src/python/pants/init/logging.py\n--- a/src/python/pants/init/logging.py\n+++ b/src/python/pants/init/logging.py\n@@ -99,7 +99,9 @@\n \n \n @contextmanager\n-def _python_logging_setup(level: LogLevel, *, print_stacktrace: bool) -> Iterator[None]:\n+def _python_logging_setup(\n+ level: LogLevel, log_levels_by_target: dict[str, LogLevel], *, print_stacktrace: bool\n+) -> Iterator[None]:\n \"\"\"Installs a root Python logger that routes all logging through a Rust logger.\"\"\"\n \n def trace_fn(self, message, *args, **kwargs):\n@@ -130,6 +132,9 @@\n logger.addHandler(handler)\n level.set_level_for(logger)\n \n+ for key, level in log_levels_by_target.items():\n+ level.set_level_for(logging.getLogger(key))\n+\n if logger.isEnabledFor(LogLevel.TRACE.level):\n http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined]\n requests_logger = logging.getLogger(\"requests.packages.urllib3\")\n@@ -213,7 +218,9 @@\n \n sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr\n # Install a Python logger that will route through the Rust logger.\n- with _python_logging_setup(global_level, print_stacktrace=print_stacktrace):\n+ with _python_logging_setup(\n+ global_level, log_levels_by_target, print_stacktrace=print_stacktrace\n+ ):\n yield\n finally:\n sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr\n", "issue": "`log_level_by_target` doesn't seem to work well for Python targets\n**Describe the bug**\r\nUsing the following:\r\n```\r\nlevel = \"warn\"\r\nlog_levels_by_target = {\"pants.goal.stats_aggregator\" = \"info\"}\r\n```\r\nwhen running `./pants --stats-log ...` I don't see the stats being logged.\r\n\r\n**Pants version**\r\n`main`\r\n\r\n**OS**\r\nUbuntu\r\n\r\n**Additional info**\r\nI suspect this is because `log_levels_by_target` isn't used to configure the Python logger, and therefore the python logger for any Python module is configured to use the level set by `level`.\r\n\r\nThis can be seen by inspecting the logger in `src/python/pants/goal/stats_aggregator.py` which is set to the level `WARN`. 
Therefore I assume the log call never gets forwarded to the Rust-implemented handler, and therefore `log_levels_by_target` isn't considered.\r\n\n", "before_files": [{"content": "# Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport http.client\nimport locale\nimport logging\nimport sys\nfrom contextlib import contextmanager\nfrom io import BufferedReader, TextIOWrapper\nfrom logging import Formatter, Handler, LogRecord\nfrom pathlib import PurePath\nfrom typing import Iterator\n\nimport pants.util.logging as pants_logging\nfrom pants.engine.internals import native_engine\nfrom pants.option.option_value_container import OptionValueContainer\nfrom pants.util.dirutil import safe_mkdir_for\nfrom pants.util.docutil import doc_url\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import strip_prefix\n\n# Although logging supports the WARN level, its not documented and could conceivably be yanked.\n# Since pants has supported 'warn' since inception, leave the 'warn' choice as-is but explicitly\n# setup a 'WARN' logging level name that maps to 'WARNING'.\nlogging.addLevelName(logging.WARNING, \"WARN\")\nlogging.addLevelName(pants_logging.TRACE, \"TRACE\")\n\n\nclass _NativeHandler(Handler):\n \"\"\"This class is installed as a Python logging module handler (using the logging.addHandler\n method) and proxies logs to the Rust logging infrastructure.\"\"\"\n\n def emit(self, record: LogRecord) -> None:\n native_engine.write_log(self.format(record), record.levelno, record.name)\n\n def flush(self) -> None:\n native_engine.flush_log()\n\n\nclass _ExceptionFormatter(Formatter):\n \"\"\"Possibly render the stacktrace and possibly give debug hints, based on global options.\"\"\"\n\n def __init__(self, level: LogLevel, *, print_stacktrace: bool) -> None:\n super().__init__(None)\n self.level = level\n self.print_stacktrace = print_stacktrace\n\n def formatException(self, exc_info):\n stacktrace = super().formatException(exc_info) if self.print_stacktrace else \"\"\n\n debug_instructions = []\n if not self.print_stacktrace:\n debug_instructions.append(\"--print-stacktrace for more error details\")\n if self.level not in {LogLevel.DEBUG, LogLevel.TRACE}:\n debug_instructions.append(\"-ldebug for more logs\")\n debug_instructions = (\n f\"Use {' and/or '.join(debug_instructions)}. \" if debug_instructions else \"\"\n )\n\n return (\n f\"{stacktrace}\\n\\n{debug_instructions}\\nSee {doc_url('troubleshooting')} for common \"\n f\"issues.\\nConsider reaching out for help: {doc_url('getting-help')}\\n\"\n )\n\n\n@contextmanager\ndef stdio_destination(stdin_fileno: int, stdout_fileno: int, stderr_fileno: int) -> Iterator[None]:\n \"\"\"Sets a destination for both logging and stdio: must be called after `initialize_stdio`.\n\n After `initialize_stdio` and outside of this contextmanager, the default stdio destination is\n the pants.log. But inside of this block, all engine \"tasks\"/@rules that are spawned will have\n thread/task-local state that directs their IO to the given destination. 
When the contextmanager\n exits all tasks will be restored to the default destination (regardless of whether they have\n completed).\n \"\"\"\n if not logging.getLogger(None).handlers:\n raise AssertionError(\"stdio_destination should only be called after initialize_stdio.\")\n\n native_engine.stdio_thread_console_set(stdin_fileno, stdout_fileno, stderr_fileno)\n try:\n yield\n finally:\n native_engine.stdio_thread_console_clear()\n\n\ndef stdio_destination_use_color(use_color: bool) -> None:\n \"\"\"Sets a color mode for the current thread's destination.\n\n True or false force color to be used or not used: None causes TTY detection to decide whether\n color will be used.\n\n NB: This method is independent from either `stdio_destination` or `initialize_stdio` because\n we cannot decide whether to use color for a particular destination until it is open AND we have\n parsed options for the relevant connection.\n \"\"\"\n native_engine.stdio_thread_console_color_mode_set(use_color)\n\n\n@contextmanager\ndef _python_logging_setup(level: LogLevel, *, print_stacktrace: bool) -> Iterator[None]:\n \"\"\"Installs a root Python logger that routes all logging through a Rust logger.\"\"\"\n\n def trace_fn(self, message, *args, **kwargs):\n if self.isEnabledFor(LogLevel.TRACE.level):\n self._log(LogLevel.TRACE.level, message, *args, **kwargs)\n\n logging.Logger.trace = trace_fn # type: ignore[attr-defined]\n logger = logging.getLogger(None)\n\n def clear_logging_handlers():\n handlers = tuple(logger.handlers)\n for handler in handlers:\n logger.removeHandler(handler)\n return handlers\n\n def set_logging_handlers(handlers):\n for handler in handlers:\n logger.addHandler(handler)\n\n # Remove existing handlers, and restore them afterward.\n handlers = clear_logging_handlers()\n try:\n # This routes warnings through our loggers instead of straight to raw stderr.\n logging.captureWarnings(True)\n handler = _NativeHandler()\n exc_formatter = _ExceptionFormatter(level, print_stacktrace=print_stacktrace)\n handler.setFormatter(exc_formatter)\n logger.addHandler(handler)\n level.set_level_for(logger)\n\n if logger.isEnabledFor(LogLevel.TRACE.level):\n http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined]\n requests_logger = logging.getLogger(\"requests.packages.urllib3\")\n LogLevel.TRACE.set_level_for(requests_logger)\n requests_logger.propagate = True\n\n yield\n finally:\n clear_logging_handlers()\n set_logging_handlers(handlers)\n\n\n@contextmanager\ndef initialize_stdio(global_bootstrap_options: OptionValueContainer) -> Iterator[None]:\n \"\"\"Mutates sys.std* and logging to route stdio for a Pants process to thread local destinations.\n\n In this context, `sys.std*` and logging handlers will route through Rust code that uses\n thread-local information to decide whether to write to a file, or to stdio file handles.\n\n To control the stdio destination set by this method, use the `stdio_destination` context manager.\n\n This is called in two different processes:\n * PantsRunner, after it has determined that LocalPantsRunner will be running in process, and\n immediately before setting a `stdio_destination` for the remainder of the run.\n * PantsDaemon, immediately on startup. 
The process will then default to sending stdio to the log\n until client connections arrive, at which point `stdio_destination` is used per-connection.\n \"\"\"\n with initialize_stdio_raw(\n global_bootstrap_options.level,\n global_bootstrap_options.log_show_rust_3rdparty,\n global_bootstrap_options.show_log_target,\n _get_log_levels_by_target(global_bootstrap_options),\n global_bootstrap_options.print_stacktrace,\n global_bootstrap_options.ignore_warnings,\n global_bootstrap_options.pants_workdir,\n ):\n yield\n\n\n@contextmanager\ndef initialize_stdio_raw(\n global_level: LogLevel,\n log_show_rust_3rdparty: bool,\n show_target: bool,\n log_levels_by_target: dict[str, LogLevel],\n print_stacktrace: bool,\n ignore_warnings: list[str],\n pants_workdir: str,\n) -> Iterator[None]:\n literal_filters = []\n regex_filters = []\n for filt in ignore_warnings:\n if filt.startswith(\"$regex$\"):\n regex_filters.append(strip_prefix(filt, \"$regex$\"))\n else:\n literal_filters.append(filt)\n\n # Set the pants log destination.\n log_path = str(pants_log_path(PurePath(pants_workdir)))\n safe_mkdir_for(log_path)\n\n # Initialize thread-local stdio, and replace sys.std* with proxies.\n original_stdin, original_stdout, original_stderr = sys.stdin, sys.stdout, sys.stderr\n try:\n raw_stdin, sys.stdout, sys.stderr = native_engine.stdio_initialize(\n global_level.level,\n log_show_rust_3rdparty,\n show_target,\n {k: v.level for k, v in log_levels_by_target.items()},\n tuple(literal_filters),\n tuple(regex_filters),\n log_path,\n )\n sys.stdin = TextIOWrapper(\n BufferedReader(raw_stdin),\n # NB: We set the default encoding explicitly to bypass logic in the TextIOWrapper\n # constructor that would poke the underlying file (which is not valid until a\n # `stdio_destination` is set).\n encoding=locale.getpreferredencoding(False),\n )\n\n sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr\n # Install a Python logger that will route through the Rust logger.\n with _python_logging_setup(global_level, print_stacktrace=print_stacktrace):\n yield\n finally:\n sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr\n sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr\n\n\ndef pants_log_path(workdir: PurePath) -> PurePath:\n \"\"\"Given the path of the workdir, returns the `pants.log` path.\"\"\"\n return workdir / \"pants.log\"\n\n\ndef _get_log_levels_by_target(\n global_bootstrap_options: OptionValueContainer,\n) -> dict[str, LogLevel]:\n raw_levels = global_bootstrap_options.log_levels_by_target\n levels: dict[str, LogLevel] = {}\n for key, value in raw_levels.items():\n if not isinstance(key, str):\n raise ValueError(\n \"Keys for log_domain_levels must be strings, but was given the key: {key} with type {type(key)}.\"\n )\n if not isinstance(value, str):\n raise ValueError(\n \"Values for log_domain_levels must be strings, but was given the value: {value} with type {type(value)}.\"\n )\n log_level = LogLevel[value.upper()]\n levels[key] = log_level\n return levels\n", "path": "src/python/pants/init/logging.py"}], "after_files": [{"content": "# Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport http.client\nimport locale\nimport logging\nimport sys\nfrom contextlib import contextmanager\nfrom io import BufferedReader, TextIOWrapper\nfrom logging import Formatter, Handler, LogRecord\nfrom pathlib 
import PurePath\nfrom typing import Iterator\n\nimport pants.util.logging as pants_logging\nfrom pants.engine.internals import native_engine\nfrom pants.option.option_value_container import OptionValueContainer\nfrom pants.util.dirutil import safe_mkdir_for\nfrom pants.util.docutil import doc_url\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import strip_prefix\n\n# Although logging supports the WARN level, its not documented and could conceivably be yanked.\n# Since pants has supported 'warn' since inception, leave the 'warn' choice as-is but explicitly\n# setup a 'WARN' logging level name that maps to 'WARNING'.\nlogging.addLevelName(logging.WARNING, \"WARN\")\nlogging.addLevelName(pants_logging.TRACE, \"TRACE\")\n\n\nclass _NativeHandler(Handler):\n \"\"\"This class is installed as a Python logging module handler (using the logging.addHandler\n method) and proxies logs to the Rust logging infrastructure.\"\"\"\n\n def emit(self, record: LogRecord) -> None:\n native_engine.write_log(self.format(record), record.levelno, record.name)\n\n def flush(self) -> None:\n native_engine.flush_log()\n\n\nclass _ExceptionFormatter(Formatter):\n \"\"\"Possibly render the stacktrace and possibly give debug hints, based on global options.\"\"\"\n\n def __init__(self, level: LogLevel, *, print_stacktrace: bool) -> None:\n super().__init__(None)\n self.level = level\n self.print_stacktrace = print_stacktrace\n\n def formatException(self, exc_info):\n stacktrace = super().formatException(exc_info) if self.print_stacktrace else \"\"\n\n debug_instructions = []\n if not self.print_stacktrace:\n debug_instructions.append(\"--print-stacktrace for more error details\")\n if self.level not in {LogLevel.DEBUG, LogLevel.TRACE}:\n debug_instructions.append(\"-ldebug for more logs\")\n debug_instructions = (\n f\"Use {' and/or '.join(debug_instructions)}. \" if debug_instructions else \"\"\n )\n\n return (\n f\"{stacktrace}\\n\\n{debug_instructions}\\nSee {doc_url('troubleshooting')} for common \"\n f\"issues.\\nConsider reaching out for help: {doc_url('getting-help')}\\n\"\n )\n\n\n@contextmanager\ndef stdio_destination(stdin_fileno: int, stdout_fileno: int, stderr_fileno: int) -> Iterator[None]:\n \"\"\"Sets a destination for both logging and stdio: must be called after `initialize_stdio`.\n\n After `initialize_stdio` and outside of this contextmanager, the default stdio destination is\n the pants.log. But inside of this block, all engine \"tasks\"/@rules that are spawned will have\n thread/task-local state that directs their IO to the given destination. 
When the contextmanager\n exits all tasks will be restored to the default destination (regardless of whether they have\n completed).\n \"\"\"\n if not logging.getLogger(None).handlers:\n raise AssertionError(\"stdio_destination should only be called after initialize_stdio.\")\n\n native_engine.stdio_thread_console_set(stdin_fileno, stdout_fileno, stderr_fileno)\n try:\n yield\n finally:\n native_engine.stdio_thread_console_clear()\n\n\ndef stdio_destination_use_color(use_color: bool) -> None:\n \"\"\"Sets a color mode for the current thread's destination.\n\n True or false force color to be used or not used: None causes TTY detection to decide whether\n color will be used.\n\n NB: This method is independent from either `stdio_destination` or `initialize_stdio` because\n we cannot decide whether to use color for a particular destination until it is open AND we have\n parsed options for the relevant connection.\n \"\"\"\n native_engine.stdio_thread_console_color_mode_set(use_color)\n\n\n@contextmanager\ndef _python_logging_setup(\n level: LogLevel, log_levels_by_target: dict[str, LogLevel], *, print_stacktrace: bool\n) -> Iterator[None]:\n \"\"\"Installs a root Python logger that routes all logging through a Rust logger.\"\"\"\n\n def trace_fn(self, message, *args, **kwargs):\n if self.isEnabledFor(LogLevel.TRACE.level):\n self._log(LogLevel.TRACE.level, message, *args, **kwargs)\n\n logging.Logger.trace = trace_fn # type: ignore[attr-defined]\n logger = logging.getLogger(None)\n\n def clear_logging_handlers():\n handlers = tuple(logger.handlers)\n for handler in handlers:\n logger.removeHandler(handler)\n return handlers\n\n def set_logging_handlers(handlers):\n for handler in handlers:\n logger.addHandler(handler)\n\n # Remove existing handlers, and restore them afterward.\n handlers = clear_logging_handlers()\n try:\n # This routes warnings through our loggers instead of straight to raw stderr.\n logging.captureWarnings(True)\n handler = _NativeHandler()\n exc_formatter = _ExceptionFormatter(level, print_stacktrace=print_stacktrace)\n handler.setFormatter(exc_formatter)\n logger.addHandler(handler)\n level.set_level_for(logger)\n\n for key, level in log_levels_by_target.items():\n level.set_level_for(logging.getLogger(key))\n\n if logger.isEnabledFor(LogLevel.TRACE.level):\n http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined]\n requests_logger = logging.getLogger(\"requests.packages.urllib3\")\n LogLevel.TRACE.set_level_for(requests_logger)\n requests_logger.propagate = True\n\n yield\n finally:\n clear_logging_handlers()\n set_logging_handlers(handlers)\n\n\n@contextmanager\ndef initialize_stdio(global_bootstrap_options: OptionValueContainer) -> Iterator[None]:\n \"\"\"Mutates sys.std* and logging to route stdio for a Pants process to thread local destinations.\n\n In this context, `sys.std*` and logging handlers will route through Rust code that uses\n thread-local information to decide whether to write to a file, or to stdio file handles.\n\n To control the stdio destination set by this method, use the `stdio_destination` context manager.\n\n This is called in two different processes:\n * PantsRunner, after it has determined that LocalPantsRunner will be running in process, and\n immediately before setting a `stdio_destination` for the remainder of the run.\n * PantsDaemon, immediately on startup. 
The process will then default to sending stdio to the log\n until client connections arrive, at which point `stdio_destination` is used per-connection.\n \"\"\"\n with initialize_stdio_raw(\n global_bootstrap_options.level,\n global_bootstrap_options.log_show_rust_3rdparty,\n global_bootstrap_options.show_log_target,\n _get_log_levels_by_target(global_bootstrap_options),\n global_bootstrap_options.print_stacktrace,\n global_bootstrap_options.ignore_warnings,\n global_bootstrap_options.pants_workdir,\n ):\n yield\n\n\n@contextmanager\ndef initialize_stdio_raw(\n global_level: LogLevel,\n log_show_rust_3rdparty: bool,\n show_target: bool,\n log_levels_by_target: dict[str, LogLevel],\n print_stacktrace: bool,\n ignore_warnings: list[str],\n pants_workdir: str,\n) -> Iterator[None]:\n literal_filters = []\n regex_filters = []\n for filt in ignore_warnings:\n if filt.startswith(\"$regex$\"):\n regex_filters.append(strip_prefix(filt, \"$regex$\"))\n else:\n literal_filters.append(filt)\n\n # Set the pants log destination.\n log_path = str(pants_log_path(PurePath(pants_workdir)))\n safe_mkdir_for(log_path)\n\n # Initialize thread-local stdio, and replace sys.std* with proxies.\n original_stdin, original_stdout, original_stderr = sys.stdin, sys.stdout, sys.stderr\n try:\n raw_stdin, sys.stdout, sys.stderr = native_engine.stdio_initialize(\n global_level.level,\n log_show_rust_3rdparty,\n show_target,\n {k: v.level for k, v in log_levels_by_target.items()},\n tuple(literal_filters),\n tuple(regex_filters),\n log_path,\n )\n sys.stdin = TextIOWrapper(\n BufferedReader(raw_stdin),\n # NB: We set the default encoding explicitly to bypass logic in the TextIOWrapper\n # constructor that would poke the underlying file (which is not valid until a\n # `stdio_destination` is set).\n encoding=locale.getpreferredencoding(False),\n )\n\n sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr\n # Install a Python logger that will route through the Rust logger.\n with _python_logging_setup(\n global_level, log_levels_by_target, print_stacktrace=print_stacktrace\n ):\n yield\n finally:\n sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr\n sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr\n\n\ndef pants_log_path(workdir: PurePath) -> PurePath:\n \"\"\"Given the path of the workdir, returns the `pants.log` path.\"\"\"\n return workdir / \"pants.log\"\n\n\ndef _get_log_levels_by_target(\n global_bootstrap_options: OptionValueContainer,\n) -> dict[str, LogLevel]:\n raw_levels = global_bootstrap_options.log_levels_by_target\n levels: dict[str, LogLevel] = {}\n for key, value in raw_levels.items():\n if not isinstance(key, str):\n raise ValueError(\n \"Keys for log_domain_levels must be strings, but was given the key: {key} with type {type(key)}.\"\n )\n if not isinstance(value, str):\n raise ValueError(\n \"Values for log_domain_levels must be strings, but was given the value: {value} with type {type(value)}.\"\n )\n log_level = LogLevel[value.upper()]\n levels[key] = log_level\n return levels\n", "path": "src/python/pants/init/logging.py"}]}
3,199
364
gh_patches_debug_38850
rasdani/github-patches
git_diff
sanic-org__sanic-2170
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- deprecate CompositionView ? Currently sanic offers a class called `CompositionView` I really am struggling to find any utility in this class, since ```python from sanic.views import CompositionView def get_handler(request): return text("I am a get method") view = CompositionView() view.add(["GET"], get_handler) view.add(["POST", "PUT"], lambda request: text("I am a post/put method")) # Use the new view to handle requests to the base URL app.add_route(view, "/") ``` Seems much more confusing to me than ```python def get_handler(request): return text("I am a get method") app.route("/", methods=["GET"])(get_handler) app.route("/", methods=["POST", "PUT"])(lambda request: text("I am a post/put method")) ``` Can anyone offer a compelling use case for CompositionView? If not, I would suggest to deprecate it https://github.com/sanic-org/sanic/blob/master/sanic/views.py --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `sanic/views.py` Content: ``` 1 from typing import Any, Callable, List 2 3 from sanic.constants import HTTP_METHODS 4 from sanic.exceptions import InvalidUsage 5 6 7 class HTTPMethodView: 8 """Simple class based implementation of view for the sanic. 9 You should implement methods (get, post, put, patch, delete) for the class 10 to every HTTP method you want to support. 11 12 For example: 13 14 .. code-block:: python 15 16 class DummyView(HTTPMethodView): 17 def get(self, request, *args, **kwargs): 18 return text('I am get method') 19 def put(self, request, *args, **kwargs): 20 return text('I am put method') 21 22 If someone tries to use a non-implemented method, there will be a 23 405 response. 24 25 If you need any url params just mention them in method definition: 26 27 .. code-block:: python 28 29 class DummyView(HTTPMethodView): 30 def get(self, request, my_param_here, *args, **kwargs): 31 return text('I am get method with %s' % my_param_here) 32 33 To add the view into the routing you could use 34 35 1) ``app.add_route(DummyView.as_view(), '/')``, OR 36 2) ``app.route('/')(DummyView.as_view())`` 37 38 To add any decorator you could set it into decorators variable 39 """ 40 41 decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = [] 42 43 def dispatch_request(self, request, *args, **kwargs): 44 handler = getattr(self, request.method.lower(), None) 45 return handler(request, *args, **kwargs) 46 47 @classmethod 48 def as_view(cls, *class_args, **class_kwargs): 49 """Return view function for use with the routing system, that 50 dispatches request to appropriate handler method. 51 """ 52 53 def view(*args, **kwargs): 54 self = view.view_class(*class_args, **class_kwargs) 55 return self.dispatch_request(*args, **kwargs) 56 57 if cls.decorators: 58 view.__module__ = cls.__module__ 59 for decorator in cls.decorators: 60 view = decorator(view) 61 62 view.view_class = cls 63 view.__doc__ = cls.__doc__ 64 view.__module__ = cls.__module__ 65 view.__name__ = cls.__name__ 66 return view 67 68 69 def stream(func): 70 func.is_stream = True 71 return func 72 73 74 class CompositionView: 75 """Simple method-function mapped view for the sanic. 76 You can add handler functions to methods (get, post, put, patch, delete) 77 for every HTTP method you want to support. 78 79 For example: 80 81 .. 
code-block:: python 82 83 view = CompositionView() 84 view.add(['GET'], lambda request: text('I am get method')) 85 view.add(['POST', 'PUT'], lambda request: text('I am post/put method')) 86 87 If someone tries to use a non-implemented method, there will be a 88 405 response. 89 """ 90 91 def __init__(self): 92 self.handlers = {} 93 self.name = self.__class__.__name__ 94 95 def __name__(self): 96 return self.name 97 98 def add(self, methods, handler, stream=False): 99 if stream: 100 handler.is_stream = stream 101 for method in methods: 102 if method not in HTTP_METHODS: 103 raise InvalidUsage(f"{method} is not a valid HTTP method.") 104 105 if method in self.handlers: 106 raise InvalidUsage(f"Method {method} is already registered.") 107 self.handlers[method] = handler 108 109 def __call__(self, request, *args, **kwargs): 110 handler = self.handlers[request.method.upper()] 111 return handler(request, *args, **kwargs) 112 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/sanic/views.py b/sanic/views.py --- a/sanic/views.py +++ b/sanic/views.py @@ -1,9 +1,25 @@ -from typing import Any, Callable, List +from __future__ import annotations + +from typing import ( + TYPE_CHECKING, + Any, + Callable, + Iterable, + List, + Optional, + Union, +) +from warnings import warn from sanic.constants import HTTP_METHODS from sanic.exceptions import InvalidUsage +if TYPE_CHECKING: + from sanic import Sanic + from sanic.blueprints import Blueprint + + class HTTPMethodView: """Simple class based implementation of view for the sanic. You should implement methods (get, post, put, patch, delete) for the class @@ -40,6 +56,31 @@ decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = [] + def __init_subclass__( + cls, + attach: Optional[Union[Sanic, Blueprint]] = None, + uri: str = "", + methods: Iterable[str] = frozenset({"GET"}), + host: Optional[str] = None, + strict_slashes: Optional[bool] = None, + version: Optional[int] = None, + name: Optional[str] = None, + stream: bool = False, + version_prefix: str = "/v", + ) -> None: + if attach: + cls.attach( + attach, + uri=uri, + methods=methods, + host=host, + strict_slashes=strict_slashes, + version=version, + name=name, + stream=stream, + version_prefix=version_prefix, + ) + def dispatch_request(self, request, *args, **kwargs): handler = getattr(self, request.method.lower(), None) return handler(request, *args, **kwargs) @@ -65,6 +106,31 @@ view.__name__ = cls.__name__ return view + @classmethod + def attach( + cls, + to: Union[Sanic, Blueprint], + uri: str, + methods: Iterable[str] = frozenset({"GET"}), + host: Optional[str] = None, + strict_slashes: Optional[bool] = None, + version: Optional[int] = None, + name: Optional[str] = None, + stream: bool = False, + version_prefix: str = "/v", + ) -> None: + to.add_route( + cls.as_view(), + uri=uri, + methods=methods, + host=host, + strict_slashes=strict_slashes, + version=version, + name=name, + stream=stream, + version_prefix=version_prefix, + ) + def stream(func): func.is_stream = True @@ -91,6 +157,11 @@ def __init__(self): self.handlers = {} self.name = self.__class__.__name__ + warn( + "CompositionView has been deprecated and will be removed in " + "v21.12. Please update your view to HTTPMethodView.", + DeprecationWarning, + ) def __name__(self): return self.name
{"golden_diff": "diff --git a/sanic/views.py b/sanic/views.py\n--- a/sanic/views.py\n+++ b/sanic/views.py\n@@ -1,9 +1,25 @@\n-from typing import Any, Callable, List\n+from __future__ import annotations\n+\n+from typing import (\n+ TYPE_CHECKING,\n+ Any,\n+ Callable,\n+ Iterable,\n+ List,\n+ Optional,\n+ Union,\n+)\n+from warnings import warn\n \n from sanic.constants import HTTP_METHODS\n from sanic.exceptions import InvalidUsage\n \n \n+if TYPE_CHECKING:\n+ from sanic import Sanic\n+ from sanic.blueprints import Blueprint\n+\n+\n class HTTPMethodView:\n \"\"\"Simple class based implementation of view for the sanic.\n You should implement methods (get, post, put, patch, delete) for the class\n@@ -40,6 +56,31 @@\n \n decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = []\n \n+ def __init_subclass__(\n+ cls,\n+ attach: Optional[Union[Sanic, Blueprint]] = None,\n+ uri: str = \"\",\n+ methods: Iterable[str] = frozenset({\"GET\"}),\n+ host: Optional[str] = None,\n+ strict_slashes: Optional[bool] = None,\n+ version: Optional[int] = None,\n+ name: Optional[str] = None,\n+ stream: bool = False,\n+ version_prefix: str = \"/v\",\n+ ) -> None:\n+ if attach:\n+ cls.attach(\n+ attach,\n+ uri=uri,\n+ methods=methods,\n+ host=host,\n+ strict_slashes=strict_slashes,\n+ version=version,\n+ name=name,\n+ stream=stream,\n+ version_prefix=version_prefix,\n+ )\n+\n def dispatch_request(self, request, *args, **kwargs):\n handler = getattr(self, request.method.lower(), None)\n return handler(request, *args, **kwargs)\n@@ -65,6 +106,31 @@\n view.__name__ = cls.__name__\n return view\n \n+ @classmethod\n+ def attach(\n+ cls,\n+ to: Union[Sanic, Blueprint],\n+ uri: str,\n+ methods: Iterable[str] = frozenset({\"GET\"}),\n+ host: Optional[str] = None,\n+ strict_slashes: Optional[bool] = None,\n+ version: Optional[int] = None,\n+ name: Optional[str] = None,\n+ stream: bool = False,\n+ version_prefix: str = \"/v\",\n+ ) -> None:\n+ to.add_route(\n+ cls.as_view(),\n+ uri=uri,\n+ methods=methods,\n+ host=host,\n+ strict_slashes=strict_slashes,\n+ version=version,\n+ name=name,\n+ stream=stream,\n+ version_prefix=version_prefix,\n+ )\n+\n \n def stream(func):\n func.is_stream = True\n@@ -91,6 +157,11 @@\n def __init__(self):\n self.handlers = {}\n self.name = self.__class__.__name__\n+ warn(\n+ \"CompositionView has been deprecated and will be removed in \"\n+ \"v21.12. Please update your view to HTTPMethodView.\",\n+ DeprecationWarning,\n+ )\n \n def __name__(self):\n return self.name\n", "issue": "deprecate CompositionView ? 
\nCurrently sanic offers a class called `CompositionView`\r\n\r\nI really am struggling to find any utility in this class, since \r\n\r\n```python\r\nfrom sanic.views import CompositionView\r\n\r\ndef get_handler(request):\r\n return text(\"I am a get method\")\r\n\r\nview = CompositionView()\r\nview.add([\"GET\"], get_handler)\r\nview.add([\"POST\", \"PUT\"], lambda request: text(\"I am a post/put method\"))\r\n\r\n# Use the new view to handle requests to the base URL\r\napp.add_route(view, \"/\")\r\n```\r\n\r\n\r\nSeems much more confusing to me than\r\n\r\n```python\r\ndef get_handler(request):\r\n return text(\"I am a get method\")\r\n\r\napp.route(\"/\", methods=[\"GET\"])(get_handler)\r\napp.route(\"/\", methods=[\"POST\", \"PUT\"])(lambda request: text(\"I am a post/put method\"))\r\n```\r\n\r\nCan anyone offer a compelling use case for CompositionView?\r\n\r\nIf not, I would suggest to deprecate it \r\n\r\n\r\nhttps://github.com/sanic-org/sanic/blob/master/sanic/views.py\n", "before_files": [{"content": "from typing import Any, Callable, List\n\nfrom sanic.constants import HTTP_METHODS\nfrom sanic.exceptions import InvalidUsage\n\n\nclass HTTPMethodView:\n \"\"\"Simple class based implementation of view for the sanic.\n You should implement methods (get, post, put, patch, delete) for the class\n to every HTTP method you want to support.\n\n For example:\n\n .. code-block:: python\n\n class DummyView(HTTPMethodView):\n def get(self, request, *args, **kwargs):\n return text('I am get method')\n def put(self, request, *args, **kwargs):\n return text('I am put method')\n\n If someone tries to use a non-implemented method, there will be a\n 405 response.\n\n If you need any url params just mention them in method definition:\n\n .. code-block:: python\n\n class DummyView(HTTPMethodView):\n def get(self, request, my_param_here, *args, **kwargs):\n return text('I am get method with %s' % my_param_here)\n\n To add the view into the routing you could use\n\n 1) ``app.add_route(DummyView.as_view(), '/')``, OR\n 2) ``app.route('/')(DummyView.as_view())``\n\n To add any decorator you could set it into decorators variable\n \"\"\"\n\n decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = []\n\n def dispatch_request(self, request, *args, **kwargs):\n handler = getattr(self, request.method.lower(), None)\n return handler(request, *args, **kwargs)\n\n @classmethod\n def as_view(cls, *class_args, **class_kwargs):\n \"\"\"Return view function for use with the routing system, that\n dispatches request to appropriate handler method.\n \"\"\"\n\n def view(*args, **kwargs):\n self = view.view_class(*class_args, **class_kwargs)\n return self.dispatch_request(*args, **kwargs)\n\n if cls.decorators:\n view.__module__ = cls.__module__\n for decorator in cls.decorators:\n view = decorator(view)\n\n view.view_class = cls\n view.__doc__ = cls.__doc__\n view.__module__ = cls.__module__\n view.__name__ = cls.__name__\n return view\n\n\ndef stream(func):\n func.is_stream = True\n return func\n\n\nclass CompositionView:\n \"\"\"Simple method-function mapped view for the sanic.\n You can add handler functions to methods (get, post, put, patch, delete)\n for every HTTP method you want to support.\n\n For example:\n\n .. 
code-block:: python\n\n view = CompositionView()\n view.add(['GET'], lambda request: text('I am get method'))\n view.add(['POST', 'PUT'], lambda request: text('I am post/put method'))\n\n If someone tries to use a non-implemented method, there will be a\n 405 response.\n \"\"\"\n\n def __init__(self):\n self.handlers = {}\n self.name = self.__class__.__name__\n\n def __name__(self):\n return self.name\n\n def add(self, methods, handler, stream=False):\n if stream:\n handler.is_stream = stream\n for method in methods:\n if method not in HTTP_METHODS:\n raise InvalidUsage(f\"{method} is not a valid HTTP method.\")\n\n if method in self.handlers:\n raise InvalidUsage(f\"Method {method} is already registered.\")\n self.handlers[method] = handler\n\n def __call__(self, request, *args, **kwargs):\n handler = self.handlers[request.method.upper()]\n return handler(request, *args, **kwargs)\n", "path": "sanic/views.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Iterable,\n List,\n Optional,\n Union,\n)\nfrom warnings import warn\n\nfrom sanic.constants import HTTP_METHODS\nfrom sanic.exceptions import InvalidUsage\n\n\nif TYPE_CHECKING:\n from sanic import Sanic\n from sanic.blueprints import Blueprint\n\n\nclass HTTPMethodView:\n \"\"\"Simple class based implementation of view for the sanic.\n You should implement methods (get, post, put, patch, delete) for the class\n to every HTTP method you want to support.\n\n For example:\n\n .. code-block:: python\n\n class DummyView(HTTPMethodView):\n def get(self, request, *args, **kwargs):\n return text('I am get method')\n def put(self, request, *args, **kwargs):\n return text('I am put method')\n\n If someone tries to use a non-implemented method, there will be a\n 405 response.\n\n If you need any url params just mention them in method definition:\n\n .. 
code-block:: python\n\n class DummyView(HTTPMethodView):\n def get(self, request, my_param_here, *args, **kwargs):\n return text('I am get method with %s' % my_param_here)\n\n To add the view into the routing you could use\n\n 1) ``app.add_route(DummyView.as_view(), '/')``, OR\n 2) ``app.route('/')(DummyView.as_view())``\n\n To add any decorator you could set it into decorators variable\n \"\"\"\n\n decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = []\n\n def __init_subclass__(\n cls,\n attach: Optional[Union[Sanic, Blueprint]] = None,\n uri: str = \"\",\n methods: Iterable[str] = frozenset({\"GET\"}),\n host: Optional[str] = None,\n strict_slashes: Optional[bool] = None,\n version: Optional[int] = None,\n name: Optional[str] = None,\n stream: bool = False,\n version_prefix: str = \"/v\",\n ) -> None:\n if attach:\n cls.attach(\n attach,\n uri=uri,\n methods=methods,\n host=host,\n strict_slashes=strict_slashes,\n version=version,\n name=name,\n stream=stream,\n version_prefix=version_prefix,\n )\n\n def dispatch_request(self, request, *args, **kwargs):\n handler = getattr(self, request.method.lower(), None)\n return handler(request, *args, **kwargs)\n\n @classmethod\n def as_view(cls, *class_args, **class_kwargs):\n \"\"\"Return view function for use with the routing system, that\n dispatches request to appropriate handler method.\n \"\"\"\n\n def view(*args, **kwargs):\n self = view.view_class(*class_args, **class_kwargs)\n return self.dispatch_request(*args, **kwargs)\n\n if cls.decorators:\n view.__module__ = cls.__module__\n for decorator in cls.decorators:\n view = decorator(view)\n\n view.view_class = cls\n view.__doc__ = cls.__doc__\n view.__module__ = cls.__module__\n view.__name__ = cls.__name__\n return view\n\n @classmethod\n def attach(\n cls,\n to: Union[Sanic, Blueprint],\n uri: str,\n methods: Iterable[str] = frozenset({\"GET\"}),\n host: Optional[str] = None,\n strict_slashes: Optional[bool] = None,\n version: Optional[int] = None,\n name: Optional[str] = None,\n stream: bool = False,\n version_prefix: str = \"/v\",\n ) -> None:\n to.add_route(\n cls.as_view(),\n uri=uri,\n methods=methods,\n host=host,\n strict_slashes=strict_slashes,\n version=version,\n name=name,\n stream=stream,\n version_prefix=version_prefix,\n )\n\n\ndef stream(func):\n func.is_stream = True\n return func\n\n\nclass CompositionView:\n \"\"\"Simple method-function mapped view for the sanic.\n You can add handler functions to methods (get, post, put, patch, delete)\n for every HTTP method you want to support.\n\n For example:\n\n .. code-block:: python\n\n view = CompositionView()\n view.add(['GET'], lambda request: text('I am get method'))\n view.add(['POST', 'PUT'], lambda request: text('I am post/put method'))\n\n If someone tries to use a non-implemented method, there will be a\n 405 response.\n \"\"\"\n\n def __init__(self):\n self.handlers = {}\n self.name = self.__class__.__name__\n warn(\n \"CompositionView has been deprecated and will be removed in \"\n \"v21.12. 
Please update your view to HTTPMethodView.\",\n DeprecationWarning,\n )\n\n def __name__(self):\n return self.name\n\n def add(self, methods, handler, stream=False):\n if stream:\n handler.is_stream = stream\n for method in methods:\n if method not in HTTP_METHODS:\n raise InvalidUsage(f\"{method} is not a valid HTTP method.\")\n\n if method in self.handlers:\n raise InvalidUsage(f\"Method {method} is already registered.\")\n self.handlers[method] = handler\n\n def __call__(self, request, *args, **kwargs):\n handler = self.handlers[request.method.upper()]\n return handler(request, *args, **kwargs)\n", "path": "sanic/views.py"}]}
1,516
757
gh_patches_debug_31653
rasdani/github-patches
git_diff
apluslms__a-plus-1326
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Remove the user profile avatar icons from the student group menus The user profile avatar icon was removed from the user profile page in #590. Now the profile icon needs to be removed from the student group menus that include it. We decided to remove the avatar in order to remove the dependency on the external service gravatar.com and Wordpress accounts. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `userprofile/models.py` Content: ``` 1 import hashlib 2 from typing import Any, Optional, TYPE_CHECKING 3 4 from aplus_auth.payload import Permission 5 from django.conf import settings 6 from django.contrib.auth.models import User, AnonymousUser 7 from django.urls import reverse 8 from django.db import models 9 from django.db.models.signals import post_save 10 from django.utils.functional import cached_property 11 from django.utils.translation import gettext_lazy as _ 12 from rest_framework.authtoken.models import Token 13 14 from authorization.object_permissions import ObjectPermissions 15 16 if TYPE_CHECKING: 17 from django.db.models.manager import RelatedManager 18 19 from course.models import CourseInstance 20 from exercise.models import BaseExercise, Submission, SubmissionDraft 21 from external_services.models import LTIService 22 23 24 class UserProfileQuerySet(models.QuerySet['UserProfile']): 25 def prefetch_tags(self, instance: 'CourseInstance', to_attr: str = 'instance_taggings') -> 'UserProfileQuerySet': 26 return self.prefetch_related( 27 models.Prefetch( 28 'taggings', 29 instance.taggings.select_related('tag'), 30 to_attr, 31 ), 32 ) 33 34 35 class UserProfileManager(models.Manager): 36 _queryset_class = UserProfileQuerySet 37 38 def get_queryset(self): 39 return super().get_queryset().select_related("user") 40 41 def prefetch_tags(self, instance: 'CourseInstance', to_attr: str = 'instance_taggings') -> UserProfileQuerySet: 42 return self.all().prefetch_tags(instance, to_attr) 43 44 45 class UserProfile(models.Model): 46 """ 47 Additional user information and methods. 48 """ 49 50 @classmethod 51 def get_by_student_id(cls, student_id, org=settings.LOCAL_ORGANIZATION): 52 # Because student ID is unique only within organization, 53 # also organization needs to be specified. 
54 return cls.objects.get(student_id=student_id, organization=org) 55 56 @classmethod 57 def get_by_email(cls, email): 58 return User.objects.filter(email=email).first().userprofile 59 60 @classmethod 61 def get_by_request(cls, request): 62 user = request.user 63 if user.is_authenticated: 64 return user.userprofile 65 raise RuntimeError("Seeking user profile without authenticated user.") 66 67 user = models.OneToOneField(User, 68 verbose_name=_('LABEL_USER'), 69 on_delete=models.CASCADE, 70 ) 71 language = models.CharField( 72 verbose_name=_('LABEL_LANGUAGE'), 73 max_length=5, 74 blank=True, 75 default='', 76 ) 77 student_id = models.CharField( 78 verbose_name=_('LABEL_STUDENT_ID'), 79 max_length=25, 80 blank=True, null=True, 81 ) 82 organization = models.CharField( 83 verbose_name=_('LABEL_ORGANIZATION'), 84 max_length=80, 85 blank=True, 86 ) 87 objects = UserProfileManager() 88 89 if TYPE_CHECKING: 90 id: models.AutoField 91 submissions: RelatedManager['Submission'] 92 graded_submissions: RelatedManager['Submission'] 93 submission_drafts: RelatedManager['SubmissionDraft'] 94 95 class Meta: 96 verbose_name = _('MODEL_NAME_USER_PROFILE') 97 verbose_name_plural = _('MODEL_NAME_USER_PROFILE_PLURAL') 98 ordering = ['id'] 99 100 def __str__(self): 101 if self.student_id is None: 102 return f"{self.user.username} ({self.user.first_name} {self.user.last_name}, {self.user.email})" 103 return ( 104 f"{self.user.username} ({self.user.first_name} {self.user.last_name}, " 105 f"{self.user.email}, " 106 f"{self.student_id})" 107 ) 108 109 @cached_property 110 def api_token(self): 111 # FIXME: implement support for more than 1 token 112 token, _created = Token.objects.get_or_create(user=self.user) 113 return token.key 114 115 @cached_property 116 def avatar_url(self): 117 """ 118 URL address for gravatar image based on the user email. 119 """ 120 hash_key = hashlib.md5(self.user.email.encode('utf-8')).hexdigest() 121 return "http://www.gravatar.com/avatar/" + hash_key + "?d=identicon" 122 123 @cached_property 124 def shortname(self): 125 """ 126 A short version of the user's name in form "John D." 127 """ 128 try: 129 return self.user.first_name + " " + self.user.last_name[0] + "." 130 except: # pylint: disable=bare-except 131 return self.user.username 132 133 @cached_property 134 def name_with_student_id(self): 135 name = self.user.get_full_name() 136 if self.student_id: 137 return name + ', ' + self.student_id 138 return name 139 140 @cached_property 141 def is_external(self): 142 """ 143 Is this an external rather than internal account. 144 """ 145 return (self.organization != settings.LOCAL_ORGANIZATION) 146 147 def get_url(self, instance): 148 kwargs = dict(user_id=self.user.id, **instance.get_url_kwargs()) # pylint: disable=use-dict-literal 149 return reverse('user-results', kwargs=kwargs) 150 151 def regenerate_api_token(self): 152 # FIXME: implement support for more than 1 token 153 token, created = Token.objects.get_or_create(user=self.user) 154 if not created: 155 token.delete() 156 Token.objects.create(user=self.user) 157 158 159 def create_user_profile(sender, instance, created, **kwargs): # pylint: disable=unused-argument 160 """ 161 This function automatically creates an user profile for all new User models. The profiles 162 are used for extending the User models with domain specific attributes and behavior. 
163 164 @param sender: the signal that invoked the function 165 @param instance: the User object that was just created 166 @param created: a boolean whether the object was created and not just updated 167 """ 168 if created: 169 UserProfile.objects.get_or_create(user=instance) 170 171 172 # Attach to the post_save signal. 173 post_save.connect(create_user_profile, sender=User) 174 175 176 class GraderUser(AnonymousUser): 177 def __init__(self, username: str, permissions: ObjectPermissions, **extra: Any): 178 self.username = username 179 self.permissions = permissions 180 self._extra = extra 181 182 @property 183 def is_anonymous(self): 184 """GraderUser is anonymous, but not AnonymousUser""" 185 return True 186 187 @property 188 def is_authenticated(self): 189 return True 190 191 # A-plus interface 192 @property 193 def userprofile(self): 194 """Compatibilty with User.userprofile""" 195 return self 196 197 198 class LTIServiceUser(GraderUser): 199 def __init__(self, submission: "Submission" = None, 200 exercise: "BaseExercise" = None, 201 lti_service: Optional["LTIService"] = None, 202 user_id: int = None, 203 **kwargs: Any, 204 ) -> None: 205 """exercise and student_id must both be present if one is""" 206 self.lti_service = lti_service 207 permissions = ObjectPermissions() 208 if submission: 209 permissions.submissions.add(Permission.WRITE, submission) 210 if exercise: 211 if user_id: 212 permissions.submissions.add_create(exercise=exercise, user_id=user_id) 213 else: 214 permissions.submissions.add_create(exercise=exercise) 215 super().__init__("LTI", permissions, **kwargs) 216 ``` Path: `course/renders.py` Content: ``` 1 from django.template import loader 2 3 4 def render_avatars(profiles): 5 template = loader.get_template("course/_avatars.html") 6 return template.render({ 'profiles': profiles }) 7 8 9 def group_info_context(group, profile): 10 if not group: 11 return { 'id': None } 12 return { 13 'id': group.id, 14 'collaborators': group.collaborator_names(profile), 15 'avatars': render_avatars(group.members.all()), 16 } 17 18 19 def render_group_info(group, profile): 20 template = loader.get_template("course/_group_info.html") 21 return template.render(group_info_context(group, profile)) 22 ``` Path: `course/templatetags/course.py` Content: ``` 1 from typing import Any, Dict, List, Union 2 from copy import deepcopy 3 4 from django import template 5 from django.db import models 6 from django.utils.safestring import mark_safe 7 from django.utils.translation import get_language 8 9 from exercise.cache.content import CachedContent 10 from course.models import CourseInstance, UserTagging 11 from lib.localization_syntax import pick_localized 12 from userprofile.models import UserProfile 13 from ..cache.menu import CachedTopMenu 14 15 16 register = template.Library() 17 18 19 def _prepare_topmenu(context): 20 if 'topmenu' not in context: 21 request = context.get('request', None) 22 context['topmenu'] = CachedTopMenu(request.user if request else None) 23 return context['topmenu'] 24 25 26 def _deadline_extended_exercise_open(entry, now): 27 personal_deadline = entry.get('personal_deadline') 28 return personal_deadline is not None and entry['opening_time'] <= now <= personal_deadline 29 30 31 @register.inclusion_tag("course/_course_dropdown_menu.html", takes_context=True) 32 def course_menu(context): 33 menu = _prepare_topmenu(context) 34 return { "instances": menu.courses() } 35 36 37 @register.inclusion_tag('course/_group_select.html', takes_context=True) 38 def group_select(context): 39 
instance = context.get('instance', None) 40 if not instance: 41 return { 'groups': [] } 42 menu = _prepare_topmenu(context) 43 groups, selected = menu.groups(instance) 44 return { 45 'instance': instance, 46 'groups': groups, 47 'selected': selected, 48 } 49 50 51 @register.filter 52 def escape_slashes(string): 53 return str(string).replace('/', '\/') # noqa: W605 54 55 @register.filter 56 def parse_localization(entry): 57 return pick_localized(entry, get_language()) 58 59 60 @register.filter 61 def list_unselected(langs): 62 listed = list(filter(lambda x: x and x != get_language(), langs.split("|"))) 63 return listed 64 65 66 @register.filter 67 def is_visible(entry): 68 return CachedContent.is_visible(entry) 69 70 71 @register.filter 72 def is_visible_to(entry, user): 73 return entry.is_visible_to(user) 74 75 76 @register.filter 77 def is_listed(entry): 78 return CachedContent.is_listed(entry) 79 80 81 @register.filter 82 def len_listed(entries): 83 return len([e for e in entries if CachedContent.is_listed(e)]) 84 85 86 @register.filter 87 def is_in_maintenance(entry): 88 return CachedContent.is_in_maintenance(entry) 89 90 91 @register.filter 92 def exercises_open(entry, now): 93 return entry['opening_time'] <= now <= entry['closing_time'] 94 95 96 @register.filter 97 def deadline_extended_exercise_open(entry, now): 98 return _deadline_extended_exercise_open(entry, now) 99 100 101 @register.filter 102 def deadline_extended_exercises_open(entry, now): 103 entries = deepcopy(entry['flatted']) 104 return any(_deadline_extended_exercise_open(entry, now) for entry in entries) 105 106 107 @register.filter 108 def exercises_submittable(entry, now): 109 if entry['late_allowed']: 110 return entry['opening_time'] <= now <= entry['late_time'] 111 return entry['opening_time'] <= now <= entry['closing_time'] 112 113 114 @register.filter 115 def has_opened(entry, now): 116 return entry['opening_time'] <= now 117 118 119 @register.filter 120 def url(model_object, name=None): 121 if name: 122 return model_object.get_url(name) 123 return model_object.get_display_url() 124 125 126 @register.filter 127 def names(profiles): 128 return ", ".join(p.user.get_full_name() for p in profiles) 129 130 131 @register.inclusion_tag('course/_avatars.html') 132 def avatars(profiles): 133 return { 'profiles': profiles } 134 135 136 @register.inclusion_tag("course/_profiles.html") 137 def profiles( 138 profiles: Union[UserProfile, List[UserProfile], models.QuerySet[UserProfile]], 139 instance: CourseInstance, 140 is_teacher: bool 141 ) -> Dict[str, Any]: 142 if isinstance(profiles, UserProfile): 143 profiles = [profiles] 144 elif isinstance(profiles, models.QuerySet): 145 # Avoid re-fetching the queryset 146 profiles = list(profiles) 147 return { 148 'instance': instance, 149 'profiles': profiles, 150 'is_teacher': is_teacher, 151 } 152 153 154 @register.simple_tag 155 def tags(profile, instance): 156 tags = UserTagging.objects.get_all(profile, instance) 157 return mark_safe(' '.join(tag.html_label for tag in tags)) 158 159 160 @register.filter 161 def enrollment_audience(enrollment_audience_val): 162 # convert enrollment audience Enum value to the string description 163 return CourseInstance.ENROLLMENT_AUDIENCE[enrollment_audience_val] 164 165 166 @register.filter 167 def view_content_to(view_content_to_val): 168 # convert "view content to" Enum value to the string description 169 return CourseInstance.VIEW_ACCESS[view_content_to_val] 170 171 172 @register.filter 173 def is_banned_student(profile, course_instance): 174 
return course_instance.is_banned(profile.user) 175 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/course/renders.py b/course/renders.py --- a/course/renders.py +++ b/course/renders.py @@ -1,18 +1,12 @@ from django.template import loader -def render_avatars(profiles): - template = loader.get_template("course/_avatars.html") - return template.render({ 'profiles': profiles }) - - def group_info_context(group, profile): if not group: return { 'id': None } return { 'id': group.id, - 'collaborators': group.collaborator_names(profile), - 'avatars': render_avatars(group.members.all()), + 'collaborators': group.collaborator_names(profile) } diff --git a/course/templatetags/course.py b/course/templatetags/course.py --- a/course/templatetags/course.py +++ b/course/templatetags/course.py @@ -128,11 +128,6 @@ return ", ".join(p.user.get_full_name() for p in profiles) [email protected]_tag('course/_avatars.html') -def avatars(profiles): - return { 'profiles': profiles } - - @register.inclusion_tag("course/_profiles.html") def profiles( profiles: Union[UserProfile, List[UserProfile], models.QuerySet[UserProfile]], diff --git a/userprofile/models.py b/userprofile/models.py --- a/userprofile/models.py +++ b/userprofile/models.py @@ -1,4 +1,3 @@ -import hashlib from typing import Any, Optional, TYPE_CHECKING from aplus_auth.payload import Permission @@ -112,14 +111,6 @@ token, _created = Token.objects.get_or_create(user=self.user) return token.key - @cached_property - def avatar_url(self): - """ - URL address for gravatar image based on the user email. - """ - hash_key = hashlib.md5(self.user.email.encode('utf-8')).hexdigest() - return "http://www.gravatar.com/avatar/" + hash_key + "?d=identicon" - @cached_property def shortname(self): """
{"golden_diff": "diff --git a/course/renders.py b/course/renders.py\n--- a/course/renders.py\n+++ b/course/renders.py\n@@ -1,18 +1,12 @@\n from django.template import loader\n \n \n-def render_avatars(profiles):\n- template = loader.get_template(\"course/_avatars.html\")\n- return template.render({ 'profiles': profiles })\n-\n-\n def group_info_context(group, profile):\n if not group:\n return { 'id': None }\n return {\n 'id': group.id,\n- 'collaborators': group.collaborator_names(profile),\n- 'avatars': render_avatars(group.members.all()),\n+ 'collaborators': group.collaborator_names(profile)\n }\n \n \ndiff --git a/course/templatetags/course.py b/course/templatetags/course.py\n--- a/course/templatetags/course.py\n+++ b/course/templatetags/course.py\n@@ -128,11 +128,6 @@\n return \", \".join(p.user.get_full_name() for p in profiles)\n \n \[email protected]_tag('course/_avatars.html')\n-def avatars(profiles):\n- return { 'profiles': profiles }\n-\n-\n @register.inclusion_tag(\"course/_profiles.html\")\n def profiles(\n profiles: Union[UserProfile, List[UserProfile], models.QuerySet[UserProfile]],\ndiff --git a/userprofile/models.py b/userprofile/models.py\n--- a/userprofile/models.py\n+++ b/userprofile/models.py\n@@ -1,4 +1,3 @@\n-import hashlib\n from typing import Any, Optional, TYPE_CHECKING\n \n from aplus_auth.payload import Permission\n@@ -112,14 +111,6 @@\n token, _created = Token.objects.get_or_create(user=self.user)\n return token.key\n \n- @cached_property\n- def avatar_url(self):\n- \"\"\"\n- URL address for gravatar image based on the user email.\n- \"\"\"\n- hash_key = hashlib.md5(self.user.email.encode('utf-8')).hexdigest()\n- return \"http://www.gravatar.com/avatar/\" + hash_key + \"?d=identicon\"\n-\n @cached_property\n def shortname(self):\n \"\"\"\n", "issue": "Remove the user profile avatar icons from the student group menus\nThe user profile avatar icon was removed from the user profile page in #590. 
Now the profile icon needs to be removed from the student group menus that include it.\r\n\r\nWe decided to remove the avatar in order to remove the dependency on the external service gravatar.com and Wordpress accounts.\n", "before_files": [{"content": "import hashlib\nfrom typing import Any, Optional, TYPE_CHECKING\n\nfrom aplus_auth.payload import Permission\nfrom django.conf import settings\nfrom django.contrib.auth.models import User, AnonymousUser\nfrom django.urls import reverse\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import gettext_lazy as _\nfrom rest_framework.authtoken.models import Token\n\nfrom authorization.object_permissions import ObjectPermissions\n\nif TYPE_CHECKING:\n from django.db.models.manager import RelatedManager\n\n from course.models import CourseInstance\n from exercise.models import BaseExercise, Submission, SubmissionDraft\n from external_services.models import LTIService\n\n\nclass UserProfileQuerySet(models.QuerySet['UserProfile']):\n def prefetch_tags(self, instance: 'CourseInstance', to_attr: str = 'instance_taggings') -> 'UserProfileQuerySet':\n return self.prefetch_related(\n models.Prefetch(\n 'taggings',\n instance.taggings.select_related('tag'),\n to_attr,\n ),\n )\n\n\nclass UserProfileManager(models.Manager):\n _queryset_class = UserProfileQuerySet\n\n def get_queryset(self):\n return super().get_queryset().select_related(\"user\")\n\n def prefetch_tags(self, instance: 'CourseInstance', to_attr: str = 'instance_taggings') -> UserProfileQuerySet:\n return self.all().prefetch_tags(instance, to_attr)\n\n\nclass UserProfile(models.Model):\n \"\"\"\n Additional user information and methods.\n \"\"\"\n\n @classmethod\n def get_by_student_id(cls, student_id, org=settings.LOCAL_ORGANIZATION):\n # Because student ID is unique only within organization,\n # also organization needs to be specified.\n return cls.objects.get(student_id=student_id, organization=org)\n\n @classmethod\n def get_by_email(cls, email):\n return User.objects.filter(email=email).first().userprofile\n\n @classmethod\n def get_by_request(cls, request):\n user = request.user\n if user.is_authenticated:\n return user.userprofile\n raise RuntimeError(\"Seeking user profile without authenticated user.\")\n\n user = models.OneToOneField(User,\n verbose_name=_('LABEL_USER'),\n on_delete=models.CASCADE,\n )\n language = models.CharField(\n verbose_name=_('LABEL_LANGUAGE'),\n max_length=5,\n blank=True,\n default='',\n )\n student_id = models.CharField(\n verbose_name=_('LABEL_STUDENT_ID'),\n max_length=25,\n blank=True, null=True,\n )\n organization = models.CharField(\n verbose_name=_('LABEL_ORGANIZATION'),\n max_length=80,\n blank=True,\n )\n objects = UserProfileManager()\n\n if TYPE_CHECKING:\n id: models.AutoField\n submissions: RelatedManager['Submission']\n graded_submissions: RelatedManager['Submission']\n submission_drafts: RelatedManager['SubmissionDraft']\n\n class Meta:\n verbose_name = _('MODEL_NAME_USER_PROFILE')\n verbose_name_plural = _('MODEL_NAME_USER_PROFILE_PLURAL')\n ordering = ['id']\n\n def __str__(self):\n if self.student_id is None:\n return f\"{self.user.username} ({self.user.first_name} {self.user.last_name}, {self.user.email})\"\n return (\n f\"{self.user.username} ({self.user.first_name} {self.user.last_name}, \"\n f\"{self.user.email}, \"\n f\"{self.student_id})\"\n )\n\n @cached_property\n def api_token(self):\n # FIXME: implement support for 
more than 1 token\n token, _created = Token.objects.get_or_create(user=self.user)\n return token.key\n\n @cached_property\n def avatar_url(self):\n \"\"\"\n URL address for gravatar image based on the user email.\n \"\"\"\n hash_key = hashlib.md5(self.user.email.encode('utf-8')).hexdigest()\n return \"http://www.gravatar.com/avatar/\" + hash_key + \"?d=identicon\"\n\n @cached_property\n def shortname(self):\n \"\"\"\n A short version of the user's name in form \"John D.\"\n \"\"\"\n try:\n return self.user.first_name + \" \" + self.user.last_name[0] + \".\"\n except: # pylint: disable=bare-except\n return self.user.username\n\n @cached_property\n def name_with_student_id(self):\n name = self.user.get_full_name()\n if self.student_id:\n return name + ', ' + self.student_id\n return name\n\n @cached_property\n def is_external(self):\n \"\"\"\n Is this an external rather than internal account.\n \"\"\"\n return (self.organization != settings.LOCAL_ORGANIZATION)\n\n def get_url(self, instance):\n kwargs = dict(user_id=self.user.id, **instance.get_url_kwargs()) # pylint: disable=use-dict-literal\n return reverse('user-results', kwargs=kwargs)\n\n def regenerate_api_token(self):\n # FIXME: implement support for more than 1 token\n token, created = Token.objects.get_or_create(user=self.user)\n if not created:\n token.delete()\n Token.objects.create(user=self.user)\n\n\ndef create_user_profile(sender, instance, created, **kwargs): # pylint: disable=unused-argument\n \"\"\"\n This function automatically creates an user profile for all new User models. The profiles\n are used for extending the User models with domain specific attributes and behavior.\n\n @param sender: the signal that invoked the function\n @param instance: the User object that was just created\n @param created: a boolean whether the object was created and not just updated\n \"\"\"\n if created:\n UserProfile.objects.get_or_create(user=instance)\n\n\n# Attach to the post_save signal.\npost_save.connect(create_user_profile, sender=User)\n\n\nclass GraderUser(AnonymousUser):\n def __init__(self, username: str, permissions: ObjectPermissions, **extra: Any):\n self.username = username\n self.permissions = permissions\n self._extra = extra\n\n @property\n def is_anonymous(self):\n \"\"\"GraderUser is anonymous, but not AnonymousUser\"\"\"\n return True\n\n @property\n def is_authenticated(self):\n return True\n\n # A-plus interface\n @property\n def userprofile(self):\n \"\"\"Compatibilty with User.userprofile\"\"\"\n return self\n\n\nclass LTIServiceUser(GraderUser):\n def __init__(self, submission: \"Submission\" = None,\n exercise: \"BaseExercise\" = None,\n lti_service: Optional[\"LTIService\"] = None,\n user_id: int = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"exercise and student_id must both be present if one is\"\"\"\n self.lti_service = lti_service\n permissions = ObjectPermissions()\n if submission:\n permissions.submissions.add(Permission.WRITE, submission)\n if exercise:\n if user_id:\n permissions.submissions.add_create(exercise=exercise, user_id=user_id)\n else:\n permissions.submissions.add_create(exercise=exercise)\n super().__init__(\"LTI\", permissions, **kwargs)\n", "path": "userprofile/models.py"}, {"content": "from django.template import loader\n\n\ndef render_avatars(profiles):\n template = loader.get_template(\"course/_avatars.html\")\n return template.render({ 'profiles': profiles })\n\n\ndef group_info_context(group, profile):\n if not group:\n return { 'id': None }\n return {\n 'id': group.id,\n 
'collaborators': group.collaborator_names(profile),\n 'avatars': render_avatars(group.members.all()),\n }\n\n\ndef render_group_info(group, profile):\n template = loader.get_template(\"course/_group_info.html\")\n return template.render(group_info_context(group, profile))\n", "path": "course/renders.py"}, {"content": "from typing import Any, Dict, List, Union\nfrom copy import deepcopy\n\nfrom django import template\nfrom django.db import models\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import get_language\n\nfrom exercise.cache.content import CachedContent\nfrom course.models import CourseInstance, UserTagging\nfrom lib.localization_syntax import pick_localized\nfrom userprofile.models import UserProfile\nfrom ..cache.menu import CachedTopMenu\n\n\nregister = template.Library()\n\n\ndef _prepare_topmenu(context):\n if 'topmenu' not in context:\n request = context.get('request', None)\n context['topmenu'] = CachedTopMenu(request.user if request else None)\n return context['topmenu']\n\n\ndef _deadline_extended_exercise_open(entry, now):\n personal_deadline = entry.get('personal_deadline')\n return personal_deadline is not None and entry['opening_time'] <= now <= personal_deadline\n\n\[email protected]_tag(\"course/_course_dropdown_menu.html\", takes_context=True)\ndef course_menu(context):\n menu = _prepare_topmenu(context)\n return { \"instances\": menu.courses() }\n\n\[email protected]_tag('course/_group_select.html', takes_context=True)\ndef group_select(context):\n instance = context.get('instance', None)\n if not instance:\n return { 'groups': [] }\n menu = _prepare_topmenu(context)\n groups, selected = menu.groups(instance)\n return {\n 'instance': instance,\n 'groups': groups,\n 'selected': selected,\n }\n\n\[email protected]\ndef escape_slashes(string):\n return str(string).replace('/', '\\/') # noqa: W605\n\[email protected]\ndef parse_localization(entry):\n return pick_localized(entry, get_language())\n\n\[email protected]\ndef list_unselected(langs):\n listed = list(filter(lambda x: x and x != get_language(), langs.split(\"|\")))\n return listed\n\n\[email protected]\ndef is_visible(entry):\n return CachedContent.is_visible(entry)\n\n\[email protected]\ndef is_visible_to(entry, user):\n return entry.is_visible_to(user)\n\n\[email protected]\ndef is_listed(entry):\n return CachedContent.is_listed(entry)\n\n\[email protected]\ndef len_listed(entries):\n return len([e for e in entries if CachedContent.is_listed(e)])\n\n\[email protected]\ndef is_in_maintenance(entry):\n return CachedContent.is_in_maintenance(entry)\n\n\[email protected]\ndef exercises_open(entry, now):\n return entry['opening_time'] <= now <= entry['closing_time']\n\n\[email protected]\ndef deadline_extended_exercise_open(entry, now):\n return _deadline_extended_exercise_open(entry, now)\n\n\[email protected]\ndef deadline_extended_exercises_open(entry, now):\n entries = deepcopy(entry['flatted'])\n return any(_deadline_extended_exercise_open(entry, now) for entry in entries)\n\n\[email protected]\ndef exercises_submittable(entry, now):\n if entry['late_allowed']:\n return entry['opening_time'] <= now <= entry['late_time']\n return entry['opening_time'] <= now <= entry['closing_time']\n\n\[email protected]\ndef has_opened(entry, now):\n return entry['opening_time'] <= now\n\n\[email protected]\ndef url(model_object, name=None):\n if name:\n return model_object.get_url(name)\n return model_object.get_display_url()\n\n\[email protected]\ndef names(profiles):\n return \", 
\".join(p.user.get_full_name() for p in profiles)\n\n\[email protected]_tag('course/_avatars.html')\ndef avatars(profiles):\n return { 'profiles': profiles }\n\n\[email protected]_tag(\"course/_profiles.html\")\ndef profiles(\n profiles: Union[UserProfile, List[UserProfile], models.QuerySet[UserProfile]],\n instance: CourseInstance,\n is_teacher: bool\n ) -> Dict[str, Any]:\n if isinstance(profiles, UserProfile):\n profiles = [profiles]\n elif isinstance(profiles, models.QuerySet):\n # Avoid re-fetching the queryset\n profiles = list(profiles)\n return {\n 'instance': instance,\n 'profiles': profiles,\n 'is_teacher': is_teacher,\n }\n\n\[email protected]_tag\ndef tags(profile, instance):\n tags = UserTagging.objects.get_all(profile, instance)\n return mark_safe(' '.join(tag.html_label for tag in tags))\n\n\[email protected]\ndef enrollment_audience(enrollment_audience_val):\n # convert enrollment audience Enum value to the string description\n return CourseInstance.ENROLLMENT_AUDIENCE[enrollment_audience_val]\n\n\[email protected]\ndef view_content_to(view_content_to_val):\n # convert \"view content to\" Enum value to the string description\n return CourseInstance.VIEW_ACCESS[view_content_to_val]\n\n\[email protected]\ndef is_banned_student(profile, course_instance):\n return course_instance.is_banned(profile.user)\n", "path": "course/templatetags/course.py"}], "after_files": [{"content": "from typing import Any, Optional, TYPE_CHECKING\n\nfrom aplus_auth.payload import Permission\nfrom django.conf import settings\nfrom django.contrib.auth.models import User, AnonymousUser\nfrom django.urls import reverse\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import gettext_lazy as _\nfrom rest_framework.authtoken.models import Token\n\nfrom authorization.object_permissions import ObjectPermissions\n\nif TYPE_CHECKING:\n from django.db.models.manager import RelatedManager\n\n from course.models import CourseInstance\n from exercise.models import BaseExercise, Submission, SubmissionDraft\n from external_services.models import LTIService\n\n\nclass UserProfileQuerySet(models.QuerySet['UserProfile']):\n def prefetch_tags(self, instance: 'CourseInstance', to_attr: str = 'instance_taggings') -> 'UserProfileQuerySet':\n return self.prefetch_related(\n models.Prefetch(\n 'taggings',\n instance.taggings.select_related('tag'),\n to_attr,\n ),\n )\n\n\nclass UserProfileManager(models.Manager):\n _queryset_class = UserProfileQuerySet\n\n def get_queryset(self):\n return super().get_queryset().select_related(\"user\")\n\n def prefetch_tags(self, instance: 'CourseInstance', to_attr: str = 'instance_taggings') -> UserProfileQuerySet:\n return self.all().prefetch_tags(instance, to_attr)\n\n\nclass UserProfile(models.Model):\n \"\"\"\n Additional user information and methods.\n \"\"\"\n\n @classmethod\n def get_by_student_id(cls, student_id, org=settings.LOCAL_ORGANIZATION):\n # Because student ID is unique only within organization,\n # also organization needs to be specified.\n return cls.objects.get(student_id=student_id, organization=org)\n\n @classmethod\n def get_by_email(cls, email):\n return User.objects.filter(email=email).first().userprofile\n\n @classmethod\n def get_by_request(cls, request):\n user = request.user\n if user.is_authenticated:\n return user.userprofile\n raise RuntimeError(\"Seeking user profile without authenticated user.\")\n\n user = models.OneToOneField(User,\n 
verbose_name=_('LABEL_USER'),\n on_delete=models.CASCADE,\n )\n language = models.CharField(\n verbose_name=_('LABEL_LANGUAGE'),\n max_length=5,\n blank=True,\n default='',\n )\n student_id = models.CharField(\n verbose_name=_('LABEL_STUDENT_ID'),\n max_length=25,\n blank=True, null=True,\n )\n organization = models.CharField(\n verbose_name=_('LABEL_ORGANIZATION'),\n max_length=80,\n blank=True,\n )\n objects = UserProfileManager()\n\n if TYPE_CHECKING:\n id: models.AutoField\n submissions: RelatedManager['Submission']\n graded_submissions: RelatedManager['Submission']\n submission_drafts: RelatedManager['SubmissionDraft']\n\n class Meta:\n verbose_name = _('MODEL_NAME_USER_PROFILE')\n verbose_name_plural = _('MODEL_NAME_USER_PROFILE_PLURAL')\n ordering = ['id']\n\n def __str__(self):\n if self.student_id is None:\n return f\"{self.user.username} ({self.user.first_name} {self.user.last_name}, {self.user.email})\"\n return (\n f\"{self.user.username} ({self.user.first_name} {self.user.last_name}, \"\n f\"{self.user.email}, \"\n f\"{self.student_id})\"\n )\n\n @cached_property\n def api_token(self):\n # FIXME: implement support for more than 1 token\n token, _created = Token.objects.get_or_create(user=self.user)\n return token.key\n\n @cached_property\n def shortname(self):\n \"\"\"\n A short version of the user's name in form \"John D.\"\n \"\"\"\n try:\n return self.user.first_name + \" \" + self.user.last_name[0] + \".\"\n except: # pylint: disable=bare-except\n return self.user.username\n\n @cached_property\n def name_with_student_id(self):\n name = self.user.get_full_name()\n if self.student_id:\n return name + ', ' + self.student_id\n return name\n\n @cached_property\n def is_external(self):\n \"\"\"\n Is this an external rather than internal account.\n \"\"\"\n return (self.organization != settings.LOCAL_ORGANIZATION)\n\n def get_url(self, instance):\n kwargs = dict(user_id=self.user.id, **instance.get_url_kwargs()) # pylint: disable=use-dict-literal\n return reverse('user-results', kwargs=kwargs)\n\n def regenerate_api_token(self):\n # FIXME: implement support for more than 1 token\n token, created = Token.objects.get_or_create(user=self.user)\n if not created:\n token.delete()\n Token.objects.create(user=self.user)\n\n\ndef create_user_profile(sender, instance, created, **kwargs): # pylint: disable=unused-argument\n \"\"\"\n This function automatically creates an user profile for all new User models. 
The profiles\n are used for extending the User models with domain specific attributes and behavior.\n\n @param sender: the signal that invoked the function\n @param instance: the User object that was just created\n @param created: a boolean whether the object was created and not just updated\n \"\"\"\n if created:\n UserProfile.objects.get_or_create(user=instance)\n\n\n# Attach to the post_save signal.\npost_save.connect(create_user_profile, sender=User)\n\n\nclass GraderUser(AnonymousUser):\n def __init__(self, username: str, permissions: ObjectPermissions, **extra: Any):\n self.username = username\n self.permissions = permissions\n self._extra = extra\n\n @property\n def is_anonymous(self):\n \"\"\"GraderUser is anonymous, but not AnonymousUser\"\"\"\n return True\n\n @property\n def is_authenticated(self):\n return True\n\n # A-plus interface\n @property\n def userprofile(self):\n \"\"\"Compatibilty with User.userprofile\"\"\"\n return self\n\n\nclass LTIServiceUser(GraderUser):\n def __init__(self, submission: \"Submission\" = None,\n exercise: \"BaseExercise\" = None,\n lti_service: Optional[\"LTIService\"] = None,\n user_id: int = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"exercise and student_id must both be present if one is\"\"\"\n self.lti_service = lti_service\n permissions = ObjectPermissions()\n if submission:\n permissions.submissions.add(Permission.WRITE, submission)\n if exercise:\n if user_id:\n permissions.submissions.add_create(exercise=exercise, user_id=user_id)\n else:\n permissions.submissions.add_create(exercise=exercise)\n super().__init__(\"LTI\", permissions, **kwargs)\n", "path": "userprofile/models.py"}, {"content": "from django.template import loader\n\n\ndef group_info_context(group, profile):\n if not group:\n return { 'id': None }\n return {\n 'id': group.id,\n 'collaborators': group.collaborator_names(profile)\n }\n\n\ndef render_group_info(group, profile):\n template = loader.get_template(\"course/_group_info.html\")\n return template.render(group_info_context(group, profile))\n", "path": "course/renders.py"}, {"content": "from typing import Any, Dict, List, Union\nfrom copy import deepcopy\n\nfrom django import template\nfrom django.db import models\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import get_language\n\nfrom exercise.cache.content import CachedContent\nfrom course.models import CourseInstance, UserTagging\nfrom lib.localization_syntax import pick_localized\nfrom userprofile.models import UserProfile\nfrom ..cache.menu import CachedTopMenu\n\n\nregister = template.Library()\n\n\ndef _prepare_topmenu(context):\n if 'topmenu' not in context:\n request = context.get('request', None)\n context['topmenu'] = CachedTopMenu(request.user if request else None)\n return context['topmenu']\n\n\ndef _deadline_extended_exercise_open(entry, now):\n personal_deadline = entry.get('personal_deadline')\n return personal_deadline is not None and entry['opening_time'] <= now <= personal_deadline\n\n\[email protected]_tag(\"course/_course_dropdown_menu.html\", takes_context=True)\ndef course_menu(context):\n menu = _prepare_topmenu(context)\n return { \"instances\": menu.courses() }\n\n\[email protected]_tag('course/_group_select.html', takes_context=True)\ndef group_select(context):\n instance = context.get('instance', None)\n if not instance:\n return { 'groups': [] }\n menu = _prepare_topmenu(context)\n groups, selected = menu.groups(instance)\n return {\n 'instance': instance,\n 'groups': groups,\n 'selected': selected,\n 
}\n\n\[email protected]\ndef escape_slashes(string):\n return str(string).replace('/', '\\/') # noqa: W605\n\[email protected]\ndef parse_localization(entry):\n return pick_localized(entry, get_language())\n\n\[email protected]\ndef list_unselected(langs):\n listed = list(filter(lambda x: x and x != get_language(), langs.split(\"|\")))\n return listed\n\n\[email protected]\ndef is_visible(entry):\n return CachedContent.is_visible(entry)\n\n\[email protected]\ndef is_visible_to(entry, user):\n return entry.is_visible_to(user)\n\n\[email protected]\ndef is_listed(entry):\n return CachedContent.is_listed(entry)\n\n\[email protected]\ndef len_listed(entries):\n return len([e for e in entries if CachedContent.is_listed(e)])\n\n\[email protected]\ndef is_in_maintenance(entry):\n return CachedContent.is_in_maintenance(entry)\n\n\[email protected]\ndef exercises_open(entry, now):\n return entry['opening_time'] <= now <= entry['closing_time']\n\n\[email protected]\ndef deadline_extended_exercise_open(entry, now):\n return _deadline_extended_exercise_open(entry, now)\n\n\[email protected]\ndef deadline_extended_exercises_open(entry, now):\n entries = deepcopy(entry['flatted'])\n return any(_deadline_extended_exercise_open(entry, now) for entry in entries)\n\n\[email protected]\ndef exercises_submittable(entry, now):\n if entry['late_allowed']:\n return entry['opening_time'] <= now <= entry['late_time']\n return entry['opening_time'] <= now <= entry['closing_time']\n\n\[email protected]\ndef has_opened(entry, now):\n return entry['opening_time'] <= now\n\n\[email protected]\ndef url(model_object, name=None):\n if name:\n return model_object.get_url(name)\n return model_object.get_display_url()\n\n\[email protected]\ndef names(profiles):\n return \", \".join(p.user.get_full_name() for p in profiles)\n\n\[email protected]_tag(\"course/_profiles.html\")\ndef profiles(\n profiles: Union[UserProfile, List[UserProfile], models.QuerySet[UserProfile]],\n instance: CourseInstance,\n is_teacher: bool\n ) -> Dict[str, Any]:\n if isinstance(profiles, UserProfile):\n profiles = [profiles]\n elif isinstance(profiles, models.QuerySet):\n # Avoid re-fetching the queryset\n profiles = list(profiles)\n return {\n 'instance': instance,\n 'profiles': profiles,\n 'is_teacher': is_teacher,\n }\n\n\[email protected]_tag\ndef tags(profile, instance):\n tags = UserTagging.objects.get_all(profile, instance)\n return mark_safe(' '.join(tag.html_label for tag in tags))\n\n\[email protected]\ndef enrollment_audience(enrollment_audience_val):\n # convert enrollment audience Enum value to the string description\n return CourseInstance.ENROLLMENT_AUDIENCE[enrollment_audience_val]\n\n\[email protected]\ndef view_content_to(view_content_to_val):\n # convert \"view content to\" Enum value to the string description\n return CourseInstance.VIEW_ACCESS[view_content_to_val]\n\n\[email protected]\ndef is_banned_student(profile, course_instance):\n return course_instance.is_banned(profile.user)\n", "path": "course/templatetags/course.py"}]}
4,065
473
gh_patches_debug_6472
rasdani/github-patches
git_diff
spack__spack-11648
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Installation issue: lua-luafilesystem I have a nightly CI build that now dies while trying to do a `spack install lua-luafilesystem`. This occurred upon the the merge of #11528 In other words, I am seeing that `lua-luafilesystem` will not install using commit eb584d8 but will install from 8e3fd3f. This is on CentOS 7 , gcc 8.2.0 ### Steps to reproduce the issue ``` [spack]$ git checkout eb584d8 Previous HEAD position was 8e3fd3f... tty: make tty.* print exception types HEAD is now at eb584d8... refactor: remove unused spack.stage._get_mirrors() function [spack]$ ./bin/spack install lua-luafilesystem ... ==> lua is already installed in /home/osolberg/temp/spack/opt/spack/linux-centos7-x86_64/gcc-8.2.0/lua-5.3.5-fulv52lir6poddzxeogk7rgrqglnkbon ==> Installing lua-luafilesystem ==> Searching for binary cache of lua-luafilesystem ==> Finding buildcaches in /bifx/apps/spack/mirror/build_cache ==> No binary for lua-luafilesystem found: installing from source ==> Using cached archive: /home/osolberg/temp/spack/var/spack/cache/lua-luafilesystem/lua-luafilesystem-1_7_0_2.tar.gz ==> Staging archive: /home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz/v1_7_0_2.tar.gz ==> Created stage in /home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz ==> No patches needed for lua-luafilesystem ==> Building lua-luafilesystem [Package] ==> Error: OSError: [Errno 2] No such file or directory: '/home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz/src' /home/osolberg/temp/spack/lib/spack/spack/package.py:1577, in build_process: 1574 echo = logger.echo 1575 self.log() 1576 >> 1577 # Run post install hooks before build stage is removed. 1578 spack.hooks.post_install(self.spec) 1579 1580 # Stop timer. 
``` ### Platform and user environment ```commandline [spack]$ uname -a Linux bifx1n03.bold.bio 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux [spack]$ ./bin/spack spec --install-status lua-luafilesystem Input spec -------------------------------- - lua-luafilesystem Concretized -------------------------------- - lua-luafilesystem@1_7_0_2%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]~tcltk arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]~symlinks~termlib arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]~darwinssl~gssapi~libssh~libssh2~nghttp2 arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]+systemcerts arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]+optimize+pic+shared arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]+libbsd arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]+bzip2+curses+git~libunistring+libxml2 patches=9acdb4e73f67c241b5ef32505c9ddf7cf6884ca8ea661692f21dca28483b04b8 +tar+xz arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]+shared arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]~python arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected]+jit+multibyte+utf arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 [+] ^[email protected]%[email protected] arch=linux-centos7-x86_64 ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `var/spack/repos/builtin/packages/lua-luafilesystem/package.py` Content: ``` 1 # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other 2 # Spack Project Developers. See the top-level COPYRIGHT file for details. 3 # 4 # SPDX-License-Identifier: (Apache-2.0 OR MIT) 5 6 import os.path 7 8 9 class LuaLuafilesystem(Package): 10 """LuaFileSystem is a Lua library developed to complement the set of 11 functions related to file systems offered by the standard Lua distribution. 12 13 LuaFileSystem offers a portable way to access the underlying directory 14 structure and file attributes. 
15 16 LuaFileSystem is free software and uses the same license as Lua 5.1 17 """ 18 19 homepage = 'http://keplerproject.github.io/luafilesystem' 20 url = 'https://github.com/keplerproject/luafilesystem/archive/v1_6_3.tar.gz' 21 22 version('1_7_0_2', sha256='23b4883aeb4fb90b2d0f338659f33a631f9df7a7e67c54115775a77d4ac3cc59') 23 version('1_6_3', 'bed11874cfded8b4beed7dd054127b24') 24 25 # The version constraint here comes from this post: 26 # 27 # https://www.perforce.com/blog/git-beyond-basics-using-shallow-clones 28 # 29 # where it is claimed that full shallow clone support was added @1.9 30 depends_on('[email protected]:', type='build') 31 extends('lua') 32 33 @property 34 def rockspec(self): 35 version = self.spec.version 36 semver = version[0:3] 37 tweak_level = version[3] if len(version) > 3 else 1 38 fmt = os.path.join( 39 self.stage.path, 40 'luafilesystem-{version.underscored}', 41 'rockspecs', 42 'luafilesystem-{semver.dotted}-{tweak_level}.rockspec' 43 ) 44 return fmt.format( 45 version=version, semver=semver, tweak_level=tweak_level 46 ) 47 48 def install(self, spec, prefix): 49 luarocks('--tree=' + prefix, 'install', self.rockspec) 50 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/var/spack/repos/builtin/packages/lua-luafilesystem/package.py b/var/spack/repos/builtin/packages/lua-luafilesystem/package.py --- a/var/spack/repos/builtin/packages/lua-luafilesystem/package.py +++ b/var/spack/repos/builtin/packages/lua-luafilesystem/package.py @@ -36,8 +36,7 @@ semver = version[0:3] tweak_level = version[3] if len(version) > 3 else 1 fmt = os.path.join( - self.stage.path, - 'luafilesystem-{version.underscored}', + self.stage.source_path, 'rockspecs', 'luafilesystem-{semver.dotted}-{tweak_level}.rockspec' )
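For reference, the patched `rockspec` property that this diff produces (reconstructed from the `after_files` entry in the verification info below) resolves the rockspec directly under `stage.source_path`, the directory where Spack exposes the expanded archive after the staging refactor referenced in the issue (#11528); only the changed property is shown, and the rest of the package class is assumed unchanged:

```python
import os.path


class LuaLuafilesystem(Package):  # only the changed property is shown
    @property
    def rockspec(self):
        version = self.spec.version
        semver = version[0:3]
        tweak_level = version[3] if len(version) > 3 else 1
        # stage.source_path already points inside the expanded archive, so the old
        # os.path.join(self.stage.path, 'luafilesystem-{version.underscored}', ...)
        # prefix is no longer needed -- and no longer exists on disk, which is what
        # produced the "No such file or directory" crash in the issue.
        fmt = os.path.join(
            self.stage.source_path,
            'rockspecs',
            'luafilesystem-{semver.dotted}-{tweak_level}.rockspec'
        )
        return fmt.format(version=version, semver=semver, tweak_level=tweak_level)
```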
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/lua-luafilesystem/package.py b/var/spack/repos/builtin/packages/lua-luafilesystem/package.py\n--- a/var/spack/repos/builtin/packages/lua-luafilesystem/package.py\n+++ b/var/spack/repos/builtin/packages/lua-luafilesystem/package.py\n@@ -36,8 +36,7 @@\n semver = version[0:3]\n tweak_level = version[3] if len(version) > 3 else 1\n fmt = os.path.join(\n- self.stage.path,\n- 'luafilesystem-{version.underscored}',\n+ self.stage.source_path,\n 'rockspecs',\n 'luafilesystem-{semver.dotted}-{tweak_level}.rockspec'\n )\n", "issue": "Installation issue: lua-luafilesystem\nI have a nightly CI build that now dies while trying to do a `spack install lua-luafilesystem`. This occurred upon the the merge of #11528 \r\n\r\nIn other words, I am seeing that `lua-luafilesystem` will not install using commit eb584d8 but will install from 8e3fd3f.\r\n\r\nThis is on CentOS 7 , gcc 8.2.0\r\n\r\n### Steps to reproduce the issue\r\n\r\n```\r\n[spack]$ git checkout eb584d8\r\nPrevious HEAD position was 8e3fd3f... tty: make tty.* print exception types\r\nHEAD is now at eb584d8... refactor: remove unused spack.stage._get_mirrors() function\r\n[spack]$ ./bin/spack install lua-luafilesystem\r\n...\r\n==> lua is already installed in /home/osolberg/temp/spack/opt/spack/linux-centos7-x86_64/gcc-8.2.0/lua-5.3.5-fulv52lir6poddzxeogk7rgrqglnkbon\r\n==> Installing lua-luafilesystem\r\n==> Searching for binary cache of lua-luafilesystem\r\n==> Finding buildcaches in /bifx/apps/spack/mirror/build_cache\r\n==> No binary for lua-luafilesystem found: installing from source\r\n==> Using cached archive: /home/osolberg/temp/spack/var/spack/cache/lua-luafilesystem/lua-luafilesystem-1_7_0_2.tar.gz\r\n==> Staging archive: /home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz/v1_7_0_2.tar.gz\r\n==> Created stage in /home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz\r\n==> No patches needed for lua-luafilesystem\r\n==> Building lua-luafilesystem [Package]\r\n==> Error: OSError: [Errno 2] No such file or directory: '/home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz/src'\r\n\r\n/home/osolberg/temp/spack/lib/spack/spack/package.py:1577, in build_process:\r\n 1574 echo = logger.echo\r\n 1575 self.log()\r\n 1576\r\n >> 1577 # Run post install hooks before build stage is removed.\r\n 1578 spack.hooks.post_install(self.spec)\r\n 1579\r\n 1580 # Stop timer.\r\n\r\n```\r\n\r\n### Platform and user environment\r\n\r\n```commandline\r\n[spack]$ uname -a\r\nLinux bifx1n03.bold.bio 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n[spack]$ ./bin/spack spec --install-status lua-luafilesystem\r\nInput spec\r\n--------------------------------\r\n - lua-luafilesystem\r\n\r\nConcretized\r\n--------------------------------\r\n - lua-luafilesystem@1_7_0_2%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~tcltk arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email 
protected]%[email protected]+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~symlinks~termlib arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~darwinssl~gssapi~libssh~libssh2~nghttp2 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+systemcerts arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+optimize+pic+shared arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+libbsd arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+bzip2+curses+git~libunistring+libxml2 patches=9acdb4e73f67c241b5ef32505c9ddf7cf6884ca8ea661692f21dca28483b04b8 +tar+xz arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+shared arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~python arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+jit+multibyte+utf arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n```\n", "before_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport os.path\n\n\nclass LuaLuafilesystem(Package):\n \"\"\"LuaFileSystem is a Lua library developed to complement the set of\n functions related to file systems offered by the standard Lua distribution.\n\n LuaFileSystem offers a portable way to access the underlying directory\n structure and file attributes.\n\n LuaFileSystem is free software and uses the same license as Lua 5.1\n \"\"\"\n\n homepage = 'http://keplerproject.github.io/luafilesystem'\n url = 'https://github.com/keplerproject/luafilesystem/archive/v1_6_3.tar.gz'\n\n version('1_7_0_2', sha256='23b4883aeb4fb90b2d0f338659f33a631f9df7a7e67c54115775a77d4ac3cc59')\n version('1_6_3', 'bed11874cfded8b4beed7dd054127b24')\n\n # The version constraint here comes from this post:\n #\n # https://www.perforce.com/blog/git-beyond-basics-using-shallow-clones\n #\n # where it is claimed that full shallow clone support was added @1.9\n depends_on('[email protected]:', type='build')\n extends('lua')\n\n @property\n def rockspec(self):\n version = self.spec.version\n semver = version[0:3]\n tweak_level = version[3] if len(version) > 3 else 1\n fmt = os.path.join(\n self.stage.path,\n 'luafilesystem-{version.underscored}',\n 'rockspecs',\n 'luafilesystem-{semver.dotted}-{tweak_level}.rockspec'\n )\n return fmt.format(\n version=version, semver=semver, tweak_level=tweak_level\n )\n\n def install(self, spec, prefix):\n luarocks('--tree=' + prefix, 'install', self.rockspec)\n", "path": "var/spack/repos/builtin/packages/lua-luafilesystem/package.py"}], "after_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport os.path\n\n\nclass LuaLuafilesystem(Package):\n \"\"\"LuaFileSystem is a Lua library developed to complement the set of\n functions related to file systems offered by the standard Lua distribution.\n\n LuaFileSystem offers a portable way to access the underlying directory\n structure and file attributes.\n\n LuaFileSystem is free software and uses the same license as Lua 5.1\n \"\"\"\n\n homepage = 'http://keplerproject.github.io/luafilesystem'\n url = 'https://github.com/keplerproject/luafilesystem/archive/v1_6_3.tar.gz'\n\n version('1_7_0_2', sha256='23b4883aeb4fb90b2d0f338659f33a631f9df7a7e67c54115775a77d4ac3cc59')\n version('1_6_3', 'bed11874cfded8b4beed7dd054127b24')\n\n # The version constraint here comes from this post:\n #\n # https://www.perforce.com/blog/git-beyond-basics-using-shallow-clones\n #\n # where it is claimed that full shallow clone support was added @1.9\n depends_on('[email protected]:', type='build')\n extends('lua')\n\n @property\n def rockspec(self):\n version = self.spec.version\n semver = version[0:3]\n tweak_level = version[3] if len(version) > 3 else 1\n fmt = os.path.join(\n self.stage.source_path,\n 'rockspecs',\n 'luafilesystem-{semver.dotted}-{tweak_level}.rockspec'\n )\n return fmt.format(\n version=version, semver=semver, tweak_level=tweak_level\n )\n\n def install(self, spec, prefix):\n luarocks('--tree=' + prefix, 'install', self.rockspec)\n", "path": "var/spack/repos/builtin/packages/lua-luafilesystem/package.py"}]}
2,895
164
gh_patches_debug_12022
rasdani/github-patches
git_diff
nipy__nipype-2054
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- nipypecli should have --version ### Summary ```shell $> nipypecli --version Error: no such option: --version ``` ### Actual behavior ```shell $> nipypecli --version Error: no such option: --version ``` ### Expected behavior ```shell $> nipypecli --version 0.13.1 ``` ### How to replicate the behavior ```shell $> nipypecli --version Error: no such option: --version ``` ### Script/Workflow details Please put URL to code or code here (if not too long). http://www.onerussian.com/tmp/nipypecli ### Platform details: Please paste the output of: `python -c "import nipype; print(nipype.get_info()); print(nipype.__version__)"` ```shell $> python -c "import nipype; print(nipype.get_info()); print(nipype.__version__)" {'nibabel_version': '2.1.0', 'sys_executable': '/usr/bin/python', 'networkx_version': '1.11', 'numpy_version': '1.12.0', 'sys_platform': 'linux2', 'sys_version': '2.7.13 (default, Jan 19 2017, 14:48:08) \n[GCC 6.3.0 20170118]', 'commit_source': u'archive substitution', 'commit_hash': u'8946bcab9d0e2f24e5364e42d4a7766e00237cb8', 'pkg_path': '/usr/lib/python2.7/dist-packages/nipype', 'nipype_version': u'0.13.1', 'traits_version': '4.6.0', 'scipy_version': '0.18.1'} 0.13.1 ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `nipype/scripts/cli.py` Content: ``` 1 #!python 2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*- 3 # vi: set ft=python sts=4 ts=4 sw=4 et: 4 from io import open 5 6 import click 7 8 from .instance import list_interfaces 9 from .utils import (CONTEXT_SETTINGS, 10 UNKNOWN_OPTIONS, 11 ExistingDirPath, 12 ExistingFilePath, 13 UnexistingFilePath, 14 RegularExpression, 15 PythonModule, 16 check_not_none,) 17 18 19 # declare the CLI group 20 @click.group(context_settings=CONTEXT_SETTINGS) 21 def cli(): 22 pass 23 24 25 @cli.command(context_settings=CONTEXT_SETTINGS) 26 @click.argument('logdir', type=ExistingDirPath, callback=check_not_none) 27 @click.option('-r', '--regex', type=RegularExpression(), callback=check_not_none, 28 help='Regular expression to be searched in each traceback.') 29 def search(logdir, regex): 30 """Search for tracebacks content. 31 32 Search for traceback inside a folder of nipype crash log files that match 33 a given regular expression. 34 35 Examples:\n 36 nipypecli search nipype/wd/log -r '.*subject123.*' 37 """ 38 from .crash_files import iter_tracebacks 39 40 for file, trace in iter_tracebacks(logdir): 41 if regex.search(trace): 42 click.echo("-" * len(file)) 43 click.echo(file) 44 click.echo("-" * len(file)) 45 click.echo(trace) 46 47 48 @cli.command(context_settings=CONTEXT_SETTINGS) 49 @click.argument('crashfile', type=ExistingFilePath, callback=check_not_none) 50 @click.option('-r', '--rerun', is_flag=True, flag_value=True, 51 help='Rerun crashed node.') 52 @click.option('-d', '--debug', is_flag=True, flag_value=True, 53 help='Enable Python debugger when re-executing.') 54 @click.option('-i', '--ipydebug', is_flag=True, flag_value=True, 55 help='Enable IPython debugger when re-executing.') 56 @click.option('-w', '--dir', type=ExistingDirPath, 57 help='Directory where to run the node in.') 58 def crash(crashfile, rerun, debug, ipydebug, dir): 59 """Display Nipype crash files. 60 61 For certain crash files, one can rerun a failed node in a temp directory. 
62 63 Examples:\n 64 nipypecli crash crashfile.pklz\n 65 nipypecli crash crashfile.pklz -r -i\n 66 """ 67 from .crash_files import display_crash_file 68 69 debug = 'ipython' if ipydebug else debug 70 if debug == 'ipython': 71 import sys 72 from IPython.core import ultratb 73 sys.excepthook = ultratb.FormattedTB(mode='Verbose', 74 color_scheme='Linux', 75 call_pdb=1) 76 display_crash_file(crashfile, rerun, debug, dir) 77 78 79 @cli.command(context_settings=CONTEXT_SETTINGS) 80 @click.argument('pklz_file', type=ExistingFilePath, callback=check_not_none) 81 def show(pklz_file): 82 """Print the content of Nipype node .pklz file. 83 84 Examples:\n 85 nipypecli show node.pklz 86 """ 87 from pprint import pprint 88 from ..utils.filemanip import loadpkl 89 90 pkl_data = loadpkl(pklz_file) 91 pprint(pkl_data) 92 93 94 @cli.command(context_settings=UNKNOWN_OPTIONS) 95 @click.argument('module', type=PythonModule(), required=False, 96 callback=check_not_none) 97 @click.argument('interface', type=str, required=False) 98 @click.option('--list', is_flag=True, flag_value=True, 99 help='List the available Interfaces inside the given module.') 100 @click.option('-h', '--help', is_flag=True, flag_value=True, 101 help='Show help message and exit.') 102 @click.pass_context 103 def run(ctx, module, interface, list, help): 104 """Run a Nipype Interface. 105 106 Examples:\n 107 nipypecli run nipype.interfaces.nipy --list\n 108 nipypecli run nipype.interfaces.nipy ComputeMask --help 109 """ 110 import argparse 111 from .utils import add_args_options 112 from ..utils.nipype_cmd import run_instance 113 114 # print run command help if no arguments are given 115 module_given = bool(module) 116 if not module_given: 117 click.echo(ctx.command.get_help(ctx)) 118 119 # print the list of available interfaces for the given module 120 elif (module_given and list) or (module_given and not interface): 121 iface_names = list_interfaces(module) 122 click.echo('Available Interfaces:') 123 for if_name in iface_names: 124 click.echo(' {}'.format(if_name)) 125 126 # check the interface 127 elif (module_given and interface): 128 # create the argument parser 129 description = "Run {}".format(interface) 130 prog = " ".join([ctx.command_path, 131 module.__name__, 132 interface] + ctx.args) 133 iface_parser = argparse.ArgumentParser(description=description, 134 prog=prog) 135 136 # instantiate the interface 137 node = getattr(module, interface)() 138 iface_parser = add_args_options(iface_parser, node) 139 140 if not ctx.args: 141 # print the interface help 142 try: 143 iface_parser.print_help() 144 except: 145 print('An error ocurred when trying to print the full' 146 'command help, printing usage.') 147 finally: 148 iface_parser.print_usage() 149 else: 150 # run the interface 151 args = iface_parser.parse_args(args=ctx.args) 152 run_instance(node, args) 153 154 155 @cli.group() 156 def convert(): 157 """Export nipype interfaces to other formats.""" 158 pass 159 160 161 @convert.command(context_settings=CONTEXT_SETTINGS) 162 @click.option("-i", "--interface", type=str, required=True, 163 help="Name of the Nipype interface to export.") 164 @click.option("-m", "--module", type=PythonModule(), required=True, 165 callback=check_not_none, 166 help="Module where the interface is defined.") 167 @click.option("-o", "--output", type=UnexistingFilePath, required=True, 168 callback=check_not_none, 169 help="JSON file name where the Boutiques descriptor will be written.") 170 @click.option("-t", "--ignored-template-inputs", type=str, multiple=True, 
171 help="Interface inputs ignored in path template creations.") 172 @click.option("-d", "--docker-image", type=str, 173 help="Name of the Docker image where the Nipype interface is available.") 174 @click.option("-r", "--docker-index", type=str, 175 help="Docker index where the Docker image is stored (e.g. http://index.docker.io).") 176 @click.option("-n", "--ignore-template-numbers", is_flag=True, flag_value=True, 177 help="Ignore all numbers in path template creations.") 178 @click.option("-v", "--verbose", is_flag=True, flag_value=True, 179 help="Enable verbose output.") 180 def boutiques(interface, module, output, ignored_template_inputs, 181 docker_image, docker_index, ignore_template_numbers, 182 verbose): 183 """Nipype to Boutiques exporter. 184 185 See Boutiques specification at https://github.com/boutiques/schema. 186 """ 187 from nipype.utils.nipype2boutiques import generate_boutiques_descriptor 188 189 # Generates JSON string 190 json_string = generate_boutiques_descriptor(module, 191 interface, 192 ignored_template_inputs, 193 docker_image, 194 docker_index, 195 verbose, 196 ignore_template_numbers) 197 198 # Writes JSON string to file 199 with open(output, 'w') as f: 200 f.write(json_string) 201 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/nipype/scripts/cli.py b/nipype/scripts/cli.py --- a/nipype/scripts/cli.py +++ b/nipype/scripts/cli.py @@ -15,6 +15,7 @@ PythonModule, check_not_none,) +from .. import __version__ # declare the CLI group @click.group(context_settings=CONTEXT_SETTINGS) @@ -151,6 +152,10 @@ args = iface_parser.parse_args(args=ctx.args) run_instance(node, args) [email protected](context_settings=CONTEXT_SETTINGS) +def version(): + """Print current version of Nipype.""" + click.echo(__version__) @cli.group() def convert():
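Note that the golden diff answers the request with a `version` subcommand (`nipypecli version`) rather than a literal `--version` flag. A minimal standalone sketch of both approaches is below; the version string is a placeholder standing in for nipype's real `from .. import __version__`, and `click.version_option` is click's stock decorator for exposing a true `--version` flag on the group:

```python
import click

__version__ = "0.13.1"  # placeholder; the diff imports this via `from .. import __version__`


@click.group()
@click.version_option(version=__version__)  # enables `nipypecli --version`, as the issue asked
def cli():
    pass


@cli.command()
def version():
    """Print current version of Nipype."""  # this subcommand is what the golden diff adds
    click.echo(__version__)


if __name__ == "__main__":
    cli()
```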
{"golden_diff": "diff --git a/nipype/scripts/cli.py b/nipype/scripts/cli.py\n--- a/nipype/scripts/cli.py\n+++ b/nipype/scripts/cli.py\n@@ -15,6 +15,7 @@\n PythonModule,\n check_not_none,)\n \n+from .. import __version__\n \n # declare the CLI group\n @click.group(context_settings=CONTEXT_SETTINGS)\n@@ -151,6 +152,10 @@\n args = iface_parser.parse_args(args=ctx.args)\n run_instance(node, args)\n \[email protected](context_settings=CONTEXT_SETTINGS)\n+def version():\n+ \"\"\"Print current version of Nipype.\"\"\"\n+ click.echo(__version__)\n \n @cli.group()\n def convert():\n", "issue": "nipypecli should have --version \n### Summary\r\n\r\n```shell\r\n$> nipypecli --version\r\nError: no such option: --version\r\n```\r\n\r\n### Actual behavior\r\n\r\n```shell\r\n$> nipypecli --version\r\nError: no such option: --version\r\n```\r\n### Expected behavior\r\n```shell\r\n$> nipypecli --version\r\n0.13.1\r\n```\r\n### How to replicate the behavior\r\n\r\n```shell\r\n$> nipypecli --version\r\nError: no such option: --version\r\n```\r\n### Script/Workflow details\r\n\r\nPlease put URL to code or code here (if not too long).\r\n\r\nhttp://www.onerussian.com/tmp/nipypecli\r\n\r\n### Platform details:\r\n\r\nPlease paste the output of: `python -c \"import nipype; print(nipype.get_info()); print(nipype.__version__)\"`\r\n\r\n```shell\r\n$> python -c \"import nipype; print(nipype.get_info()); print(nipype.__version__)\" \r\n{'nibabel_version': '2.1.0', 'sys_executable': '/usr/bin/python', 'networkx_version': '1.11', 'numpy_version': '1.12.0', 'sys_platform': 'linux2', 'sys_version': '2.7.13 (default, Jan 19 2017, 14:48:08) \\n[GCC 6.3.0 20170118]', 'commit_source': u'archive substitution', 'commit_hash': u'8946bcab9d0e2f24e5364e42d4a7766e00237cb8', 'pkg_path': '/usr/lib/python2.7/dist-packages/nipype', 'nipype_version': u'0.13.1', 'traits_version': '4.6.0', 'scipy_version': '0.18.1'}\r\n0.13.1\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "#!python\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\nfrom io import open\n\nimport click\n\nfrom .instance import list_interfaces\nfrom .utils import (CONTEXT_SETTINGS,\n UNKNOWN_OPTIONS,\n ExistingDirPath,\n ExistingFilePath,\n UnexistingFilePath,\n RegularExpression,\n PythonModule,\n check_not_none,)\n\n\n# declare the CLI group\[email protected](context_settings=CONTEXT_SETTINGS)\ndef cli():\n pass\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('logdir', type=ExistingDirPath, callback=check_not_none)\[email protected]('-r', '--regex', type=RegularExpression(), callback=check_not_none,\n help='Regular expression to be searched in each traceback.')\ndef search(logdir, regex):\n \"\"\"Search for tracebacks content.\n\n Search for traceback inside a folder of nipype crash log files that match\n a given regular expression.\n\n Examples:\\n\n nipypecli search nipype/wd/log -r '.*subject123.*'\n \"\"\"\n from .crash_files import iter_tracebacks\n\n for file, trace in iter_tracebacks(logdir):\n if regex.search(trace):\n click.echo(\"-\" * len(file))\n click.echo(file)\n click.echo(\"-\" * len(file))\n click.echo(trace)\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('crashfile', type=ExistingFilePath, callback=check_not_none)\[email protected]('-r', '--rerun', is_flag=True, flag_value=True,\n help='Rerun crashed node.')\[email protected]('-d', '--debug', is_flag=True, flag_value=True,\n help='Enable Python debugger when re-executing.')\[email 
protected]('-i', '--ipydebug', is_flag=True, flag_value=True,\n help='Enable IPython debugger when re-executing.')\[email protected]('-w', '--dir', type=ExistingDirPath,\n help='Directory where to run the node in.')\ndef crash(crashfile, rerun, debug, ipydebug, dir):\n \"\"\"Display Nipype crash files.\n\n For certain crash files, one can rerun a failed node in a temp directory.\n\n Examples:\\n\n nipypecli crash crashfile.pklz\\n\n nipypecli crash crashfile.pklz -r -i\\n\n \"\"\"\n from .crash_files import display_crash_file\n\n debug = 'ipython' if ipydebug else debug\n if debug == 'ipython':\n import sys\n from IPython.core import ultratb\n sys.excepthook = ultratb.FormattedTB(mode='Verbose',\n color_scheme='Linux',\n call_pdb=1)\n display_crash_file(crashfile, rerun, debug, dir)\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('pklz_file', type=ExistingFilePath, callback=check_not_none)\ndef show(pklz_file):\n \"\"\"Print the content of Nipype node .pklz file.\n\n Examples:\\n\n nipypecli show node.pklz\n \"\"\"\n from pprint import pprint\n from ..utils.filemanip import loadpkl\n\n pkl_data = loadpkl(pklz_file)\n pprint(pkl_data)\n\n\[email protected](context_settings=UNKNOWN_OPTIONS)\[email protected]('module', type=PythonModule(), required=False,\n callback=check_not_none)\[email protected]('interface', type=str, required=False)\[email protected]('--list', is_flag=True, flag_value=True,\n help='List the available Interfaces inside the given module.')\[email protected]('-h', '--help', is_flag=True, flag_value=True,\n help='Show help message and exit.')\[email protected]_context\ndef run(ctx, module, interface, list, help):\n \"\"\"Run a Nipype Interface.\n\n Examples:\\n\n nipypecli run nipype.interfaces.nipy --list\\n\n nipypecli run nipype.interfaces.nipy ComputeMask --help\n \"\"\"\n import argparse\n from .utils import add_args_options\n from ..utils.nipype_cmd import run_instance\n\n # print run command help if no arguments are given\n module_given = bool(module)\n if not module_given:\n click.echo(ctx.command.get_help(ctx))\n\n # print the list of available interfaces for the given module\n elif (module_given and list) or (module_given and not interface):\n iface_names = list_interfaces(module)\n click.echo('Available Interfaces:')\n for if_name in iface_names:\n click.echo(' {}'.format(if_name))\n\n # check the interface\n elif (module_given and interface):\n # create the argument parser\n description = \"Run {}\".format(interface)\n prog = \" \".join([ctx.command_path,\n module.__name__,\n interface] + ctx.args)\n iface_parser = argparse.ArgumentParser(description=description,\n prog=prog)\n\n # instantiate the interface\n node = getattr(module, interface)()\n iface_parser = add_args_options(iface_parser, node)\n\n if not ctx.args:\n # print the interface help\n try:\n iface_parser.print_help()\n except:\n print('An error ocurred when trying to print the full'\n 'command help, printing usage.')\n finally:\n iface_parser.print_usage()\n else:\n # run the interface\n args = iface_parser.parse_args(args=ctx.args)\n run_instance(node, args)\n\n\[email protected]()\ndef convert():\n \"\"\"Export nipype interfaces to other formats.\"\"\"\n pass\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected](\"-i\", \"--interface\", type=str, required=True,\n help=\"Name of the Nipype interface to export.\")\[email protected](\"-m\", \"--module\", type=PythonModule(), required=True,\n callback=check_not_none,\n help=\"Module where the interface 
is defined.\")\[email protected](\"-o\", \"--output\", type=UnexistingFilePath, required=True,\n callback=check_not_none,\n help=\"JSON file name where the Boutiques descriptor will be written.\")\[email protected](\"-t\", \"--ignored-template-inputs\", type=str, multiple=True,\n help=\"Interface inputs ignored in path template creations.\")\[email protected](\"-d\", \"--docker-image\", type=str,\n help=\"Name of the Docker image where the Nipype interface is available.\")\[email protected](\"-r\", \"--docker-index\", type=str,\n help=\"Docker index where the Docker image is stored (e.g. http://index.docker.io).\")\[email protected](\"-n\", \"--ignore-template-numbers\", is_flag=True, flag_value=True,\n help=\"Ignore all numbers in path template creations.\")\[email protected](\"-v\", \"--verbose\", is_flag=True, flag_value=True,\n help=\"Enable verbose output.\")\ndef boutiques(interface, module, output, ignored_template_inputs,\n docker_image, docker_index, ignore_template_numbers,\n verbose):\n \"\"\"Nipype to Boutiques exporter.\n\n See Boutiques specification at https://github.com/boutiques/schema.\n \"\"\"\n from nipype.utils.nipype2boutiques import generate_boutiques_descriptor\n\n # Generates JSON string\n json_string = generate_boutiques_descriptor(module,\n interface,\n ignored_template_inputs,\n docker_image,\n docker_index,\n verbose,\n ignore_template_numbers)\n\n # Writes JSON string to file\n with open(output, 'w') as f:\n f.write(json_string)\n", "path": "nipype/scripts/cli.py"}], "after_files": [{"content": "#!python\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\nfrom io import open\n\nimport click\n\nfrom .instance import list_interfaces\nfrom .utils import (CONTEXT_SETTINGS,\n UNKNOWN_OPTIONS,\n ExistingDirPath,\n ExistingFilePath,\n UnexistingFilePath,\n RegularExpression,\n PythonModule,\n check_not_none,)\n\nfrom .. 
import __version__\n\n# declare the CLI group\[email protected](context_settings=CONTEXT_SETTINGS)\ndef cli():\n pass\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('logdir', type=ExistingDirPath, callback=check_not_none)\[email protected]('-r', '--regex', type=RegularExpression(), callback=check_not_none,\n help='Regular expression to be searched in each traceback.')\ndef search(logdir, regex):\n \"\"\"Search for tracebacks content.\n\n Search for traceback inside a folder of nipype crash log files that match\n a given regular expression.\n\n Examples:\\n\n nipypecli search nipype/wd/log -r '.*subject123.*'\n \"\"\"\n from .crash_files import iter_tracebacks\n\n for file, trace in iter_tracebacks(logdir):\n if regex.search(trace):\n click.echo(\"-\" * len(file))\n click.echo(file)\n click.echo(\"-\" * len(file))\n click.echo(trace)\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('crashfile', type=ExistingFilePath, callback=check_not_none)\[email protected]('-r', '--rerun', is_flag=True, flag_value=True,\n help='Rerun crashed node.')\[email protected]('-d', '--debug', is_flag=True, flag_value=True,\n help='Enable Python debugger when re-executing.')\[email protected]('-i', '--ipydebug', is_flag=True, flag_value=True,\n help='Enable IPython debugger when re-executing.')\[email protected]('-w', '--dir', type=ExistingDirPath,\n help='Directory where to run the node in.')\ndef crash(crashfile, rerun, debug, ipydebug, dir):\n \"\"\"Display Nipype crash files.\n\n For certain crash files, one can rerun a failed node in a temp directory.\n\n Examples:\\n\n nipypecli crash crashfile.pklz\\n\n nipypecli crash crashfile.pklz -r -i\\n\n \"\"\"\n from .crash_files import display_crash_file\n\n debug = 'ipython' if ipydebug else debug\n if debug == 'ipython':\n import sys\n from IPython.core import ultratb\n sys.excepthook = ultratb.FormattedTB(mode='Verbose',\n color_scheme='Linux',\n call_pdb=1)\n display_crash_file(crashfile, rerun, debug, dir)\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('pklz_file', type=ExistingFilePath, callback=check_not_none)\ndef show(pklz_file):\n \"\"\"Print the content of Nipype node .pklz file.\n\n Examples:\\n\n nipypecli show node.pklz\n \"\"\"\n from pprint import pprint\n from ..utils.filemanip import loadpkl\n\n pkl_data = loadpkl(pklz_file)\n pprint(pkl_data)\n\n\[email protected](context_settings=UNKNOWN_OPTIONS)\[email protected]('module', type=PythonModule(), required=False,\n callback=check_not_none)\[email protected]('interface', type=str, required=False)\[email protected]('--list', is_flag=True, flag_value=True,\n help='List the available Interfaces inside the given module.')\[email protected]('-h', '--help', is_flag=True, flag_value=True,\n help='Show help message and exit.')\[email protected]_context\ndef run(ctx, module, interface, list, help):\n \"\"\"Run a Nipype Interface.\n\n Examples:\\n\n nipypecli run nipype.interfaces.nipy --list\\n\n nipypecli run nipype.interfaces.nipy ComputeMask --help\n \"\"\"\n import argparse\n from .utils import add_args_options\n from ..utils.nipype_cmd import run_instance\n\n # print run command help if no arguments are given\n module_given = bool(module)\n if not module_given:\n click.echo(ctx.command.get_help(ctx))\n\n # print the list of available interfaces for the given module\n elif (module_given and list) or (module_given and not interface):\n iface_names = list_interfaces(module)\n click.echo('Available Interfaces:')\n for if_name 
in iface_names:\n click.echo(' {}'.format(if_name))\n\n # check the interface\n elif (module_given and interface):\n # create the argument parser\n description = \"Run {}\".format(interface)\n prog = \" \".join([ctx.command_path,\n module.__name__,\n interface] + ctx.args)\n iface_parser = argparse.ArgumentParser(description=description,\n prog=prog)\n\n # instantiate the interface\n node = getattr(module, interface)()\n iface_parser = add_args_options(iface_parser, node)\n\n if not ctx.args:\n # print the interface help\n try:\n iface_parser.print_help()\n except:\n print('An error ocurred when trying to print the full'\n 'command help, printing usage.')\n finally:\n iface_parser.print_usage()\n else:\n # run the interface\n args = iface_parser.parse_args(args=ctx.args)\n run_instance(node, args)\n\[email protected](context_settings=CONTEXT_SETTINGS)\ndef version():\n \"\"\"Print current version of Nipype.\"\"\"\n click.echo(__version__)\n\[email protected]()\ndef convert():\n \"\"\"Export nipype interfaces to other formats.\"\"\"\n pass\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected](\"-i\", \"--interface\", type=str, required=True,\n help=\"Name of the Nipype interface to export.\")\[email protected](\"-m\", \"--module\", type=PythonModule(), required=True,\n callback=check_not_none,\n help=\"Module where the interface is defined.\")\[email protected](\"-o\", \"--output\", type=UnexistingFilePath, required=True,\n callback=check_not_none,\n help=\"JSON file name where the Boutiques descriptor will be written.\")\[email protected](\"-t\", \"--ignored-template-inputs\", type=str, multiple=True,\n help=\"Interface inputs ignored in path template creations.\")\[email protected](\"-d\", \"--docker-image\", type=str,\n help=\"Name of the Docker image where the Nipype interface is available.\")\[email protected](\"-r\", \"--docker-index\", type=str,\n help=\"Docker index where the Docker image is stored (e.g. http://index.docker.io).\")\[email protected](\"-n\", \"--ignore-template-numbers\", is_flag=True, flag_value=True,\n help=\"Ignore all numbers in path template creations.\")\[email protected](\"-v\", \"--verbose\", is_flag=True, flag_value=True,\n help=\"Enable verbose output.\")\ndef boutiques(interface, module, output, ignored_template_inputs,\n docker_image, docker_index, ignore_template_numbers,\n verbose):\n \"\"\"Nipype to Boutiques exporter.\n\n See Boutiques specification at https://github.com/boutiques/schema.\n \"\"\"\n from nipype.utils.nipype2boutiques import generate_boutiques_descriptor\n\n # Generates JSON string\n json_string = generate_boutiques_descriptor(module,\n interface,\n ignored_template_inputs,\n docker_image,\n docker_index,\n verbose,\n ignore_template_numbers)\n\n # Writes JSON string to file\n with open(output, 'w') as f:\n f.write(json_string)\n", "path": "nipype/scripts/cli.py"}]}
2,815
156
gh_patches_debug_27790
rasdani/github-patches
git_diff
great-expectations__great_expectations-6871
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Use cleaner solution for non-truncating division in python 2 Prefer `from __future__ import division` to `1.*x/y` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `great_expectations/expectations/core/expect_column_values_to_not_match_regex.py` Content: ``` 1 from typing import TYPE_CHECKING, List, Optional 2 3 from great_expectations.core import ( 4 ExpectationConfiguration, 5 ExpectationValidationResult, 6 ) 7 from great_expectations.expectations.expectation import ( 8 ColumnMapExpectation, 9 InvalidExpectationConfigurationError, 10 render_evaluation_parameter_string, 11 ) 12 from great_expectations.render import ( 13 LegacyDescriptiveRendererType, 14 LegacyRendererType, 15 RenderedStringTemplateContent, 16 ) 17 from great_expectations.render.renderer.renderer import renderer 18 from great_expectations.render.renderer_configuration import ( 19 RendererConfiguration, 20 RendererValueType, 21 ) 22 from great_expectations.render.util import ( 23 num_to_str, 24 parse_row_condition_string_pandas_engine, 25 substitute_none_for_missing, 26 ) 27 from great_expectations.rule_based_profiler.config.base import ( 28 ParameterBuilderConfig, 29 RuleBasedProfilerConfig, 30 ) 31 from great_expectations.rule_based_profiler.parameter_container import ( 32 DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME, 33 FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY, 34 FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER, 35 FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY, 36 PARAMETER_KEY, 37 VARIABLES_KEY, 38 ) 39 40 if TYPE_CHECKING: 41 from great_expectations.render.renderer_configuration import AddParamArgs 42 43 try: 44 import sqlalchemy as sa # noqa: F401 45 except ImportError: 46 pass 47 48 49 class ExpectColumnValuesToNotMatchRegex(ColumnMapExpectation): 50 """Expect the column entries to be strings that do NOT match a given regular expression. 51 52 The regex must not match \ 53 any portion of the provided string. For example, "[at]+" would identify the following strings as expected: \ 54 "fish", "dog", and the following as unexpected: "cat", "hat". 55 56 expect_column_values_to_not_match_regex is a \ 57 [Column Map Expectation](https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations). 58 59 Args: 60 column (str): \ 61 The column name. 62 regex (str): \ 63 The regular expression the column entries should NOT match. 64 65 Keyword Args: 66 mostly (None or a float between 0 and 1): \ 67 Successful if at least mostly fraction of values match the expectation. \ 68 For more detail, see [mostly](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#mostly). 69 70 Other Parameters: 71 result_format (str or None): \ 72 Which output mode to use: BOOLEAN_ONLY, BASIC, COMPLETE, or SUMMARY. \ 73 For more detail, see [result_format](https://docs.greatexpectations.io/docs/reference/expectations/result_format). 74 include_config (boolean): \ 75 If True, then include the expectation config as part of the result object. 76 catch_exceptions (boolean or None): \ 77 If True, then catch exceptions and include them as part of the result object. \ 78 For more detail, see [catch_exceptions](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#catch_exceptions). 
79 meta (dict or None): \ 80 A JSON-serializable dictionary (nesting allowed) that will be included in the output without \ 81 modification. For more detail, see [meta](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#meta). 82 83 Returns: 84 An [ExpectationSuiteValidationResult](https://docs.greatexpectations.io/docs/terms/validation_result) 85 86 Exact fields vary depending on the values passed to result_format, include_config, catch_exceptions, and meta. 87 88 See Also: 89 [expect_column_values_to_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_match_regex) 90 [expect_column_values_to_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_match_regex_list) 91 [expect_column_values_to_not_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex_list) 92 [expect_column_values_to_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern) 93 [expect_column_values_to_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern_list) 94 [expect_column_values_to_not_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern) 95 [expect_column_values_to_not_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern_list) 96 """ 97 98 library_metadata = { 99 "maturity": "production", 100 "tags": ["core expectation", "column map expectation"], 101 "contributors": [ 102 "@great_expectations", 103 ], 104 "requirements": [], 105 "has_full_test_suite": True, 106 "manually_reviewed_code": True, 107 } 108 109 map_metric = "column_values.not_match_regex" 110 success_keys = ( 111 "regex", 112 "mostly", 113 "auto", 114 "profiler_config", 115 ) 116 117 regex_pattern_string_parameter_builder_config: ParameterBuilderConfig = ( 118 ParameterBuilderConfig( 119 module_name="great_expectations.rule_based_profiler.parameter_builder", 120 class_name="RegexPatternStringParameterBuilder", 121 name="regex_pattern_string_parameter_builder", 122 metric_domain_kwargs=DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME, 123 metric_value_kwargs=None, 124 evaluation_parameter_builder_configs=None, 125 ) 126 ) 127 validation_parameter_builder_configs: List[ParameterBuilderConfig] = [ 128 regex_pattern_string_parameter_builder_config 129 ] 130 default_profiler_config = RuleBasedProfilerConfig( 131 name="expect_column_values_to_not_match_regex", # Convention: use "expectation_type" as profiler name. 
132 config_version=1.0, 133 variables={}, 134 rules={ 135 "default_expect_column_values_to_not_match_regex_rule": { 136 "variables": { 137 "mostly": 1.0, 138 }, 139 "domain_builder": { 140 "class_name": "ColumnDomainBuilder", 141 "module_name": "great_expectations.rule_based_profiler.domain_builder", 142 }, 143 "expectation_configuration_builders": [ 144 { 145 "expectation_type": "expect_column_values_to_not_match_regex", 146 "class_name": "DefaultExpectationConfigurationBuilder", 147 "module_name": "great_expectations.rule_based_profiler.expectation_configuration_builder", 148 "validation_parameter_builder_configs": validation_parameter_builder_configs, 149 "column": f"{DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}column", 150 "regex": f"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY}", 151 "mostly": f"{VARIABLES_KEY}mostly", 152 "meta": { 153 "profiler_details": f"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY}", 154 }, 155 }, 156 ], 157 }, 158 }, 159 ) 160 161 default_kwarg_values = { 162 "row_condition": None, 163 "condition_parser": None, # we expect this to be explicitly set whenever a row_condition is passed 164 "mostly": 1, 165 "result_format": "BASIC", 166 "include_config": True, 167 "catch_exceptions": True, 168 "auto": False, 169 "profiler_config": default_profiler_config, 170 } 171 args_keys = ( 172 "column", 173 "regex", 174 ) 175 176 def validate_configuration( 177 self, configuration: Optional[ExpectationConfiguration] = None 178 ) -> None: 179 super().validate_configuration(configuration) 180 configuration = configuration or self.configuration 181 try: 182 assert "regex" in configuration.kwargs, "regex is required" 183 assert isinstance( 184 configuration.kwargs["regex"], (str, dict) 185 ), "regex must be a string" 186 if isinstance(configuration.kwargs["regex"], dict): 187 assert ( 188 "$PARAMETER" in configuration.kwargs["regex"] 189 ), 'Evaluation Parameter dict for regex kwarg must have "$PARAMETER" key.' 190 except AssertionError as e: 191 raise InvalidExpectationConfigurationError(str(e)) 192 193 @classmethod 194 def _prescriptive_template( 195 cls, 196 renderer_configuration: RendererConfiguration, 197 ) -> RendererConfiguration: 198 add_param_args: AddParamArgs = ( 199 ("column", RendererValueType.STRING), 200 ("regex", RendererValueType.STRING), 201 ("mostly", RendererValueType.NUMBER), 202 ) 203 for name, param_type in add_param_args: 204 renderer_configuration.add_param(name=name, param_type=param_type) 205 206 params = renderer_configuration.params 207 208 if not params.regex: 209 template_str = ( 210 "values must not match a regular expression but none was specified." 211 ) 212 else: 213 if renderer_configuration.include_column_name: 214 template_str = ( 215 "$column values must not match this regular expression: $regex" 216 ) 217 else: 218 template_str = "values must not match this regular expression: $regex" 219 220 if params.mostly and params.mostly.value < 1.0: 221 renderer_configuration = cls._add_mostly_pct_param( 222 renderer_configuration=renderer_configuration 223 ) 224 template_str += ", at least $mostly_pct % of the time." 225 else: 226 template_str += "." 
227 228 renderer_configuration.template_str = template_str 229 230 return renderer_configuration 231 232 @classmethod 233 @renderer(renderer_type=LegacyRendererType.PRESCRIPTIVE) 234 @render_evaluation_parameter_string 235 def _prescriptive_renderer( 236 cls, 237 configuration: Optional[ExpectationConfiguration] = None, 238 result: Optional[ExpectationValidationResult] = None, 239 runtime_configuration: Optional[dict] = None, 240 **kwargs, 241 ): 242 runtime_configuration = runtime_configuration or {} 243 include_column_name = ( 244 False if runtime_configuration.get("include_column_name") is False else True 245 ) 246 styling = runtime_configuration.get("styling") 247 params = substitute_none_for_missing( 248 configuration.kwargs, 249 ["column", "regex", "mostly", "row_condition", "condition_parser"], 250 ) 251 252 if not params.get("regex"): 253 template_str = ( 254 "values must not match a regular expression but none was specified." 255 ) 256 else: 257 if params["mostly"] is not None and params["mostly"] < 1.0: 258 params["mostly_pct"] = num_to_str( 259 params["mostly"] * 100, precision=15, no_scientific=True 260 ) 261 # params["mostly_pct"] = "{:.14f}".format(params["mostly"]*100).rstrip("0").rstrip(".") 262 if include_column_name: 263 template_str = "$column values must not match this regular expression: $regex, at least $mostly_pct % of the time." 264 else: 265 template_str = "values must not match this regular expression: $regex, at least $mostly_pct % of the time." 266 else: 267 if include_column_name: 268 template_str = ( 269 "$column values must not match this regular expression: $regex." 270 ) 271 else: 272 template_str = ( 273 "values must not match this regular expression: $regex." 274 ) 275 276 if params["row_condition"] is not None: 277 ( 278 conditional_template_str, 279 conditional_params, 280 ) = parse_row_condition_string_pandas_engine(params["row_condition"]) 281 template_str = f"{conditional_template_str}, then {template_str}" 282 params.update(conditional_params) 283 284 return [ 285 RenderedStringTemplateContent( 286 **{ 287 "content_block_type": "string_template", 288 "string_template": { 289 "template": template_str, 290 "params": params, 291 "styling": styling, 292 }, 293 } 294 ) 295 ] 296 297 @classmethod 298 @renderer( 299 renderer_type=LegacyDescriptiveRendererType.COLUMN_PROPERTIES_TABLE_REGEX_COUNT_ROW 300 ) 301 def _descriptive_column_properties_table_regex_count_row_renderer( 302 cls, 303 configuration: Optional[ExpectationConfiguration] = None, 304 result: Optional[ExpectationValidationResult] = None, 305 runtime_configuration: Optional[dict] = None, 306 **kwargs, 307 ): 308 assert result, "Must pass in result." 309 expectation_config = configuration or result.expectation_config 310 expectation_kwargs = expectation_config.kwargs 311 regex = expectation_kwargs.get("regex") 312 unexpected_count = result.result.get("unexpected_count", "--") 313 if regex == "^\\s+|\\s+$": 314 return ["Leading or trailing whitespace (n)", unexpected_count] 315 else: 316 return [f"Regex: {regex}", unexpected_count] 317 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py b/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py --- a/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py +++ b/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py @@ -176,13 +176,28 @@ def validate_configuration( self, configuration: Optional[ExpectationConfiguration] = None ) -> None: + """ + Validates the configuration of an Expectation. + + For `expect_column_values_to_not_match_regex` it is required that: + - 'regex' kwarg is of type str or dict + - if dict, assert a key "$PARAMETER" is present + + Args: + configuration: An `ExpectationConfiguration` to validate. If no configuration is provided, it will be pulled + from the configuration attribute of the Expectation instance. + + Raises: + `InvalidExpectationConfigurationError`: The configuration does not contain the values required by the + Expectation." + """ super().validate_configuration(configuration) configuration = configuration or self.configuration try: assert "regex" in configuration.kwargs, "regex is required" assert isinstance( configuration.kwargs["regex"], (str, dict) - ), "regex must be a string" + ), "regex must be a string or dict" if isinstance(configuration.kwargs["regex"], dict): assert ( "$PARAMETER" in configuration.kwargs["regex"]
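The substantive change in this diff is the new `validate_configuration` docstring plus the corrected assertion message ("regex must be a string or dict"). A standalone distillation of the check that method performs is sketched below, with a stand-in exception class used in place of great_expectations' own type:

```python
class InvalidExpectationConfigurationError(Exception):
    """Stand-in for great_expectations' exception of the same name."""


def validate_regex_kwarg(kwargs: dict) -> None:
    # Mirrors ExpectColumnValuesToNotMatchRegex.validate_configuration:
    # 'regex' must be present, must be a str or dict, and the dict form must
    # carry an evaluation-parameter reference under the "$PARAMETER" key.
    try:
        assert "regex" in kwargs, "regex is required"
        assert isinstance(kwargs["regex"], (str, dict)), "regex must be a string or dict"
        if isinstance(kwargs["regex"], dict):
            assert "$PARAMETER" in kwargs["regex"], (
                'Evaluation Parameter dict for regex kwarg must have "$PARAMETER" key.'
            )
    except AssertionError as e:
        raise InvalidExpectationConfigurationError(str(e))


validate_regex_kwarg({"column": "animals", "regex": r"[at]+"})             # passes
validate_regex_kwarg({"column": "animals", "regex": {"$PARAMETER": "p"}})  # passes
```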
{"golden_diff": "diff --git a/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py b/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py\n--- a/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py\n+++ b/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py\n@@ -176,13 +176,28 @@\n def validate_configuration(\n self, configuration: Optional[ExpectationConfiguration] = None\n ) -> None:\n+ \"\"\"\n+ Validates the configuration of an Expectation.\n+\n+ For `expect_column_values_to_not_match_regex` it is required that:\n+ - 'regex' kwarg is of type str or dict\n+ - if dict, assert a key \"$PARAMETER\" is present\n+\n+ Args:\n+ configuration: An `ExpectationConfiguration` to validate. If no configuration is provided, it will be pulled\n+ from the configuration attribute of the Expectation instance.\n+\n+ Raises:\n+ `InvalidExpectationConfigurationError`: The configuration does not contain the values required by the\n+ Expectation.\"\n+ \"\"\"\n super().validate_configuration(configuration)\n configuration = configuration or self.configuration\n try:\n assert \"regex\" in configuration.kwargs, \"regex is required\"\n assert isinstance(\n configuration.kwargs[\"regex\"], (str, dict)\n- ), \"regex must be a string\"\n+ ), \"regex must be a string or dict\"\n if isinstance(configuration.kwargs[\"regex\"], dict):\n assert (\n \"$PARAMETER\" in configuration.kwargs[\"regex\"]\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "from typing import TYPE_CHECKING, List, Optional\n\nfrom great_expectations.core import (\n ExpectationConfiguration,\n ExpectationValidationResult,\n)\nfrom great_expectations.expectations.expectation import (\n ColumnMapExpectation,\n InvalidExpectationConfigurationError,\n render_evaluation_parameter_string,\n)\nfrom great_expectations.render import (\n LegacyDescriptiveRendererType,\n LegacyRendererType,\n RenderedStringTemplateContent,\n)\nfrom great_expectations.render.renderer.renderer import renderer\nfrom great_expectations.render.renderer_configuration import (\n RendererConfiguration,\n RendererValueType,\n)\nfrom great_expectations.render.util import (\n num_to_str,\n parse_row_condition_string_pandas_engine,\n substitute_none_for_missing,\n)\nfrom great_expectations.rule_based_profiler.config.base import (\n ParameterBuilderConfig,\n RuleBasedProfilerConfig,\n)\nfrom great_expectations.rule_based_profiler.parameter_container import (\n DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,\n FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY,\n FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER,\n FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY,\n PARAMETER_KEY,\n VARIABLES_KEY,\n)\n\nif TYPE_CHECKING:\n from great_expectations.render.renderer_configuration import AddParamArgs\n\ntry:\n import sqlalchemy as sa # noqa: F401\nexcept ImportError:\n pass\n\n\nclass ExpectColumnValuesToNotMatchRegex(ColumnMapExpectation):\n \"\"\"Expect the column entries to be strings that do NOT match a given regular expression.\n\n The regex must not match \\\n any portion of the provided string. 
For example, \"[at]+\" would identify the following strings as expected: \\\n \"fish\", \"dog\", and the following as unexpected: \"cat\", \"hat\".\n\n expect_column_values_to_not_match_regex is a \\\n [Column Map Expectation](https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations).\n\n Args:\n column (str): \\\n The column name.\n regex (str): \\\n The regular expression the column entries should NOT match.\n\n Keyword Args:\n mostly (None or a float between 0 and 1): \\\n Successful if at least mostly fraction of values match the expectation. \\\n For more detail, see [mostly](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#mostly).\n\n Other Parameters:\n result_format (str or None): \\\n Which output mode to use: BOOLEAN_ONLY, BASIC, COMPLETE, or SUMMARY. \\\n For more detail, see [result_format](https://docs.greatexpectations.io/docs/reference/expectations/result_format).\n include_config (boolean): \\\n If True, then include the expectation config as part of the result object.\n catch_exceptions (boolean or None): \\\n If True, then catch exceptions and include them as part of the result object. \\\n For more detail, see [catch_exceptions](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#catch_exceptions).\n meta (dict or None): \\\n A JSON-serializable dictionary (nesting allowed) that will be included in the output without \\\n modification. For more detail, see [meta](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#meta).\n\n Returns:\n An [ExpectationSuiteValidationResult](https://docs.greatexpectations.io/docs/terms/validation_result)\n\n Exact fields vary depending on the values passed to result_format, include_config, catch_exceptions, and meta.\n\n See Also:\n [expect_column_values_to_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_match_regex)\n [expect_column_values_to_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_match_regex_list)\n [expect_column_values_to_not_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex_list)\n [expect_column_values_to_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern)\n [expect_column_values_to_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern_list)\n [expect_column_values_to_not_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern)\n [expect_column_values_to_not_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern_list)\n \"\"\"\n\n library_metadata = {\n \"maturity\": \"production\",\n \"tags\": [\"core expectation\", \"column map expectation\"],\n \"contributors\": [\n \"@great_expectations\",\n ],\n \"requirements\": [],\n \"has_full_test_suite\": True,\n \"manually_reviewed_code\": True,\n }\n\n map_metric = \"column_values.not_match_regex\"\n success_keys = (\n \"regex\",\n \"mostly\",\n \"auto\",\n \"profiler_config\",\n )\n\n regex_pattern_string_parameter_builder_config: ParameterBuilderConfig = (\n ParameterBuilderConfig(\n module_name=\"great_expectations.rule_based_profiler.parameter_builder\",\n class_name=\"RegexPatternStringParameterBuilder\",\n name=\"regex_pattern_string_parameter_builder\",\n 
metric_domain_kwargs=DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,\n metric_value_kwargs=None,\n evaluation_parameter_builder_configs=None,\n )\n )\n validation_parameter_builder_configs: List[ParameterBuilderConfig] = [\n regex_pattern_string_parameter_builder_config\n ]\n default_profiler_config = RuleBasedProfilerConfig(\n name=\"expect_column_values_to_not_match_regex\", # Convention: use \"expectation_type\" as profiler name.\n config_version=1.0,\n variables={},\n rules={\n \"default_expect_column_values_to_not_match_regex_rule\": {\n \"variables\": {\n \"mostly\": 1.0,\n },\n \"domain_builder\": {\n \"class_name\": \"ColumnDomainBuilder\",\n \"module_name\": \"great_expectations.rule_based_profiler.domain_builder\",\n },\n \"expectation_configuration_builders\": [\n {\n \"expectation_type\": \"expect_column_values_to_not_match_regex\",\n \"class_name\": \"DefaultExpectationConfigurationBuilder\",\n \"module_name\": \"great_expectations.rule_based_profiler.expectation_configuration_builder\",\n \"validation_parameter_builder_configs\": validation_parameter_builder_configs,\n \"column\": f\"{DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}column\",\n \"regex\": f\"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY}\",\n \"mostly\": f\"{VARIABLES_KEY}mostly\",\n \"meta\": {\n \"profiler_details\": f\"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY}\",\n },\n },\n ],\n },\n },\n )\n\n default_kwarg_values = {\n \"row_condition\": None,\n \"condition_parser\": None, # we expect this to be explicitly set whenever a row_condition is passed\n \"mostly\": 1,\n \"result_format\": \"BASIC\",\n \"include_config\": True,\n \"catch_exceptions\": True,\n \"auto\": False,\n \"profiler_config\": default_profiler_config,\n }\n args_keys = (\n \"column\",\n \"regex\",\n )\n\n def validate_configuration(\n self, configuration: Optional[ExpectationConfiguration] = None\n ) -> None:\n super().validate_configuration(configuration)\n configuration = configuration or self.configuration\n try:\n assert \"regex\" in configuration.kwargs, \"regex is required\"\n assert isinstance(\n configuration.kwargs[\"regex\"], (str, dict)\n ), \"regex must be a string\"\n if isinstance(configuration.kwargs[\"regex\"], dict):\n assert (\n \"$PARAMETER\" in configuration.kwargs[\"regex\"]\n ), 'Evaluation Parameter dict for regex kwarg must have \"$PARAMETER\" key.'\n except AssertionError as e:\n raise InvalidExpectationConfigurationError(str(e))\n\n @classmethod\n def _prescriptive_template(\n cls,\n renderer_configuration: RendererConfiguration,\n ) -> RendererConfiguration:\n add_param_args: AddParamArgs = (\n (\"column\", RendererValueType.STRING),\n (\"regex\", RendererValueType.STRING),\n (\"mostly\", RendererValueType.NUMBER),\n )\n for name, param_type in add_param_args:\n renderer_configuration.add_param(name=name, param_type=param_type)\n\n params = renderer_configuration.params\n\n if not params.regex:\n template_str = (\n \"values must not match a regular expression but none was specified.\"\n )\n else:\n if renderer_configuration.include_column_name:\n template_str = (\n \"$column values must not match this regular expression: $regex\"\n )\n else:\n template_str = \"values must not match this regular expression: $regex\"\n\n if 
params.mostly and params.mostly.value < 1.0:\n renderer_configuration = cls._add_mostly_pct_param(\n renderer_configuration=renderer_configuration\n )\n template_str += \", at least $mostly_pct % of the time.\"\n else:\n template_str += \".\"\n\n renderer_configuration.template_str = template_str\n\n return renderer_configuration\n\n @classmethod\n @renderer(renderer_type=LegacyRendererType.PRESCRIPTIVE)\n @render_evaluation_parameter_string\n def _prescriptive_renderer(\n cls,\n configuration: Optional[ExpectationConfiguration] = None,\n result: Optional[ExpectationValidationResult] = None,\n runtime_configuration: Optional[dict] = None,\n **kwargs,\n ):\n runtime_configuration = runtime_configuration or {}\n include_column_name = (\n False if runtime_configuration.get(\"include_column_name\") is False else True\n )\n styling = runtime_configuration.get(\"styling\")\n params = substitute_none_for_missing(\n configuration.kwargs,\n [\"column\", \"regex\", \"mostly\", \"row_condition\", \"condition_parser\"],\n )\n\n if not params.get(\"regex\"):\n template_str = (\n \"values must not match a regular expression but none was specified.\"\n )\n else:\n if params[\"mostly\"] is not None and params[\"mostly\"] < 1.0:\n params[\"mostly_pct\"] = num_to_str(\n params[\"mostly\"] * 100, precision=15, no_scientific=True\n )\n # params[\"mostly_pct\"] = \"{:.14f}\".format(params[\"mostly\"]*100).rstrip(\"0\").rstrip(\".\")\n if include_column_name:\n template_str = \"$column values must not match this regular expression: $regex, at least $mostly_pct % of the time.\"\n else:\n template_str = \"values must not match this regular expression: $regex, at least $mostly_pct % of the time.\"\n else:\n if include_column_name:\n template_str = (\n \"$column values must not match this regular expression: $regex.\"\n )\n else:\n template_str = (\n \"values must not match this regular expression: $regex.\"\n )\n\n if params[\"row_condition\"] is not None:\n (\n conditional_template_str,\n conditional_params,\n ) = parse_row_condition_string_pandas_engine(params[\"row_condition\"])\n template_str = f\"{conditional_template_str}, then {template_str}\"\n params.update(conditional_params)\n\n return [\n RenderedStringTemplateContent(\n **{\n \"content_block_type\": \"string_template\",\n \"string_template\": {\n \"template\": template_str,\n \"params\": params,\n \"styling\": styling,\n },\n }\n )\n ]\n\n @classmethod\n @renderer(\n renderer_type=LegacyDescriptiveRendererType.COLUMN_PROPERTIES_TABLE_REGEX_COUNT_ROW\n )\n def _descriptive_column_properties_table_regex_count_row_renderer(\n cls,\n configuration: Optional[ExpectationConfiguration] = None,\n result: Optional[ExpectationValidationResult] = None,\n runtime_configuration: Optional[dict] = None,\n **kwargs,\n ):\n assert result, \"Must pass in result.\"\n expectation_config = configuration or result.expectation_config\n expectation_kwargs = expectation_config.kwargs\n regex = expectation_kwargs.get(\"regex\")\n unexpected_count = result.result.get(\"unexpected_count\", \"--\")\n if regex == \"^\\\\s+|\\\\s+$\":\n return [\"Leading or trailing whitespace (n)\", unexpected_count]\n else:\n return [f\"Regex: {regex}\", unexpected_count]\n", "path": "great_expectations/expectations/core/expect_column_values_to_not_match_regex.py"}], "after_files": [{"content": "from typing import TYPE_CHECKING, List, Optional\n\nfrom great_expectations.core import (\n ExpectationConfiguration,\n ExpectationValidationResult,\n)\nfrom great_expectations.expectations.expectation 
import (\n ColumnMapExpectation,\n InvalidExpectationConfigurationError,\n render_evaluation_parameter_string,\n)\nfrom great_expectations.render import (\n LegacyDescriptiveRendererType,\n LegacyRendererType,\n RenderedStringTemplateContent,\n)\nfrom great_expectations.render.renderer.renderer import renderer\nfrom great_expectations.render.renderer_configuration import (\n RendererConfiguration,\n RendererValueType,\n)\nfrom great_expectations.render.util import (\n num_to_str,\n parse_row_condition_string_pandas_engine,\n substitute_none_for_missing,\n)\nfrom great_expectations.rule_based_profiler.config.base import (\n ParameterBuilderConfig,\n RuleBasedProfilerConfig,\n)\nfrom great_expectations.rule_based_profiler.parameter_container import (\n DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,\n FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY,\n FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER,\n FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY,\n PARAMETER_KEY,\n VARIABLES_KEY,\n)\n\nif TYPE_CHECKING:\n from great_expectations.render.renderer_configuration import AddParamArgs\n\ntry:\n import sqlalchemy as sa # noqa: F401\nexcept ImportError:\n pass\n\n\nclass ExpectColumnValuesToNotMatchRegex(ColumnMapExpectation):\n \"\"\"Expect the column entries to be strings that do NOT match a given regular expression.\n\n The regex must not match \\\n any portion of the provided string. For example, \"[at]+\" would identify the following strings as expected: \\\n \"fish\", \"dog\", and the following as unexpected: \"cat\", \"hat\".\n\n expect_column_values_to_not_match_regex is a \\\n [Column Map Expectation](https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations).\n\n Args:\n column (str): \\\n The column name.\n regex (str): \\\n The regular expression the column entries should NOT match.\n\n Keyword Args:\n mostly (None or a float between 0 and 1): \\\n Successful if at least mostly fraction of values match the expectation. \\\n For more detail, see [mostly](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#mostly).\n\n Other Parameters:\n result_format (str or None): \\\n Which output mode to use: BOOLEAN_ONLY, BASIC, COMPLETE, or SUMMARY. \\\n For more detail, see [result_format](https://docs.greatexpectations.io/docs/reference/expectations/result_format).\n include_config (boolean): \\\n If True, then include the expectation config as part of the result object.\n catch_exceptions (boolean or None): \\\n If True, then catch exceptions and include them as part of the result object. \\\n For more detail, see [catch_exceptions](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#catch_exceptions).\n meta (dict or None): \\\n A JSON-serializable dictionary (nesting allowed) that will be included in the output without \\\n modification. 
For more detail, see [meta](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#meta).\n\n Returns:\n An [ExpectationSuiteValidationResult](https://docs.greatexpectations.io/docs/terms/validation_result)\n\n Exact fields vary depending on the values passed to result_format, include_config, catch_exceptions, and meta.\n\n See Also:\n [expect_column_values_to_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_match_regex)\n [expect_column_values_to_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_match_regex_list)\n [expect_column_values_to_not_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex_list)\n [expect_column_values_to_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern)\n [expect_column_values_to_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern_list)\n [expect_column_values_to_not_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern)\n [expect_column_values_to_not_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern_list)\n \"\"\"\n\n library_metadata = {\n \"maturity\": \"production\",\n \"tags\": [\"core expectation\", \"column map expectation\"],\n \"contributors\": [\n \"@great_expectations\",\n ],\n \"requirements\": [],\n \"has_full_test_suite\": True,\n \"manually_reviewed_code\": True,\n }\n\n map_metric = \"column_values.not_match_regex\"\n success_keys = (\n \"regex\",\n \"mostly\",\n \"auto\",\n \"profiler_config\",\n )\n\n regex_pattern_string_parameter_builder_config: ParameterBuilderConfig = (\n ParameterBuilderConfig(\n module_name=\"great_expectations.rule_based_profiler.parameter_builder\",\n class_name=\"RegexPatternStringParameterBuilder\",\n name=\"regex_pattern_string_parameter_builder\",\n metric_domain_kwargs=DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,\n metric_value_kwargs=None,\n evaluation_parameter_builder_configs=None,\n )\n )\n validation_parameter_builder_configs: List[ParameterBuilderConfig] = [\n regex_pattern_string_parameter_builder_config\n ]\n default_profiler_config = RuleBasedProfilerConfig(\n name=\"expect_column_values_to_not_match_regex\", # Convention: use \"expectation_type\" as profiler name.\n config_version=1.0,\n variables={},\n rules={\n \"default_expect_column_values_to_not_match_regex_rule\": {\n \"variables\": {\n \"mostly\": 1.0,\n },\n \"domain_builder\": {\n \"class_name\": \"ColumnDomainBuilder\",\n \"module_name\": \"great_expectations.rule_based_profiler.domain_builder\",\n },\n \"expectation_configuration_builders\": [\n {\n \"expectation_type\": \"expect_column_values_to_not_match_regex\",\n \"class_name\": \"DefaultExpectationConfigurationBuilder\",\n \"module_name\": \"great_expectations.rule_based_profiler.expectation_configuration_builder\",\n \"validation_parameter_builder_configs\": validation_parameter_builder_configs,\n \"column\": f\"{DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}column\",\n \"regex\": f\"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY}\",\n \"mostly\": f\"{VARIABLES_KEY}mostly\",\n \"meta\": {\n \"profiler_details\": 
f\"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY}\",\n },\n },\n ],\n },\n },\n )\n\n default_kwarg_values = {\n \"row_condition\": None,\n \"condition_parser\": None, # we expect this to be explicitly set whenever a row_condition is passed\n \"mostly\": 1,\n \"result_format\": \"BASIC\",\n \"include_config\": True,\n \"catch_exceptions\": True,\n \"auto\": False,\n \"profiler_config\": default_profiler_config,\n }\n args_keys = (\n \"column\",\n \"regex\",\n )\n\n def validate_configuration(\n self, configuration: Optional[ExpectationConfiguration] = None\n ) -> None:\n \"\"\"\n Validates the configuration of an Expectation.\n\n For `expect_column_values_to_not_match_regex` it is required that:\n - 'regex' kwarg is of type str or dict\n - if dict, assert a key \"$PARAMETER\" is present\n\n Args:\n configuration: An `ExpectationConfiguration` to validate. If no configuration is provided, it will be pulled\n from the configuration attribute of the Expectation instance.\n\n Raises:\n `InvalidExpectationConfigurationError`: The configuration does not contain the values required by the\n Expectation.\"\n \"\"\"\n super().validate_configuration(configuration)\n configuration = configuration or self.configuration\n try:\n assert \"regex\" in configuration.kwargs, \"regex is required\"\n assert isinstance(\n configuration.kwargs[\"regex\"], (str, dict)\n ), \"regex must be a string or dict\"\n if isinstance(configuration.kwargs[\"regex\"], dict):\n assert (\n \"$PARAMETER\" in configuration.kwargs[\"regex\"]\n ), 'Evaluation Parameter dict for regex kwarg must have \"$PARAMETER\" key.'\n except AssertionError as e:\n raise InvalidExpectationConfigurationError(str(e))\n\n @classmethod\n def _prescriptive_template(\n cls,\n renderer_configuration: RendererConfiguration,\n ) -> RendererConfiguration:\n add_param_args: AddParamArgs = (\n (\"column\", RendererValueType.STRING),\n (\"regex\", RendererValueType.STRING),\n (\"mostly\", RendererValueType.NUMBER),\n )\n for name, param_type in add_param_args:\n renderer_configuration.add_param(name=name, param_type=param_type)\n\n params = renderer_configuration.params\n\n if not params.regex:\n template_str = (\n \"values must not match a regular expression but none was specified.\"\n )\n else:\n if renderer_configuration.include_column_name:\n template_str = (\n \"$column values must not match this regular expression: $regex\"\n )\n else:\n template_str = \"values must not match this regular expression: $regex\"\n\n if params.mostly and params.mostly.value < 1.0:\n renderer_configuration = cls._add_mostly_pct_param(\n renderer_configuration=renderer_configuration\n )\n template_str += \", at least $mostly_pct % of the time.\"\n else:\n template_str += \".\"\n\n renderer_configuration.template_str = template_str\n\n return renderer_configuration\n\n @classmethod\n @renderer(renderer_type=LegacyRendererType.PRESCRIPTIVE)\n @render_evaluation_parameter_string\n def _prescriptive_renderer(\n cls,\n configuration: Optional[ExpectationConfiguration] = None,\n result: Optional[ExpectationValidationResult] = None,\n runtime_configuration: Optional[dict] = None,\n **kwargs,\n ):\n runtime_configuration = runtime_configuration or {}\n include_column_name = (\n False if runtime_configuration.get(\"include_column_name\") is False else True\n )\n styling = runtime_configuration.get(\"styling\")\n params = substitute_none_for_missing(\n 
configuration.kwargs,\n [\"column\", \"regex\", \"mostly\", \"row_condition\", \"condition_parser\"],\n )\n\n if not params.get(\"regex\"):\n template_str = (\n \"values must not match a regular expression but none was specified.\"\n )\n else:\n if params[\"mostly\"] is not None and params[\"mostly\"] < 1.0:\n params[\"mostly_pct\"] = num_to_str(\n params[\"mostly\"] * 100, precision=15, no_scientific=True\n )\n # params[\"mostly_pct\"] = \"{:.14f}\".format(params[\"mostly\"]*100).rstrip(\"0\").rstrip(\".\")\n if include_column_name:\n template_str = \"$column values must not match this regular expression: $regex, at least $mostly_pct % of the time.\"\n else:\n template_str = \"values must not match this regular expression: $regex, at least $mostly_pct % of the time.\"\n else:\n if include_column_name:\n template_str = (\n \"$column values must not match this regular expression: $regex.\"\n )\n else:\n template_str = (\n \"values must not match this regular expression: $regex.\"\n )\n\n if params[\"row_condition\"] is not None:\n (\n conditional_template_str,\n conditional_params,\n ) = parse_row_condition_string_pandas_engine(params[\"row_condition\"])\n template_str = f\"{conditional_template_str}, then {template_str}\"\n params.update(conditional_params)\n\n return [\n RenderedStringTemplateContent(\n **{\n \"content_block_type\": \"string_template\",\n \"string_template\": {\n \"template\": template_str,\n \"params\": params,\n \"styling\": styling,\n },\n }\n )\n ]\n\n @classmethod\n @renderer(\n renderer_type=LegacyDescriptiveRendererType.COLUMN_PROPERTIES_TABLE_REGEX_COUNT_ROW\n )\n def _descriptive_column_properties_table_regex_count_row_renderer(\n cls,\n configuration: Optional[ExpectationConfiguration] = None,\n result: Optional[ExpectationValidationResult] = None,\n runtime_configuration: Optional[dict] = None,\n **kwargs,\n ):\n assert result, \"Must pass in result.\"\n expectation_config = configuration or result.expectation_config\n expectation_kwargs = expectation_config.kwargs\n regex = expectation_kwargs.get(\"regex\")\n unexpected_count = result.result.get(\"unexpected_count\", \"--\")\n if regex == \"^\\\\s+|\\\\s+$\":\n return [\"Leading or trailing whitespace (n)\", unexpected_count]\n else:\n return [f\"Regex: {regex}\", unexpected_count]\n", "path": "great_expectations/expectations/core/expect_column_values_to_not_match_regex.py"}]}
3,807
349
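The great_expectations patch above documents and tightens `validate_configuration` for `expect_column_values_to_not_match_regex`: the `regex` kwarg must be a string, or a dict carrying an evaluation-parameter reference. A minimal standalone sketch of that rule follows; it deliberately avoids importing great_expectations, so the function name and the plain `ValueError` are illustrative stand-ins for the real `InvalidExpectationConfigurationError` path.

```python
def check_regex_kwarg(kwargs: dict) -> None:
    # Mirrors the asserts in the patched validate_configuration.
    if "regex" not in kwargs:
        raise ValueError("regex is required")
    regex = kwargs["regex"]
    if not isinstance(regex, (str, dict)):
        raise ValueError("regex must be a string or dict")
    if isinstance(regex, dict) and "$PARAMETER" not in regex:
        raise ValueError(
            'Evaluation Parameter dict for regex kwarg must have "$PARAMETER" key.'
        )


check_regex_kwarg({"regex": r"[at]+"})                    # accepted: plain string
check_regex_kwarg({"regex": {"$PARAMETER": "my_regex"}})  # accepted: parameter dict
try:
    check_regex_kwarg({"regex": 42})                      # rejected: wrong type
except ValueError as exc:
    print(exc)                                            # -> regex must be a string or dict
```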
gh_patches_debug_12631
rasdani/github-patches
git_diff
cupy__cupy-6118
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Indexing with assignment between broadcastable arrays is inconsistent with NumPy When performing `A[idx, ...] = B` with `B` broadcastable over `A[idx, ...]` (so no storage expansion for `A[idx, ...]` is necessary) with `B.ndim > A.ndim` CuPy throws a shape mismatch error while NumPy handles this case. * Code to reproduce ```python In [1]: import numpy In [2]: import cupy In [3]: def test(module): ...: x = module.zeros((3, 3, 3)) ...: y = module.ones((1, 3, 3)) ...: x[0, ...] = y ...: return x ...: ...: In [4]: test(numpy) Out[4]: array([[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]]) In [5]: test(cupy) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-5-3f849ce2914e> in <module>() ----> 1 test(cupy) <ipython-input-3-450cff366473> in test(module) 2 x = module.zeros((3, 3, 3)) 3 y = module.ones((1, 3, 3)) ----> 4 x[0, ...] = y 5 return x cupy/_core/core.pyx in cupy._core.core.ndarray.__setitem__() cupy/_core/_routines_indexing.pyx in cupy._core._routines_indexing._ndarray_setitem() cupy/_core/_routines_indexing.pyx in cupy._core._routines_indexing._scatter_op() cupy/_core/_kernel.pyx in cupy._core._kernel.ufunc.__call__() cupy/_core/_kernel.pyx in cupy._core._kernel._get_out_args() ValueError: Out shape is mismatched ``` * Conditions ``` OS : Linux-5.4.0-81-generic-x86_64-with-debian-bullseye-sid Python Version : 3.6.7 CuPy Version : 9.5.0 CuPy Platform : NVIDIA CUDA NumPy Version : 1.19.5 SciPy Version : None Cython Build Version : 0.29.24 Cython Runtime Version : None CUDA Root : /home/nik/.conda/envs/pytorch-cuda-dev nvcc PATH : /home/nik/.conda/envs/pytorch-cuda-dev/bin/nvcc CUDA Build Version : 11020 CUDA Driver Version : 11030 CUDA Runtime Version : 11020 cuBLAS Version : (available) cuFFT Version : 10401 cuRAND Version : 10203 cuSOLVER Version : (11, 1, 0) cuSPARSE Version : (available) NVRTC Version : (11, 2) Thrust Version : 101000 CUB Build Version : 101000 Jitify Build Version : <unknown> cuDNN Build Version : 8201 cuDNN Version : 8004 NCCL Build Version : None NCCL Runtime Version : None cuTENSOR Version : None cuSPARSELt Build Version : None Device 0 Name : NVIDIA GeForce RTX 2060 Device 0 Compute Capability : 75 Device 0 PCI Bus ID : 0000:01:00.0 Device 1 Name : NVIDIA GeForce RTX 2060 Device 1 Compute Capability : 75 Device 1 PCI Bus ID : 0000:21:00.0 ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `cupy/_manipulation/basic.py` Content: ``` 1 import numpy 2 3 from cupy import _core 4 from cupy._core import _fusion_interface 5 from cupy._core import fusion 6 from cupy._sorting import search 7 from cupy_backends.cuda.api import runtime 8 9 10 def copyto(dst, src, casting='same_kind', where=None): 11 """Copies values from one array to another with broadcasting. 12 13 This function can be called for arrays on different devices. In this case, 14 casting, ``where``, and broadcasting is not supported, and an exception is 15 raised if these are used. 16 17 Args: 18 dst (cupy.ndarray): Target array. 19 src (cupy.ndarray): Source array. 20 casting (str): Casting rule. See :func:`numpy.can_cast` for detail. 
21 where (cupy.ndarray of bool): If specified, this array acts as a mask, 22 and an element is copied only if the corresponding element of 23 ``where`` is True. 24 25 .. seealso:: :func:`numpy.copyto` 26 27 """ 28 29 src_type = type(src) 30 src_is_python_scalar = src_type in ( 31 int, bool, float, complex, 32 fusion._FusionVarScalar, _fusion_interface._ScalarProxy) 33 if src_is_python_scalar: 34 src_dtype = numpy.dtype(type(src)) 35 can_cast = numpy.can_cast(src, dst.dtype, casting) 36 else: 37 src_dtype = src.dtype 38 can_cast = numpy.can_cast(src_dtype, dst.dtype, casting) 39 40 if not can_cast: 41 raise TypeError('Cannot cast %s to %s in %s casting mode' % 42 (src_dtype, dst.dtype, casting)) 43 if fusion._is_fusing(): 44 if where is None: 45 _core.elementwise_copy(src, dst) 46 else: 47 fusion._call_ufunc(search._where_ufunc, where, src, dst, dst) 48 return 49 50 if where is not None: 51 _core.elementwise_copy(src, dst, _where=where) 52 return 53 54 if dst.size == 0: 55 return 56 57 if src_is_python_scalar: 58 dst.fill(src) 59 return 60 61 if _can_memcpy(dst, src): 62 dst.data.copy_from_async(src.data, src.nbytes) 63 return 64 65 device = dst.device 66 prev_device = runtime.getDevice() 67 try: 68 runtime.setDevice(device.id) 69 if src.device != device: 70 src = src.copy() 71 _core.elementwise_copy(src, dst) 72 finally: 73 runtime.setDevice(prev_device) 74 75 76 def _can_memcpy(dst, src): 77 c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous 78 f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous 79 return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \ 80 dst.size == src.size 81 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/cupy/_manipulation/basic.py b/cupy/_manipulation/basic.py --- a/cupy/_manipulation/basic.py +++ b/cupy/_manipulation/basic.py @@ -40,6 +40,16 @@ if not can_cast: raise TypeError('Cannot cast %s to %s in %s casting mode' % (src_dtype, dst.dtype, casting)) + + if not src_is_python_scalar and src.ndim > dst.ndim: + # NumPy allows stripping leading unit dimensions. + try: + src = src.squeeze(tuple(range(src.ndim - dst.ndim))) + except ValueError: + # "cannot select an axis to squeeze out + # which has size not equal to one" + pass # raise an error later + if fusion._is_fusing(): if where is None: _core.elementwise_copy(src, dst)
{"golden_diff": "diff --git a/cupy/_manipulation/basic.py b/cupy/_manipulation/basic.py\n--- a/cupy/_manipulation/basic.py\n+++ b/cupy/_manipulation/basic.py\n@@ -40,6 +40,16 @@\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n+\n+ if not src_is_python_scalar and src.ndim > dst.ndim:\n+ # NumPy allows stripping leading unit dimensions.\n+ try:\n+ src = src.squeeze(tuple(range(src.ndim - dst.ndim)))\n+ except ValueError:\n+ # \"cannot select an axis to squeeze out\n+ # which has size not equal to one\"\n+ pass # raise an error later\n+\n if fusion._is_fusing():\n if where is None:\n _core.elementwise_copy(src, dst)\n", "issue": "Indexing with assignment between broadcastable arrays is inconsistent with NumPy\nWhen performing `A[idx, ...] = B` with `B` broadcastable over `A[idx, ...]` (so no storage expansion for `A[idx, ...]` is necessary) with `B.ndim > A.ndim` CuPy throws a shape mismatch error while NumPy handles this case.\r\n\r\n* Code to reproduce\r\n```python\r\nIn [1]: import numpy\r\n\r\nIn [2]: import cupy\r\n\r\nIn [3]: def test(module):\r\n ...: x = module.zeros((3, 3, 3))\r\n ...: y = module.ones((1, 3, 3))\r\n ...: x[0, ...] = y\r\n ...: return x\r\n ...: \r\n ...: \r\n\r\nIn [4]: test(numpy)\r\nOut[4]: \r\narray([[[1., 1., 1.],\r\n [1., 1., 1.],\r\n [1., 1., 1.]],\r\n\r\n [[0., 0., 0.],\r\n [0., 0., 0.],\r\n [0., 0., 0.]],\r\n\r\n [[0., 0., 0.],\r\n [0., 0., 0.],\r\n [0., 0., 0.]]])\r\n\r\nIn [5]: test(cupy)\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-5-3f849ce2914e> in <module>()\r\n----> 1 test(cupy)\r\n\r\n<ipython-input-3-450cff366473> in test(module)\r\n 2 x = module.zeros((3, 3, 3))\r\n 3 y = module.ones((1, 3, 3))\r\n----> 4 x[0, ...] 
= y\r\n 5 return x\r\n\r\ncupy/_core/core.pyx in cupy._core.core.ndarray.__setitem__()\r\n\r\ncupy/_core/_routines_indexing.pyx in cupy._core._routines_indexing._ndarray_setitem()\r\n\r\ncupy/_core/_routines_indexing.pyx in cupy._core._routines_indexing._scatter_op()\r\n\r\ncupy/_core/_kernel.pyx in cupy._core._kernel.ufunc.__call__()\r\n\r\ncupy/_core/_kernel.pyx in cupy._core._kernel._get_out_args()\r\n\r\nValueError: Out shape is mismatched\r\n\r\n```\r\n\r\n* Conditions\r\n```\r\nOS : Linux-5.4.0-81-generic-x86_64-with-debian-bullseye-sid\r\nPython Version : 3.6.7\r\nCuPy Version : 9.5.0\r\nCuPy Platform : NVIDIA CUDA\r\nNumPy Version : 1.19.5\r\nSciPy Version : None\r\nCython Build Version : 0.29.24\r\nCython Runtime Version : None\r\nCUDA Root : /home/nik/.conda/envs/pytorch-cuda-dev\r\nnvcc PATH : /home/nik/.conda/envs/pytorch-cuda-dev/bin/nvcc\r\nCUDA Build Version : 11020\r\nCUDA Driver Version : 11030\r\nCUDA Runtime Version : 11020\r\ncuBLAS Version : (available)\r\ncuFFT Version : 10401\r\ncuRAND Version : 10203\r\ncuSOLVER Version : (11, 1, 0)\r\ncuSPARSE Version : (available)\r\nNVRTC Version : (11, 2)\r\nThrust Version : 101000\r\nCUB Build Version : 101000\r\nJitify Build Version : <unknown>\r\ncuDNN Build Version : 8201\r\ncuDNN Version : 8004\r\nNCCL Build Version : None\r\nNCCL Runtime Version : None\r\ncuTENSOR Version : None\r\ncuSPARSELt Build Version : None\r\nDevice 0 Name : NVIDIA GeForce RTX 2060\r\nDevice 0 Compute Capability : 75\r\nDevice 0 PCI Bus ID : 0000:01:00.0\r\nDevice 1 Name : NVIDIA GeForce RTX 2060\r\nDevice 1 Compute Capability : 75\r\nDevice 1 PCI Bus ID : 0000:21:00.0\r\n```\r\n\n", "before_files": [{"content": "import numpy\n\nfrom cupy import _core\nfrom cupy._core import _fusion_interface\nfrom cupy._core import fusion\nfrom cupy._sorting import search\nfrom cupy_backends.cuda.api import runtime\n\n\ndef copyto(dst, src, casting='same_kind', where=None):\n \"\"\"Copies values from one array to another with broadcasting.\n\n This function can be called for arrays on different devices. In this case,\n casting, ``where``, and broadcasting is not supported, and an exception is\n raised if these are used.\n\n Args:\n dst (cupy.ndarray): Target array.\n src (cupy.ndarray): Source array.\n casting (str): Casting rule. See :func:`numpy.can_cast` for detail.\n where (cupy.ndarray of bool): If specified, this array acts as a mask,\n and an element is copied only if the corresponding element of\n ``where`` is True.\n\n .. 
seealso:: :func:`numpy.copyto`\n\n \"\"\"\n\n src_type = type(src)\n src_is_python_scalar = src_type in (\n int, bool, float, complex,\n fusion._FusionVarScalar, _fusion_interface._ScalarProxy)\n if src_is_python_scalar:\n src_dtype = numpy.dtype(type(src))\n can_cast = numpy.can_cast(src, dst.dtype, casting)\n else:\n src_dtype = src.dtype\n can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)\n\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n if fusion._is_fusing():\n if where is None:\n _core.elementwise_copy(src, dst)\n else:\n fusion._call_ufunc(search._where_ufunc, where, src, dst, dst)\n return\n\n if where is not None:\n _core.elementwise_copy(src, dst, _where=where)\n return\n\n if dst.size == 0:\n return\n\n if src_is_python_scalar:\n dst.fill(src)\n return\n\n if _can_memcpy(dst, src):\n dst.data.copy_from_async(src.data, src.nbytes)\n return\n\n device = dst.device\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device.id)\n if src.device != device:\n src = src.copy()\n _core.elementwise_copy(src, dst)\n finally:\n runtime.setDevice(prev_device)\n\n\ndef _can_memcpy(dst, src):\n c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous\n f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous\n return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \\\n dst.size == src.size\n", "path": "cupy/_manipulation/basic.py"}], "after_files": [{"content": "import numpy\n\nfrom cupy import _core\nfrom cupy._core import _fusion_interface\nfrom cupy._core import fusion\nfrom cupy._sorting import search\nfrom cupy_backends.cuda.api import runtime\n\n\ndef copyto(dst, src, casting='same_kind', where=None):\n \"\"\"Copies values from one array to another with broadcasting.\n\n This function can be called for arrays on different devices. In this case,\n casting, ``where``, and broadcasting is not supported, and an exception is\n raised if these are used.\n\n Args:\n dst (cupy.ndarray): Target array.\n src (cupy.ndarray): Source array.\n casting (str): Casting rule. See :func:`numpy.can_cast` for detail.\n where (cupy.ndarray of bool): If specified, this array acts as a mask,\n and an element is copied only if the corresponding element of\n ``where`` is True.\n\n .. 
seealso:: :func:`numpy.copyto`\n\n \"\"\"\n\n src_type = type(src)\n src_is_python_scalar = src_type in (\n int, bool, float, complex,\n fusion._FusionVarScalar, _fusion_interface._ScalarProxy)\n if src_is_python_scalar:\n src_dtype = numpy.dtype(type(src))\n can_cast = numpy.can_cast(src, dst.dtype, casting)\n else:\n src_dtype = src.dtype\n can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)\n\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n\n if not src_is_python_scalar and src.ndim > dst.ndim:\n # NumPy allows stripping leading unit dimensions.\n try:\n src = src.squeeze(tuple(range(src.ndim - dst.ndim)))\n except ValueError:\n # \"cannot select an axis to squeeze out\n # which has size not equal to one\"\n pass # raise an error later\n\n if fusion._is_fusing():\n if where is None:\n _core.elementwise_copy(src, dst)\n else:\n fusion._call_ufunc(search._where_ufunc, where, src, dst, dst)\n return\n\n if where is not None:\n _core.elementwise_copy(src, dst, _where=where)\n return\n\n if dst.size == 0:\n return\n\n if src_is_python_scalar:\n dst.fill(src)\n return\n\n if _can_memcpy(dst, src):\n dst.data.copy_from_async(src.data, src.nbytes)\n return\n\n device = dst.device\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device.id)\n if src.device != device:\n src = src.copy()\n _core.elementwise_copy(src, dst)\n finally:\n runtime.setDevice(prev_device)\n\n\ndef _can_memcpy(dst, src):\n c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous\n f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous\n return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \\\n dst.size == src.size\n", "path": "cupy/_manipulation/basic.py"}]}
2,017
199
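The CuPy patch above makes `copyto` (and therefore sliced assignment) accept a source with extra leading unit dimensions, matching NumPy. The sketch below uses NumPy only to show the target semantics and then restates the core of the fix on a plain array; the real change lives inside `cupy/_manipulation/basic.py`.

```python
import numpy as np

# Behaviour the fix restores: a (1, 3, 3) source broadcasts into a (3, 3) slice.
x = np.zeros((3, 3, 3))
y = np.ones((1, 3, 3))
x[0, ...] = y
print(x[0].sum())  # 9.0

# Core of the fix: strip leading axes of size one when the source has more
# dimensions than the destination; a failing squeeze is deferred so the usual
# shape-mismatch error is raised later.
def strip_leading_unit_dims(src, dst_ndim):
    if src.ndim > dst_ndim:
        try:
            src = src.squeeze(tuple(range(src.ndim - dst_ndim)))
        except ValueError:
            pass
    return src

print(strip_leading_unit_dims(y, 2).shape)  # (3, 3)
```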
gh_patches_debug_43430
rasdani/github-patches
git_diff
alltheplaces__alltheplaces-3346
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Spider jackinthebox is broken During the global build at 2021-06-23-14-42-18, spider **jackinthebox** failed with **0 features** and **1 errors**. Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/logs/jackinthebox.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/jackinthebox.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/jackinthebox.geojson)) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `locations/spiders/jackinthebox.py` Content: ``` 1 import json 2 import re 3 import scrapy 4 from locations.items import GeojsonPointItem 5 6 class JackInTheBoxSpider(scrapy.Spider): 7 name = "jackinthebox" 8 item_attributes = { 'brand': "Jack In The Box" } 9 allowed_domains = ["jackinthebox.com"] 10 start_urls = ( 11 "https://www.jackinthebox.com/api/locations", 12 ) 13 dayMap = { 14 'monday': 'Mo', 15 'tuesday': 'Tu', 16 'wednesday': 'We', 17 'thursday': 'Th', 18 'friday': 'Fr', 19 'saturday': 'Sa', 20 'sunday': 'Su' 21 } 22 def opening_hours(self, days_hours): 23 day_groups = [] 24 this_day_group = None 25 for day_hours in days_hours: 26 day = day_hours[0] 27 hours = day_hours[1] 28 match = re.search(r'^(\d{1,2}):(\d{2})\w*(a|p)m-(\d{1,2}):(\d{2})\w*(a|p)m?$', hours) 29 (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups() 30 31 f_hr = int(f_hr) 32 if f_ampm == 'p': 33 f_hr += 12 34 elif f_ampm == 'a' and f_hr == 12: 35 f_hr = 0 36 t_hr = int(t_hr) 37 if t_ampm == 'p': 38 t_hr += 12 39 elif t_ampm == 'a' and t_hr == 12: 40 t_hr = 0 41 42 hours = '{:02d}:{}-{:02d}:{}'.format( 43 f_hr, 44 f_min, 45 t_hr, 46 t_min, 47 ) 48 49 if not this_day_group: 50 this_day_group = { 51 'from_day': day, 52 'to_day': day, 53 'hours': hours 54 } 55 elif this_day_group['hours'] != hours: 56 day_groups.append(this_day_group) 57 this_day_group = { 58 'from_day': day, 59 'to_day': day, 60 'hours': hours 61 } 62 elif this_day_group['hours'] == hours: 63 this_day_group['to_day'] = day 64 65 day_groups.append(this_day_group) 66 67 opening_hours = "" 68 if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'): 69 opening_hours = '24/7' 70 else: 71 for day_group in day_groups: 72 if day_group['from_day'] == day_group['to_day']: 73 opening_hours += '{from_day} {hours}; '.format(**day_group) 74 elif day_group['from_day'] == 'Su' and day_group['to_day'] == 'Sa': 75 opening_hours += '{hours}; '.format(**day_group) 76 else: 77 opening_hours += '{from_day}-{to_day} {hours}; '.format(**day_group) 78 opening_hours = opening_hours[:-2] 79 80 return opening_hours 81 82 def parse(self, response): 83 stores = json.loads(response.body_as_unicode()) 84 for store in stores: 85 properties = { 86 'ref': store['id'], 87 'addr_full': store['address'], 88 'city': store['city'], 89 'state': store['state'], 90 'postcode': store['postal'], 91 'lat': store['lat'], 92 'lon': store['lng'], 93 'phone': store['phone'], 94 } 95 96 if store['twentyfourhours']: 97 properties['opening_hours'] = '24/7' 98 elif 'hours' in store: 99 hours = store['hours'] 100 if not all(hours[d] == '' for d in hours): 101 days_hours = [] 102 for day in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']: 103 days_hours.append([ 104 self.dayMap[day], 105 
hours[day].lower().replace(' ', '') 106 ]) 107 properties['opening_hours'] = self.opening_hours(days_hours) 108 109 yield GeojsonPointItem(**properties) 110 111 112 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/locations/spiders/jackinthebox.py b/locations/spiders/jackinthebox.py --- a/locations/spiders/jackinthebox.py +++ b/locations/spiders/jackinthebox.py @@ -11,13 +11,13 @@ "https://www.jackinthebox.com/api/locations", ) dayMap = { - 'monday': 'Mo', - 'tuesday': 'Tu', - 'wednesday': 'We', - 'thursday': 'Th', - 'friday': 'Fr', - 'saturday': 'Sa', - 'sunday': 'Su' + 'Monday': 'Mo', + 'Tuesday': 'Tu', + 'Wednesday': 'We', + 'Thursday': 'Th', + 'Friday': 'Fr', + 'Saturday': 'Sa', + 'Sunday': 'Su' } def opening_hours(self, days_hours): day_groups = [] @@ -25,6 +25,9 @@ for day_hours in days_hours: day = day_hours[0] hours = day_hours[1] + if not hours: + continue + match = re.search(r'^(\d{1,2}):(\d{2})\w*(a|p)m-(\d{1,2}):(\d{2})\w*(a|p)m?$', hours) (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups() @@ -62,7 +65,8 @@ elif this_day_group['hours'] == hours: this_day_group['to_day'] = day - day_groups.append(this_day_group) + if this_day_group: + day_groups.append(this_day_group) opening_hours = "" if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'): @@ -80,31 +84,32 @@ return opening_hours def parse(self, response): - stores = json.loads(response.body_as_unicode()) - for store in stores: + stores = json.loads(response.body_as_unicode())['Locations'] + for store in stores: + address = store['Address'] properties = { - 'ref': store['id'], - 'addr_full': store['address'], - 'city': store['city'], - 'state': store['state'], - 'postcode': store['postal'], - 'lat': store['lat'], - 'lon': store['lng'], - 'phone': store['phone'], + 'ref': store['LocationId'], + 'addr_full': ", ".join([address['StreetLine1'], address['StreetLine2']]), + 'city': address['City'], + 'state': address['State'], + 'postcode': address['Zipcode'], + 'lat': store['Coordinates']['Lat'], + 'lon': store['Coordinates']['Lon'], + 'phone': store['OperationsData']['BusinessPhoneNumber'], } - if store['twentyfourhours']: + hours = store['OperatingHours'] + if all (hours['DineInAllDay'][day] == True for day in hours['DineInAllDay']): properties['opening_hours'] = '24/7' - elif 'hours' in store: - hours = store['hours'] - if not all(hours[d] == '' for d in hours): - days_hours = [] - for day in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']: - days_hours.append([ - self.dayMap[day], - hours[day].lower().replace(' ', '') - ]) - properties['opening_hours'] = self.opening_hours(days_hours) + + else: + days_hours = [] + for day in ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']: + days_hours.append([ + self.dayMap[day], + hours['DineIn'][day].lower().replace(' ', '') + ]) + properties['opening_hours'] = self.opening_hours(days_hours) yield GeojsonPointItem(**properties)
{"golden_diff": "diff --git a/locations/spiders/jackinthebox.py b/locations/spiders/jackinthebox.py\n--- a/locations/spiders/jackinthebox.py\n+++ b/locations/spiders/jackinthebox.py\n@@ -11,13 +11,13 @@\n \"https://www.jackinthebox.com/api/locations\",\n )\n dayMap = {\n- 'monday': 'Mo',\n- 'tuesday': 'Tu',\n- 'wednesday': 'We',\n- 'thursday': 'Th',\n- 'friday': 'Fr',\n- 'saturday': 'Sa',\n- 'sunday': 'Su'\n+ 'Monday': 'Mo',\n+ 'Tuesday': 'Tu',\n+ 'Wednesday': 'We',\n+ 'Thursday': 'Th',\n+ 'Friday': 'Fr',\n+ 'Saturday': 'Sa',\n+ 'Sunday': 'Su'\n }\n def opening_hours(self, days_hours):\n day_groups = []\n@@ -25,6 +25,9 @@\n for day_hours in days_hours:\n day = day_hours[0]\n hours = day_hours[1]\n+ if not hours:\n+ continue\n+\n match = re.search(r'^(\\d{1,2}):(\\d{2})\\w*(a|p)m-(\\d{1,2}):(\\d{2})\\w*(a|p)m?$', hours)\n (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()\n \n@@ -62,7 +65,8 @@\n elif this_day_group['hours'] == hours:\n this_day_group['to_day'] = day\n \n- day_groups.append(this_day_group)\n+ if this_day_group:\n+ day_groups.append(this_day_group)\n \n opening_hours = \"\"\n if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):\n@@ -80,31 +84,32 @@\n return opening_hours\n \n def parse(self, response):\n- stores = json.loads(response.body_as_unicode())\n- for store in stores: \n+ stores = json.loads(response.body_as_unicode())['Locations']\n+ for store in stores:\n+ address = store['Address']\n properties = { \n- 'ref': store['id'], \n- 'addr_full': store['address'],\n- 'city': store['city'], \n- 'state': store['state'], \n- 'postcode': store['postal'], \n- 'lat': store['lat'], \n- 'lon': store['lng'], \n- 'phone': store['phone'],\n+ 'ref': store['LocationId'],\n+ 'addr_full': \", \".join([address['StreetLine1'], address['StreetLine2']]),\n+ 'city': address['City'],\n+ 'state': address['State'],\n+ 'postcode': address['Zipcode'],\n+ 'lat': store['Coordinates']['Lat'],\n+ 'lon': store['Coordinates']['Lon'],\n+ 'phone': store['OperationsData']['BusinessPhoneNumber'],\n } \n \n- if store['twentyfourhours']:\n+ hours = store['OperatingHours']\n+ if all (hours['DineInAllDay'][day] == True for day in hours['DineInAllDay']):\n properties['opening_hours'] = '24/7'\n- elif 'hours' in store:\n- hours = store['hours']\n- if not all(hours[d] == '' for d in hours):\n- days_hours = []\n- for day in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']:\n- days_hours.append([\n- self.dayMap[day],\n- hours[day].lower().replace(' ', '')\n- ])\n- properties['opening_hours'] = self.opening_hours(days_hours)\n+\n+ else:\n+ days_hours = []\n+ for day in ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']:\n+ days_hours.append([\n+ self.dayMap[day],\n+ hours['DineIn'][day].lower().replace(' ', '')\n+ ])\n+ properties['opening_hours'] = self.opening_hours(days_hours)\n \n yield GeojsonPointItem(**properties)\n", "issue": "Spider jackinthebox is broken\nDuring the global build at 2021-06-23-14-42-18, spider **jackinthebox** failed with **0 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/logs/jackinthebox.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/jackinthebox.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/jackinthebox.geojson))\n", "before_files": [{"content": "import json\nimport re\nimport scrapy\nfrom locations.items import 
GeojsonPointItem\n\nclass JackInTheBoxSpider(scrapy.Spider):\n name = \"jackinthebox\"\n item_attributes = { 'brand': \"Jack In The Box\" }\n allowed_domains = [\"jackinthebox.com\"]\n start_urls = (\n \"https://www.jackinthebox.com/api/locations\",\n )\n dayMap = {\n 'monday': 'Mo',\n 'tuesday': 'Tu',\n 'wednesday': 'We',\n 'thursday': 'Th',\n 'friday': 'Fr',\n 'saturday': 'Sa',\n 'sunday': 'Su'\n }\n def opening_hours(self, days_hours):\n day_groups = []\n this_day_group = None\n for day_hours in days_hours:\n day = day_hours[0]\n hours = day_hours[1]\n match = re.search(r'^(\\d{1,2}):(\\d{2})\\w*(a|p)m-(\\d{1,2}):(\\d{2})\\w*(a|p)m?$', hours)\n (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()\n\n f_hr = int(f_hr)\n if f_ampm == 'p':\n f_hr += 12\n elif f_ampm == 'a' and f_hr == 12:\n f_hr = 0\n t_hr = int(t_hr)\n if t_ampm == 'p':\n t_hr += 12\n elif t_ampm == 'a' and t_hr == 12:\n t_hr = 0\n\n hours = '{:02d}:{}-{:02d}:{}'.format(\n f_hr,\n f_min,\n t_hr,\n t_min,\n )\n\n if not this_day_group:\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] != hours:\n day_groups.append(this_day_group)\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] == hours:\n this_day_group['to_day'] = day\n\n day_groups.append(this_day_group)\n\n opening_hours = \"\"\n if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):\n opening_hours = '24/7'\n else:\n for day_group in day_groups:\n if day_group['from_day'] == day_group['to_day']:\n opening_hours += '{from_day} {hours}; '.format(**day_group)\n elif day_group['from_day'] == 'Su' and day_group['to_day'] == 'Sa':\n opening_hours += '{hours}; '.format(**day_group)\n else:\n opening_hours += '{from_day}-{to_day} {hours}; '.format(**day_group)\n opening_hours = opening_hours[:-2]\n\n return opening_hours\n\n def parse(self, response):\n stores = json.loads(response.body_as_unicode())\n for store in stores: \n properties = { \n 'ref': store['id'], \n 'addr_full': store['address'],\n 'city': store['city'], \n 'state': store['state'], \n 'postcode': store['postal'], \n 'lat': store['lat'], \n 'lon': store['lng'], \n 'phone': store['phone'],\n } \n \n if store['twentyfourhours']:\n properties['opening_hours'] = '24/7'\n elif 'hours' in store:\n hours = store['hours']\n if not all(hours[d] == '' for d in hours):\n days_hours = []\n for day in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']:\n days_hours.append([\n self.dayMap[day],\n hours[day].lower().replace(' ', '')\n ])\n properties['opening_hours'] = self.opening_hours(days_hours)\n \n yield GeojsonPointItem(**properties) \n\n\n", "path": "locations/spiders/jackinthebox.py"}], "after_files": [{"content": "import json\nimport re\nimport scrapy\nfrom locations.items import GeojsonPointItem\n\nclass JackInTheBoxSpider(scrapy.Spider):\n name = \"jackinthebox\"\n item_attributes = { 'brand': \"Jack In The Box\" }\n allowed_domains = [\"jackinthebox.com\"]\n start_urls = (\n \"https://www.jackinthebox.com/api/locations\",\n )\n dayMap = {\n 'Monday': 'Mo',\n 'Tuesday': 'Tu',\n 'Wednesday': 'We',\n 'Thursday': 'Th',\n 'Friday': 'Fr',\n 'Saturday': 'Sa',\n 'Sunday': 'Su'\n }\n def opening_hours(self, days_hours):\n day_groups = []\n this_day_group = None\n for day_hours in days_hours:\n day = day_hours[0]\n hours = day_hours[1]\n if not hours:\n continue\n\n match = 
re.search(r'^(\\d{1,2}):(\\d{2})\\w*(a|p)m-(\\d{1,2}):(\\d{2})\\w*(a|p)m?$', hours)\n (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()\n\n f_hr = int(f_hr)\n if f_ampm == 'p':\n f_hr += 12\n elif f_ampm == 'a' and f_hr == 12:\n f_hr = 0\n t_hr = int(t_hr)\n if t_ampm == 'p':\n t_hr += 12\n elif t_ampm == 'a' and t_hr == 12:\n t_hr = 0\n\n hours = '{:02d}:{}-{:02d}:{}'.format(\n f_hr,\n f_min,\n t_hr,\n t_min,\n )\n\n if not this_day_group:\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] != hours:\n day_groups.append(this_day_group)\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] == hours:\n this_day_group['to_day'] = day\n\n if this_day_group:\n day_groups.append(this_day_group)\n\n opening_hours = \"\"\n if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):\n opening_hours = '24/7'\n else:\n for day_group in day_groups:\n if day_group['from_day'] == day_group['to_day']:\n opening_hours += '{from_day} {hours}; '.format(**day_group)\n elif day_group['from_day'] == 'Su' and day_group['to_day'] == 'Sa':\n opening_hours += '{hours}; '.format(**day_group)\n else:\n opening_hours += '{from_day}-{to_day} {hours}; '.format(**day_group)\n opening_hours = opening_hours[:-2]\n\n return opening_hours\n\n def parse(self, response):\n stores = json.loads(response.body_as_unicode())['Locations']\n for store in stores:\n address = store['Address']\n properties = { \n 'ref': store['LocationId'],\n 'addr_full': \", \".join([address['StreetLine1'], address['StreetLine2']]),\n 'city': address['City'],\n 'state': address['State'],\n 'postcode': address['Zipcode'],\n 'lat': store['Coordinates']['Lat'],\n 'lon': store['Coordinates']['Lon'],\n 'phone': store['OperationsData']['BusinessPhoneNumber'],\n } \n \n hours = store['OperatingHours']\n if all (hours['DineInAllDay'][day] == True for day in hours['DineInAllDay']):\n properties['opening_hours'] = '24/7'\n\n else:\n days_hours = []\n for day in ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']:\n days_hours.append([\n self.dayMap[day],\n hours['DineIn'][day].lower().replace(' ', '')\n ])\n properties['opening_hours'] = self.opening_hours(days_hours)\n \n yield GeojsonPointItem(**properties) \n\n\n", "path": "locations/spiders/jackinthebox.py"}]}
1,629
986
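The alltheplaces patch above rewrites the jackinthebox spider for the API's new payload shape: stores sit under `Locations`, with nested `Address`, `Coordinates`, `OperationsData` and `OperatingHours` blocks. The snippet below walks a made-up payload in that shape to show how the reworked `parse()` picks fields and decides on 24/7 hours; it is a sketch, not a real API response or the spider itself.

```python
import json

payload = json.dumps({
    "Locations": [{
        "LocationId": "1",
        "Address": {"StreetLine1": "123 Main St", "StreetLine2": "Suite 4",
                    "City": "San Diego", "State": "CA", "Zipcode": "92101"},
        "Coordinates": {"Lat": 32.7, "Lon": -117.2},
        "OperationsData": {"BusinessPhoneNumber": "555-0100"},
        "OperatingHours": {
            "DineInAllDay": {"Monday": True, "Tuesday": True, "Wednesday": True,
                             "Thursday": True, "Friday": True, "Saturday": True,
                             "Sunday": True},
            "DineIn": {},
        },
    }]
})

for store in json.loads(payload)["Locations"]:
    address = store["Address"]
    hours = store["OperatingHours"]
    # Same 24/7 test as the patched spider: every DineInAllDay flag is True.
    opening = "24/7" if all(hours["DineInAllDay"].values()) else "per-day parsing"
    print(store["LocationId"], address["City"], opening)
```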
gh_patches_debug_5360
rasdani/github-patches
git_diff
ibis-project__ibis-2884
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- BUG: File pseudo-backends failing for missing pandas option The next code is failing in master since #2833: ```python >>> import ibis >>> con = ibis.csv.connect('/home/mgarcia/src/ibis/ci/ibis-testing-data/') >>> expr = con.table('functional_alltypes')['double_col'] * 2 >>> print(expr.execute()) OptionError: "No such keys(s): 'pandas.enable_trace'" ``` The problem is when the `csv` backend (or other file backends) are loaded, but the pandas backend is not. This is because `ibis.pandas` loads the pandas options, which looks like they are needed by the file pseudo-backends. The CI is not failing, I guess because we test pandas and the file backends are tested together, and pandas is loaded when the file backends are tested. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ibis/backends/base/file/__init__.py` Content: ``` 1 from pathlib import Path 2 3 import ibis.expr.types as ir 4 from ibis.backends.base import BaseBackend, Client, Database 5 from ibis.backends.pandas.core import execute_and_reset 6 7 8 class FileClient(Client): 9 def __init__(self, backend, root): 10 self.backend = backend 11 self.extension = backend.extension 12 self.table_class = backend.table_class 13 self.root = Path(str(root)) 14 self.dictionary = {} 15 16 def insert(self, path, expr, **kwargs): 17 raise NotImplementedError 18 19 def table(self, name, path): 20 raise NotImplementedError 21 22 def database(self, name=None, path=None): 23 if name is None: 24 return FileDatabase('root', self, path=path) 25 26 if name not in self.list_databases(path): 27 raise AttributeError(name) 28 if path is None: 29 path = self.root 30 31 new_name = "{}.{}".format(name, self.extension) 32 if (self.root / name).is_dir(): 33 path /= name 34 elif not str(path).endswith(new_name): 35 path /= new_name 36 37 return FileDatabase(name, self, path=path) 38 39 def execute(self, expr, params=None, **kwargs): # noqa 40 assert isinstance(expr, ir.Expr) 41 return execute_and_reset(expr, params=params, **kwargs) 42 43 def list_tables(self, path=None): 44 raise NotImplementedError 45 46 def _list_tables_files(self, path=None): 47 # tables are files in a dir 48 if path is None: 49 path = self.root 50 51 tables = [] 52 if path.is_dir(): 53 for d in path.iterdir(): 54 if d.is_file(): 55 if str(d).endswith(self.extension): 56 tables.append(d.stem) 57 elif path.is_file(): 58 if str(path).endswith(self.extension): 59 tables.append(path.stem) 60 return tables 61 62 def list_databases(self, path=None): 63 raise NotImplementedError 64 65 def _list_databases_dirs(self, path=None): 66 # databases are dir 67 if path is None: 68 path = self.root 69 70 tables = [] 71 if path.is_dir(): 72 for d in path.iterdir(): 73 if d.is_dir(): 74 tables.append(d.name) 75 return tables 76 77 def _list_databases_dirs_or_files(self, path=None): 78 # databases are dir & file 79 if path is None: 80 path = self.root 81 82 tables = [] 83 if path.is_dir(): 84 for d in path.iterdir(): 85 if d.is_dir(): 86 tables.append(d.name) 87 elif d.is_file(): 88 if str(d).endswith(self.extension): 89 tables.append(d.stem) 90 elif path.is_file(): 91 # by definition we are at the db level at this point 92 pass 93 94 return tables 95 96 97 class FileDatabase(Database): 98 def __init__(self, name, client, path=None): 99 super().__init__(name, client) 100 self.path = path 101 102 def 
__str__(self): 103 return '{0.__class__.__name__}({0.name})'.format(self) 104 105 def __dir__(self): 106 dbs = self.list_databases(path=self.path) 107 tables = self.list_tables(path=self.path) 108 return sorted(set(dbs).union(set(tables))) 109 110 def __getattr__(self, name): 111 try: 112 return self.table(name, path=self.path) 113 except AttributeError: 114 return self.database(name, path=self.path) 115 116 def table(self, name, path): 117 return self.client.table(name, path=path) 118 119 def database(self, name=None, path=None): 120 return self.client.database(name=name, path=path) 121 122 def list_databases(self, path=None): 123 if path is None: 124 path = self.path 125 return sorted(self.client.list_databases(path=path)) 126 127 def list_tables(self, path=None): 128 if path is None: 129 path = self.path 130 return sorted(self.client.list_tables(path=path)) 131 132 133 class BaseFileBackend(BaseBackend): 134 """ 135 Base backend class for pandas pseudo-backends for file formats. 136 """ 137 138 def connect(self, path): 139 """Create a Client for use with Ibis 140 141 Parameters 142 ---------- 143 path : str or pathlib.Path 144 145 Returns 146 ------- 147 Client 148 """ 149 return self.client_class(backend=self, root=path) 150 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ibis/backends/base/file/__init__.py b/ibis/backends/base/file/__init__.py --- a/ibis/backends/base/file/__init__.py +++ b/ibis/backends/base/file/__init__.py @@ -1,9 +1,13 @@ from pathlib import Path +import ibis import ibis.expr.types as ir from ibis.backends.base import BaseBackend, Client, Database from ibis.backends.pandas.core import execute_and_reset +# Load options of pandas backend +ibis.pandas + class FileClient(Client): def __init__(self, backend, root):
{"golden_diff": "diff --git a/ibis/backends/base/file/__init__.py b/ibis/backends/base/file/__init__.py\n--- a/ibis/backends/base/file/__init__.py\n+++ b/ibis/backends/base/file/__init__.py\n@@ -1,9 +1,13 @@\n from pathlib import Path\n \n+import ibis\n import ibis.expr.types as ir\n from ibis.backends.base import BaseBackend, Client, Database\n from ibis.backends.pandas.core import execute_and_reset\n \n+# Load options of pandas backend\n+ibis.pandas\n+\n \n class FileClient(Client):\n def __init__(self, backend, root):\n", "issue": "BUG: File pseudo-backends failing for missing pandas option\nThe next code is failing in master since #2833:\r\n\r\n```python\r\n>>> import ibis\r\n>>> con = ibis.csv.connect('/home/mgarcia/src/ibis/ci/ibis-testing-data/')\r\n>>> expr = con.table('functional_alltypes')['double_col'] * 2\r\n>>> print(expr.execute())\r\nOptionError: \"No such keys(s): 'pandas.enable_trace'\"\r\n```\r\n\r\nThe problem is when the `csv` backend (or other file backends) are loaded, but the pandas backend is not. This is because `ibis.pandas` loads the pandas options, which looks like they are needed by the file pseudo-backends.\r\n\r\nThe CI is not failing, I guess because we test pandas and the file backends are tested together, and pandas is loaded when the file backends are tested.\n", "before_files": [{"content": "from pathlib import Path\n\nimport ibis.expr.types as ir\nfrom ibis.backends.base import BaseBackend, Client, Database\nfrom ibis.backends.pandas.core import execute_and_reset\n\n\nclass FileClient(Client):\n def __init__(self, backend, root):\n self.backend = backend\n self.extension = backend.extension\n self.table_class = backend.table_class\n self.root = Path(str(root))\n self.dictionary = {}\n\n def insert(self, path, expr, **kwargs):\n raise NotImplementedError\n\n def table(self, name, path):\n raise NotImplementedError\n\n def database(self, name=None, path=None):\n if name is None:\n return FileDatabase('root', self, path=path)\n\n if name not in self.list_databases(path):\n raise AttributeError(name)\n if path is None:\n path = self.root\n\n new_name = \"{}.{}\".format(name, self.extension)\n if (self.root / name).is_dir():\n path /= name\n elif not str(path).endswith(new_name):\n path /= new_name\n\n return FileDatabase(name, self, path=path)\n\n def execute(self, expr, params=None, **kwargs): # noqa\n assert isinstance(expr, ir.Expr)\n return execute_and_reset(expr, params=params, **kwargs)\n\n def list_tables(self, path=None):\n raise NotImplementedError\n\n def _list_tables_files(self, path=None):\n # tables are files in a dir\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_file():\n if str(d).endswith(self.extension):\n tables.append(d.stem)\n elif path.is_file():\n if str(path).endswith(self.extension):\n tables.append(path.stem)\n return tables\n\n def list_databases(self, path=None):\n raise NotImplementedError\n\n def _list_databases_dirs(self, path=None):\n # databases are dir\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_dir():\n tables.append(d.name)\n return tables\n\n def _list_databases_dirs_or_files(self, path=None):\n # databases are dir & file\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_dir():\n tables.append(d.name)\n elif d.is_file():\n if str(d).endswith(self.extension):\n tables.append(d.stem)\n elif path.is_file():\n # by definition we are at 
the db level at this point\n pass\n\n return tables\n\n\nclass FileDatabase(Database):\n def __init__(self, name, client, path=None):\n super().__init__(name, client)\n self.path = path\n\n def __str__(self):\n return '{0.__class__.__name__}({0.name})'.format(self)\n\n def __dir__(self):\n dbs = self.list_databases(path=self.path)\n tables = self.list_tables(path=self.path)\n return sorted(set(dbs).union(set(tables)))\n\n def __getattr__(self, name):\n try:\n return self.table(name, path=self.path)\n except AttributeError:\n return self.database(name, path=self.path)\n\n def table(self, name, path):\n return self.client.table(name, path=path)\n\n def database(self, name=None, path=None):\n return self.client.database(name=name, path=path)\n\n def list_databases(self, path=None):\n if path is None:\n path = self.path\n return sorted(self.client.list_databases(path=path))\n\n def list_tables(self, path=None):\n if path is None:\n path = self.path\n return sorted(self.client.list_tables(path=path))\n\n\nclass BaseFileBackend(BaseBackend):\n \"\"\"\n Base backend class for pandas pseudo-backends for file formats.\n \"\"\"\n\n def connect(self, path):\n \"\"\"Create a Client for use with Ibis\n\n Parameters\n ----------\n path : str or pathlib.Path\n\n Returns\n -------\n Client\n \"\"\"\n return self.client_class(backend=self, root=path)\n", "path": "ibis/backends/base/file/__init__.py"}], "after_files": [{"content": "from pathlib import Path\n\nimport ibis\nimport ibis.expr.types as ir\nfrom ibis.backends.base import BaseBackend, Client, Database\nfrom ibis.backends.pandas.core import execute_and_reset\n\n# Load options of pandas backend\nibis.pandas\n\n\nclass FileClient(Client):\n def __init__(self, backend, root):\n self.backend = backend\n self.extension = backend.extension\n self.table_class = backend.table_class\n self.root = Path(str(root))\n self.dictionary = {}\n\n def insert(self, path, expr, **kwargs):\n raise NotImplementedError\n\n def table(self, name, path):\n raise NotImplementedError\n\n def database(self, name=None, path=None):\n if name is None:\n return FileDatabase('root', self, path=path)\n\n if name not in self.list_databases(path):\n raise AttributeError(name)\n if path is None:\n path = self.root\n\n new_name = \"{}.{}\".format(name, self.extension)\n if (self.root / name).is_dir():\n path /= name\n elif not str(path).endswith(new_name):\n path /= new_name\n\n return FileDatabase(name, self, path=path)\n\n def execute(self, expr, params=None, **kwargs): # noqa\n assert isinstance(expr, ir.Expr)\n return execute_and_reset(expr, params=params, **kwargs)\n\n def list_tables(self, path=None):\n raise NotImplementedError\n\n def _list_tables_files(self, path=None):\n # tables are files in a dir\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_file():\n if str(d).endswith(self.extension):\n tables.append(d.stem)\n elif path.is_file():\n if str(path).endswith(self.extension):\n tables.append(path.stem)\n return tables\n\n def list_databases(self, path=None):\n raise NotImplementedError\n\n def _list_databases_dirs(self, path=None):\n # databases are dir\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_dir():\n tables.append(d.name)\n return tables\n\n def _list_databases_dirs_or_files(self, path=None):\n # databases are dir & file\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_dir():\n 
tables.append(d.name)\n elif d.is_file():\n if str(d).endswith(self.extension):\n tables.append(d.stem)\n elif path.is_file():\n # by definition we are at the db level at this point\n pass\n\n return tables\n\n\nclass FileDatabase(Database):\n def __init__(self, name, client, path=None):\n super().__init__(name, client)\n self.path = path\n\n def __str__(self):\n return '{0.__class__.__name__}({0.name})'.format(self)\n\n def __dir__(self):\n dbs = self.list_databases(path=self.path)\n tables = self.list_tables(path=self.path)\n return sorted(set(dbs).union(set(tables)))\n\n def __getattr__(self, name):\n try:\n return self.table(name, path=self.path)\n except AttributeError:\n return self.database(name, path=self.path)\n\n def table(self, name, path):\n return self.client.table(name, path=path)\n\n def database(self, name=None, path=None):\n return self.client.database(name=name, path=path)\n\n def list_databases(self, path=None):\n if path is None:\n path = self.path\n return sorted(self.client.list_databases(path=path))\n\n def list_tables(self, path=None):\n if path is None:\n path = self.path\n return sorted(self.client.list_tables(path=path))\n\n\nclass BaseFileBackend(BaseBackend):\n \"\"\"\n Base backend class for pandas pseudo-backends for file formats.\n \"\"\"\n\n def connect(self, path):\n \"\"\"Create a Client for use with Ibis\n\n Parameters\n ----------\n path : str or pathlib.Path\n\n Returns\n -------\n Client\n \"\"\"\n return self.client_class(backend=self, root=path)\n", "path": "ibis/backends/base/file/__init__.py"}]}
1,715
141
gh_patches_debug_35728
rasdani/github-patches
git_diff
mindsdb__lightwood-518
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Lightwood.api.ensemble is not necessary This script is deprecated, as the ensemble module has moved to `lw.ensemble` with a base abstraction. A quick inspection of the code (ex: grep for this call) and I don't see any references. Please double check if this file is required, as I think it should be removed. The culprit link is [here](https://github.com/mindsdb/lightwood/blob/0372d292796a6d1f91ac9df9b8658ad2f128b7c9/lightwood/api/ensemble.py) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `lightwood/api/ensemble.py` Content: ``` 1 from lightwood import Predictor 2 from lightwood.constants.lightwood import ColumnDataTypes 3 from collections import Counter 4 import numpy as np 5 import pickle 6 import os 7 8 9 class LightwoodEnsemble: 10 def __init__(self, predictors=None, load_from_path=None): 11 self.path_list = None 12 if load_from_path is not None: 13 with open(os.path.join(load_from_path, 'lightwood_data'), 'rb') as pickle_in: 14 obj = pickle.load(pickle_in) 15 self.path = load_from_path 16 self.path_list = obj.path_list 17 self.ensemble = [Predictor(load_from_path=path) for path in self.path_list] 18 elif isinstance(predictors, Predictor): 19 self.ensemble = [predictors] 20 elif isinstance(predictors, list): 21 self.ensemble = predictors 22 23 def append(self, predictor): 24 if isinstance(self.ensemble, list): 25 self.ensemble.append(predictor) 26 else: 27 self.ensemble = [predictor] 28 29 def __iter__(self): 30 yield self.ensemble 31 32 def predict(self, when_data): 33 predictions = [p.predict(when_data=when_data) for p in self.ensemble] 34 formatted_predictions = {} 35 for target in self.ensemble[0].config['output_features']: 36 target_name = target['name'] 37 formatted_predictions[target_name] = {} 38 pred_arr = np.array([p[target_name]['predictions'] for p in predictions]) 39 if target['type'] == ColumnDataTypes.NUMERIC: 40 final_preds = np.mean(pred_arr, axis=0).tolist() 41 elif target['type'] == ColumnDataTypes.CATEGORICAL: 42 final_preds = [max(Counter(pred_arr[:, idx])) for idx in range(pred_arr.shape[1])] 43 44 # @TODO: implement class distribution for ensembles 45 # NOTE: label set *could* grow when adding predictors, which complicates belief score computation 46 formatted_predictions[target_name]['class_distribution'] = np.ones(shape=(len(final_preds), 1)) 47 else: 48 raise Exception('Only numeric and categorical datatypes are supported for ensembles') 49 50 formatted_predictions[target_name]['predictions'] = final_preds 51 52 return formatted_predictions 53 54 def save(self, path_to): 55 # TODO: potentially save predictors inside ensemble pickle, though there's the issue of nonpersistent stuff with torch.save() # noqa 56 path_list = [] 57 for i, model in enumerate(self.ensemble): 58 path = os.path.join(path_to, f'lightwood_predictor_{i}') 59 path_list.append(path) 60 model.save(path_to=path) 61 62 self.path_list = path_list 63 64 # TODO: in the future, save preds inside this data struct 65 self.ensemble = None # we deref predictors for now 66 with open(os.path.join(path_to, 'lightwood_data'), 'wb') as file: 67 pickle.dump(self, file, pickle.HIGHEST_PROTOCOL) 68 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/lightwood/api/ensemble.py b/lightwood/api/ensemble.py deleted file mode 100644 --- a/lightwood/api/ensemble.py +++ /dev/null @@ -1,67 +0,0 @@ -from lightwood import Predictor -from lightwood.constants.lightwood import ColumnDataTypes -from collections import Counter -import numpy as np -import pickle -import os - - -class LightwoodEnsemble: - def __init__(self, predictors=None, load_from_path=None): - self.path_list = None - if load_from_path is not None: - with open(os.path.join(load_from_path, 'lightwood_data'), 'rb') as pickle_in: - obj = pickle.load(pickle_in) - self.path = load_from_path - self.path_list = obj.path_list - self.ensemble = [Predictor(load_from_path=path) for path in self.path_list] - elif isinstance(predictors, Predictor): - self.ensemble = [predictors] - elif isinstance(predictors, list): - self.ensemble = predictors - - def append(self, predictor): - if isinstance(self.ensemble, list): - self.ensemble.append(predictor) - else: - self.ensemble = [predictor] - - def __iter__(self): - yield self.ensemble - - def predict(self, when_data): - predictions = [p.predict(when_data=when_data) for p in self.ensemble] - formatted_predictions = {} - for target in self.ensemble[0].config['output_features']: - target_name = target['name'] - formatted_predictions[target_name] = {} - pred_arr = np.array([p[target_name]['predictions'] for p in predictions]) - if target['type'] == ColumnDataTypes.NUMERIC: - final_preds = np.mean(pred_arr, axis=0).tolist() - elif target['type'] == ColumnDataTypes.CATEGORICAL: - final_preds = [max(Counter(pred_arr[:, idx])) for idx in range(pred_arr.shape[1])] - - # @TODO: implement class distribution for ensembles - # NOTE: label set *could* grow when adding predictors, which complicates belief score computation - formatted_predictions[target_name]['class_distribution'] = np.ones(shape=(len(final_preds), 1)) - else: - raise Exception('Only numeric and categorical datatypes are supported for ensembles') - - formatted_predictions[target_name]['predictions'] = final_preds - - return formatted_predictions - - def save(self, path_to): - # TODO: potentially save predictors inside ensemble pickle, though there's the issue of nonpersistent stuff with torch.save() # noqa - path_list = [] - for i, model in enumerate(self.ensemble): - path = os.path.join(path_to, f'lightwood_predictor_{i}') - path_list.append(path) - model.save(path_to=path) - - self.path_list = path_list - - # TODO: in the future, save preds inside this data struct - self.ensemble = None # we deref predictors for now - with open(os.path.join(path_to, 'lightwood_data'), 'wb') as file: - pickle.dump(self, file, pickle.HIGHEST_PROTOCOL)
{"golden_diff": "diff --git a/lightwood/api/ensemble.py b/lightwood/api/ensemble.py\ndeleted file mode 100644\n--- a/lightwood/api/ensemble.py\n+++ /dev/null\n@@ -1,67 +0,0 @@\n-from lightwood import Predictor\n-from lightwood.constants.lightwood import ColumnDataTypes\n-from collections import Counter\n-import numpy as np\n-import pickle\n-import os\n-\n-\n-class LightwoodEnsemble:\n- def __init__(self, predictors=None, load_from_path=None):\n- self.path_list = None\n- if load_from_path is not None:\n- with open(os.path.join(load_from_path, 'lightwood_data'), 'rb') as pickle_in:\n- obj = pickle.load(pickle_in)\n- self.path = load_from_path\n- self.path_list = obj.path_list\n- self.ensemble = [Predictor(load_from_path=path) for path in self.path_list]\n- elif isinstance(predictors, Predictor):\n- self.ensemble = [predictors]\n- elif isinstance(predictors, list):\n- self.ensemble = predictors\n-\n- def append(self, predictor):\n- if isinstance(self.ensemble, list):\n- self.ensemble.append(predictor)\n- else:\n- self.ensemble = [predictor]\n-\n- def __iter__(self):\n- yield self.ensemble\n-\n- def predict(self, when_data):\n- predictions = [p.predict(when_data=when_data) for p in self.ensemble]\n- formatted_predictions = {}\n- for target in self.ensemble[0].config['output_features']:\n- target_name = target['name']\n- formatted_predictions[target_name] = {}\n- pred_arr = np.array([p[target_name]['predictions'] for p in predictions])\n- if target['type'] == ColumnDataTypes.NUMERIC:\n- final_preds = np.mean(pred_arr, axis=0).tolist()\n- elif target['type'] == ColumnDataTypes.CATEGORICAL:\n- final_preds = [max(Counter(pred_arr[:, idx])) for idx in range(pred_arr.shape[1])]\n-\n- # @TODO: implement class distribution for ensembles\n- # NOTE: label set *could* grow when adding predictors, which complicates belief score computation\n- formatted_predictions[target_name]['class_distribution'] = np.ones(shape=(len(final_preds), 1))\n- else:\n- raise Exception('Only numeric and categorical datatypes are supported for ensembles')\n-\n- formatted_predictions[target_name]['predictions'] = final_preds\n-\n- return formatted_predictions\n-\n- def save(self, path_to):\n- # TODO: potentially save predictors inside ensemble pickle, though there's the issue of nonpersistent stuff with torch.save() # noqa\n- path_list = []\n- for i, model in enumerate(self.ensemble):\n- path = os.path.join(path_to, f'lightwood_predictor_{i}')\n- path_list.append(path)\n- model.save(path_to=path)\n-\n- self.path_list = path_list\n-\n- # TODO: in the future, save preds inside this data struct\n- self.ensemble = None # we deref predictors for now\n- with open(os.path.join(path_to, 'lightwood_data'), 'wb') as file:\n- pickle.dump(self, file, pickle.HIGHEST_PROTOCOL)\n", "issue": "Lightwood.api.ensemble is not necessary\nThis script is deprecated, as the ensemble module has moved to `lw.ensemble` with a base abstraction. A quick inspection of the code (ex: grep for this call) and I don't see any references. 
Please double check if this file is required, as I think it should be removed.\r\n\r\nThe culprit link is [here](https://github.com/mindsdb/lightwood/blob/0372d292796a6d1f91ac9df9b8658ad2f128b7c9/lightwood/api/ensemble.py)\n", "before_files": [{"content": "from lightwood import Predictor\nfrom lightwood.constants.lightwood import ColumnDataTypes\nfrom collections import Counter\nimport numpy as np\nimport pickle\nimport os\n\n\nclass LightwoodEnsemble:\n def __init__(self, predictors=None, load_from_path=None):\n self.path_list = None\n if load_from_path is not None:\n with open(os.path.join(load_from_path, 'lightwood_data'), 'rb') as pickle_in:\n obj = pickle.load(pickle_in)\n self.path = load_from_path\n self.path_list = obj.path_list\n self.ensemble = [Predictor(load_from_path=path) for path in self.path_list]\n elif isinstance(predictors, Predictor):\n self.ensemble = [predictors]\n elif isinstance(predictors, list):\n self.ensemble = predictors\n\n def append(self, predictor):\n if isinstance(self.ensemble, list):\n self.ensemble.append(predictor)\n else:\n self.ensemble = [predictor]\n\n def __iter__(self):\n yield self.ensemble\n\n def predict(self, when_data):\n predictions = [p.predict(when_data=when_data) for p in self.ensemble]\n formatted_predictions = {}\n for target in self.ensemble[0].config['output_features']:\n target_name = target['name']\n formatted_predictions[target_name] = {}\n pred_arr = np.array([p[target_name]['predictions'] for p in predictions])\n if target['type'] == ColumnDataTypes.NUMERIC:\n final_preds = np.mean(pred_arr, axis=0).tolist()\n elif target['type'] == ColumnDataTypes.CATEGORICAL:\n final_preds = [max(Counter(pred_arr[:, idx])) for idx in range(pred_arr.shape[1])]\n\n # @TODO: implement class distribution for ensembles\n # NOTE: label set *could* grow when adding predictors, which complicates belief score computation\n formatted_predictions[target_name]['class_distribution'] = np.ones(shape=(len(final_preds), 1))\n else:\n raise Exception('Only numeric and categorical datatypes are supported for ensembles')\n\n formatted_predictions[target_name]['predictions'] = final_preds\n\n return formatted_predictions\n\n def save(self, path_to):\n # TODO: potentially save predictors inside ensemble pickle, though there's the issue of nonpersistent stuff with torch.save() # noqa\n path_list = []\n for i, model in enumerate(self.ensemble):\n path = os.path.join(path_to, f'lightwood_predictor_{i}')\n path_list.append(path)\n model.save(path_to=path)\n\n self.path_list = path_list\n\n # TODO: in the future, save preds inside this data struct\n self.ensemble = None # we deref predictors for now\n with open(os.path.join(path_to, 'lightwood_data'), 'wb') as file:\n pickle.dump(self, file, pickle.HIGHEST_PROTOCOL)\n", "path": "lightwood/api/ensemble.py"}], "after_files": [{"content": null, "path": "lightwood/api/ensemble.py"}]}
1,140
724
gh_patches_debug_1352
rasdani/github-patches
git_diff
pwr-Solaar__Solaar-1826
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Release 1.1.7 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 #!/usr/bin/env python3 2 3 from glob import glob as _glob 4 5 try: 6 from setuptools import setup 7 except ImportError: 8 from distutils.core import setup 9 10 # from solaar import NAME, __version__ 11 __version__ = '1.1.7' 12 NAME = 'Solaar' 13 14 15 def _data_files(): 16 from os.path import dirname as _dirname 17 18 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg') 19 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png') 20 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg'] 21 22 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'): 23 yield _dirname(mo), [mo] 24 25 yield 'share/applications', ['share/applications/solaar.desktop'] 26 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules'] 27 yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml'] 28 29 del _dirname 30 31 32 setup( 33 name=NAME.lower(), 34 version=__version__, 35 description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.', 36 long_description=''' 37 Solaar is a Linux device manager for many Logitech peripherals that connect through 38 Unifying and other receivers or via USB or Bluetooth. 39 Solaar is able to pair/unpair devices with receivers and show and modify some of the 40 modifiable features of devices. 41 For instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(), 42 author='Daniel Pavel', 43 license='GPLv2', 44 url='http://pwr-solaar.github.io/Solaar/', 45 classifiers=[ 46 'Development Status :: 4 - Beta', 47 'Environment :: X11 Applications :: GTK', 48 'Environment :: Console', 49 'Intended Audience :: End Users/Desktop', 50 'License :: DFSG approved', 51 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)', 52 'Natural Language :: English', 53 'Programming Language :: Python :: 3 :: Only', 54 'Operating System :: POSIX :: Linux', 55 'Topic :: Utilities', 56 ], 57 platforms=['linux'], 58 59 # sudo apt install python-gi python3-gi \ 60 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1 61 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'], 62 python_requires='>=3.7', 63 install_requires=[ 64 'evdev (>= 1.1.2)', 65 'pyudev (>= 0.13)', 66 'PyYAML (>= 3.12)', 67 'python-xlib (>= 0.27)', 68 'psutil (>= 5.4.3)', 69 'typing_extensions (>=4.0.0)', 70 ], 71 extras_require={ 72 'report-descriptor': ['hid-parser'], 73 'desktop-notifications': ['Notify (>= 0.7)'], 74 }, 75 package_dir={'': 'lib'}, 76 packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'], 77 data_files=list(_data_files()), 78 scripts=_glob('bin/*'), 79 ) 80 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -66,7 +66,6 @@ 'PyYAML (>= 3.12)', 'python-xlib (>= 0.27)', 'psutil (>= 5.4.3)', - 'typing_extensions (>=4.0.0)', ], extras_require={ 'report-descriptor': ['hid-parser'],
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -66,7 +66,6 @@\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n- 'typing_extensions (>=4.0.0)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n", "issue": "Release 1.1.7\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.1.7'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n 'typing_extensions (>=4.0.0)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.1.7'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in 
_glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}]}
1,187
100
gh_patches_debug_28360
rasdani/github-patches
git_diff
pwr-Solaar__Solaar-711
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Running solaar does not open solaar ``` ➜ solaar --version solaar 1.0.1 ``` **What happens** When running `solaar`, it does not open the application itself. It _does_ add a try icon with a battery indicator or whatever. Running `solaar` a second time _does_ open the application windo. **Expected result** Running an application should open the application (that's pretty much as obvious as it gets). I shouldn't need to run it twice. If anyone wants just the try icon, something like `solaar --tray-icon` would probably work. **Other notes** I no longer have a tray bar set up on my desktop. So right now, running solaar once is a no-op. Running it twice actually opens the application. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `lib/solaar/gtk.py` Content: ``` 1 #!/usr/bin/env python3 2 # -*- python-mode -*- 3 # -*- coding: UTF-8 -*- 4 5 ## Copyright (C) 2012-2013 Daniel Pavel 6 ## 7 ## This program is free software; you can redistribute it and/or modify 8 ## it under the terms of the GNU General Public License as published by 9 ## the Free Software Foundation; either version 2 of the License, or 10 ## (at your option) any later version. 11 ## 12 ## This program is distributed in the hope that it will be useful, 13 ## but WITHOUT ANY WARRANTY; without even the implied warranty of 14 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 ## GNU General Public License for more details. 16 ## 17 ## You should have received a copy of the GNU General Public License along 18 ## with this program; if not, write to the Free Software Foundation, Inc., 19 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 20 21 from __future__ import absolute_import, division, print_function, unicode_literals 22 23 import importlib 24 25 26 from solaar import __version__, NAME 27 import solaar.i18n as _i18n 28 import solaar.cli as _cli 29 30 # 31 # 32 # 33 34 def _require(module, os_package): 35 try: 36 return importlib.import_module(module) 37 except ImportError: 38 import sys 39 sys.exit("%s: missing required package '%s'" % (NAME, os_package)) 40 41 42 def _parse_arguments(): 43 import argparse 44 arg_parser = argparse.ArgumentParser(prog=NAME.lower()) 45 arg_parser.add_argument('-d', '--debug', action='count', default=0, 46 help='print logging messages, for debugging purposes (may be repeated for extra verbosity)') 47 arg_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH', 48 help='unifying receiver to use; the first detected receiver if unspecified. 
Example: /dev/hidraw2') 49 arg_parser.add_argument('--restart-on-wake-up', action='store_true', 50 help='restart Solaar on sleep wake-up (experimental)') 51 arg_parser.add_argument('-w', '--window', choices=('hide','show','only'), help='start with window hidden / showing / only (no tray icon)') 52 arg_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__) 53 arg_parser.add_argument('--help-actions', action='store_true', 54 help='print help for the optional actions') 55 arg_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions, 56 help='optional actions to perform') 57 58 args = arg_parser.parse_args() 59 60 if args.help_actions: 61 _cli.print_help() 62 return 63 64 if args.window is None: 65 args.window = 'hide' 66 67 import logging 68 if args.debug > 0: 69 log_level = logging.WARNING - 10 * args.debug 70 log_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s' 71 logging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S') 72 else: 73 logging.root.addHandler(logging.NullHandler()) 74 logging.root.setLevel(logging.ERROR) 75 76 if not args.action: 77 if logging.root.isEnabledFor(logging.INFO): 78 logging.info("language %s (%s), translations path %s", _i18n.language, _i18n.encoding, _i18n.path) 79 80 return args 81 82 83 def main(): 84 _require('pyudev', 'python-pyudev') 85 86 # handle ^C in console 87 import signal 88 signal.signal(signal.SIGINT, signal.SIG_DFL) 89 90 args = _parse_arguments() 91 if not args: return 92 if args.action: 93 # if any argument, run comandline and exit 94 return _cli.run(args.action, args.hidraw_path) 95 96 gi = _require('gi', 'python-gi') 97 gi.require_version('Gtk', '3.0') 98 _require('gi.repository.Gtk', 'gir1.2-gtk-3.0') 99 100 try: 101 import solaar.ui as ui 102 import solaar.listener as listener 103 listener.setup_scanner(ui.status_changed, ui.error_dialog) 104 105 import solaar.upower as _upower 106 if args.restart_on_wake_up: 107 _upower.watch(listener.start_all, listener.stop_all) 108 else: 109 _upower.watch(lambda: listener.ping_all(True)) 110 111 # main UI event loop 112 ui.run_loop(listener.start_all, listener.stop_all, args.window!='only', args.window!='hide') 113 except Exception as e: 114 import sys 115 sys.exit('%s: error: %s' % (NAME.lower(), e)) 116 117 118 if __name__ == '__main__': 119 main() 120 ``` Path: `setup.py` Content: ``` 1 #!/usr/bin/env python3 2 3 from glob import glob as _glob 4 try: 5 from setuptools import setup 6 except ImportError: 7 from distutils.core import setup 8 9 autostart_path = '/etc/xdg/autostart' 10 11 import sys 12 backup_path_0 = sys.path[0] 13 sys.path[0] = backup_path_0 + '/lib' 14 #from solaar import NAME, __version__ 15 __version__ = '1.0.1' 16 NAME = 'Solaar' 17 18 sys.path[0] = backup_path_0 19 20 if 'install' in sys.argv: 21 # naively guess where the autostart .desktop file should be installed 22 if '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv: 23 autostart_path = 'etc/xdg/autostart' 24 elif '--user' in sys.argv: 25 from os import environ 26 from os import path 27 xdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config'))) 28 autostart_path = path.join(xdg_config_home, 'autostart') 29 del environ, path, xdg_config_home 30 31 del sys, backup_path_0 32 33 34 def _data_files(): 35 from os.path import dirname as _dirname 36 37 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg') 38 
yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png') 39 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg'] 40 41 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'): 42 yield _dirname(mo), [mo] 43 44 yield 'share/applications', ['share/applications/solaar.desktop'] 45 yield autostart_path, ['share/applications/solaar.desktop'] 46 47 del _dirname 48 49 50 setup(name=NAME.lower(), 51 version=__version__, 52 description='Linux devices manager for the Logitech Unifying Receiver.', 53 long_description=''' 54 Solaar is a Linux device manager for Logitech's Unifying Receiver peripherals. 55 It is able to pair/unpair devices to the receiver, and for some devices read 56 battery status. 57 '''.strip(), 58 author='Daniel Pavel', 59 author_email='[email protected]', 60 license='GPLv2', 61 url='http://pwr-solaar.github.io/Solaar/', 62 classifiers=[ 63 'Development Status :: 4 - Beta', 64 'Environment :: X11 Applications :: GTK', 65 'Environment :: Console', 66 'Intended Audience :: End Users/Desktop', 67 'License :: DFSG approved', 68 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)', 69 'Natural Language :: English', 70 'Programming Language :: Python :: 3 :: Only', 71 'Operating System :: POSIX :: Linux', 72 'Topic :: Utilities', 73 ], 74 75 platforms=['linux'], 76 77 # sudo apt install python-gi python3-gi \ 78 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-appindicator3-0.1 79 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'], 80 81 python_requires='>=3.2', 82 install_requires=['pyudev (>= 0.13)', ], 83 package_dir={'': 'lib'}, 84 packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'], 85 data_files=list(_data_files()), 86 scripts=_glob('bin/*'), 87 ) 88 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py --- a/lib/solaar/gtk.py +++ b/lib/solaar/gtk.py @@ -48,7 +48,7 @@ help='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2') arg_parser.add_argument('--restart-on-wake-up', action='store_true', help='restart Solaar on sleep wake-up (experimental)') - arg_parser.add_argument('-w', '--window', choices=('hide','show','only'), help='start with window hidden / showing / only (no tray icon)') + arg_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)') arg_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__) arg_parser.add_argument('--help-actions', action='store_true', help='print help for the optional actions') @@ -62,7 +62,7 @@ return if args.window is None: - args.window = 'hide' + args.window = 'show' # default behaviour is to show main window import logging if args.debug > 0: diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -42,7 +42,7 @@ yield _dirname(mo), [mo] yield 'share/applications', ['share/applications/solaar.desktop'] - yield autostart_path, ['share/applications/solaar.desktop'] + yield autostart_path, ['share/autostart/solaar.desktop'] del _dirname
{"golden_diff": "diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py\n--- a/lib/solaar/gtk.py\n+++ b/lib/solaar/gtk.py\n@@ -48,7 +48,7 @@\n \t\t\t\t\t\t\thelp='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')\n \targ_parser.add_argument('--restart-on-wake-up', action='store_true',\n \t\t\t\t\t\t\thelp='restart Solaar on sleep wake-up (experimental)')\n-\targ_parser.add_argument('-w', '--window', choices=('hide','show','only'), help='start with window hidden / showing / only (no tray icon)')\n+\targ_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')\n \targ_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n \targ_parser.add_argument('--help-actions', action='store_true',\n \t\t\t\t\t\t\thelp='print help for the optional actions')\n@@ -62,7 +62,7 @@\n \t\treturn\n \n \tif args.window is None:\n-\t\targs.window = 'hide'\n+\t\targs.window = 'show' # default behaviour is to show main window\n \n \timport logging\n \tif args.debug > 0:\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,7 +42,7 @@\n \t\tyield _dirname(mo), [mo]\n \n \tyield 'share/applications', ['share/applications/solaar.desktop']\n-\tyield autostart_path, ['share/applications/solaar.desktop']\n+\tyield autostart_path, ['share/autostart/solaar.desktop']\n \n \tdel _dirname\n", "issue": "Running solaar does not open solaar\n```\r\n\u279c solaar --version\r\nsolaar 1.0.1\r\n```\r\n**What happens**\r\nWhen running `solaar`, it does not open the application itself. It _does_ add a try icon with a battery indicator or whatever.\r\nRunning `solaar` a second time _does_ open the application windo.\r\n\r\n**Expected result**\r\nRunning an application should open the application (that's pretty much as obvious as it gets). I shouldn't need to run it twice.\r\nIf anyone wants just the try icon, something like `solaar --tray-icon` would probably work.\r\n\r\n**Other notes**\r\nI no longer have a tray bar set up on my desktop. So right now, running solaar once is a no-op. Running it twice actually opens the application.\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport importlib\n\n\nfrom solaar import __version__, NAME\nimport solaar.i18n as _i18n\nimport solaar.cli as _cli\n\n#\n#\n#\n\ndef _require(module, os_package):\n\ttry:\n\t\treturn importlib.import_module(module)\n\texcept ImportError:\n\t\timport sys\n\t\tsys.exit(\"%s: missing required package '%s'\" % (NAME, os_package))\n\n\ndef _parse_arguments():\n\timport argparse\n\targ_parser = argparse.ArgumentParser(prog=NAME.lower())\n\targ_parser.add_argument('-d', '--debug', action='count', default=0,\n\t\t\t\t\t\t\thelp='print logging messages, for debugging purposes (may be repeated for extra verbosity)')\n\targ_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',\n\t\t\t\t\t\t\thelp='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')\n\targ_parser.add_argument('--restart-on-wake-up', action='store_true',\n\t\t\t\t\t\t\thelp='restart Solaar on sleep wake-up (experimental)')\n\targ_parser.add_argument('-w', '--window', choices=('hide','show','only'), help='start with window hidden / showing / only (no tray icon)')\n\targ_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n\targ_parser.add_argument('--help-actions', action='store_true',\n\t\t\t\t\t\t\thelp='print help for the optional actions')\n\targ_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,\n\t\t\t\t\t\t\thelp='optional actions to perform')\n\n\targs = arg_parser.parse_args()\n\n\tif args.help_actions:\n\t\t_cli.print_help()\n\t\treturn\n\n\tif args.window is None:\n\t\targs.window = 'hide'\n\n\timport logging\n\tif args.debug > 0:\n\t\tlog_level = logging.WARNING - 10 * args.debug\n\t\tlog_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'\n\t\tlogging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')\n\telse:\n\t\tlogging.root.addHandler(logging.NullHandler())\n\t\tlogging.root.setLevel(logging.ERROR)\n\n\tif not args.action:\n\t\tif logging.root.isEnabledFor(logging.INFO):\n\t\t\tlogging.info(\"language %s (%s), translations path %s\", _i18n.language, _i18n.encoding, _i18n.path)\n\n\treturn args\n\n\ndef main():\n\t_require('pyudev', 'python-pyudev')\n\n\t# handle ^C in console\n\timport signal\n\tsignal.signal(signal.SIGINT, signal.SIG_DFL)\n\n\targs = _parse_arguments()\n\tif not args: return\n\tif args.action:\n\t\t# if any argument, run comandline and exit\n\t\treturn _cli.run(args.action, args.hidraw_path)\n\n\tgi = _require('gi', 'python-gi')\n\tgi.require_version('Gtk', '3.0')\n\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\n\n\ttry:\n\t\timport solaar.ui as ui\n\t\timport solaar.listener as listener\n\t\tlistener.setup_scanner(ui.status_changed, ui.error_dialog)\n\n\t\timport solaar.upower as _upower\n\t\tif args.restart_on_wake_up:\n\t\t\t_upower.watch(listener.start_all, listener.stop_all)\n\t\telse:\n\t\t\t_upower.watch(lambda: listener.ping_all(True))\n\n\t\t# main UI event loop\n\t\tui.run_loop(listener.start_all, listener.stop_all, args.window!='only', args.window!='hide')\n\texcept Exception as e:\n\t\timport sys\n\t\tsys.exit('%s: 
error: %s' % (NAME.lower(), e))\n\n\nif __name__ == '__main__':\n\tmain()\n", "path": "lib/solaar/gtk.py"}, {"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nautostart_path = '/etc/xdg/autostart'\n\nimport sys\nbackup_path_0 = sys.path[0]\nsys.path[0] = backup_path_0 + '/lib'\n#from solaar import NAME, __version__\n__version__ = '1.0.1'\nNAME = 'Solaar'\n\nsys.path[0] = backup_path_0\n\nif 'install' in sys.argv:\n\t# naively guess where the autostart .desktop file should be installed\n\tif '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv:\n\t\tautostart_path = 'etc/xdg/autostart'\n\telif '--user' in sys.argv:\n\t\tfrom os import environ\n\t\tfrom os import path\n\t\txdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config')))\n\t\tautostart_path = path.join(xdg_config_home, 'autostart')\n\t\tdel environ, path, xdg_config_home\n\ndel sys, backup_path_0\n\n\ndef _data_files():\n\tfrom os.path import dirname as _dirname\n\n\tyield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n\tyield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n\tyield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n\tfor mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n\t\tyield _dirname(mo), [mo]\n\n\tyield 'share/applications', ['share/applications/solaar.desktop']\n\tyield autostart_path, ['share/applications/solaar.desktop']\n\n\tdel _dirname\n\n\nsetup(name=NAME.lower(),\n\t\tversion=__version__,\n\t\tdescription='Linux devices manager for the Logitech Unifying Receiver.',\n\t\tlong_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices to the receiver, and for some devices read\nbattery status.\n'''.strip(),\n\t\tauthor='Daniel Pavel',\n\t\tauthor_email='[email protected]',\n\t\tlicense='GPLv2',\n\t\turl='http://pwr-solaar.github.io/Solaar/',\n\t\tclassifiers=[\n\t\t\t'Development Status :: 4 - Beta',\n\t\t\t'Environment :: X11 Applications :: GTK',\n\t\t\t'Environment :: Console',\n\t\t\t'Intended Audience :: End Users/Desktop',\n\t\t\t'License :: DFSG approved',\n\t\t\t'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n\t\t\t'Natural Language :: English',\n\t\t\t'Programming Language :: Python :: 3 :: Only',\n\t\t\t'Operating System :: POSIX :: Linux',\n\t\t\t'Topic :: Utilities',\n\t\t\t],\n\n\t\tplatforms=['linux'],\n\n\t\t# sudo apt install python-gi python3-gi \\\n\t\t# gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-appindicator3-0.1\n\t\t# os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n\n\t\tpython_requires='>=3.2',\n\t\tinstall_requires=['pyudev (>= 0.13)', ],\n\t\tpackage_dir={'': 'lib'},\n\t\tpackages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n\t\tdata_files=list(_data_files()),\n\t\tscripts=_glob('bin/*'),\n\t)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be 
useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport importlib\n\n\nfrom solaar import __version__, NAME\nimport solaar.i18n as _i18n\nimport solaar.cli as _cli\n\n#\n#\n#\n\ndef _require(module, os_package):\n\ttry:\n\t\treturn importlib.import_module(module)\n\texcept ImportError:\n\t\timport sys\n\t\tsys.exit(\"%s: missing required package '%s'\" % (NAME, os_package))\n\n\ndef _parse_arguments():\n\timport argparse\n\targ_parser = argparse.ArgumentParser(prog=NAME.lower())\n\targ_parser.add_argument('-d', '--debug', action='count', default=0,\n\t\t\t\t\t\t\thelp='print logging messages, for debugging purposes (may be repeated for extra verbosity)')\n\targ_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',\n\t\t\t\t\t\t\thelp='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')\n\targ_parser.add_argument('--restart-on-wake-up', action='store_true',\n\t\t\t\t\t\t\thelp='restart Solaar on sleep wake-up (experimental)')\n\targ_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')\n\targ_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n\targ_parser.add_argument('--help-actions', action='store_true',\n\t\t\t\t\t\t\thelp='print help for the optional actions')\n\targ_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,\n\t\t\t\t\t\t\thelp='optional actions to perform')\n\n\targs = arg_parser.parse_args()\n\n\tif args.help_actions:\n\t\t_cli.print_help()\n\t\treturn\n\n\tif args.window is None:\n\t\targs.window = 'show' # default behaviour is to show main window\n\n\timport logging\n\tif args.debug > 0:\n\t\tlog_level = logging.WARNING - 10 * args.debug\n\t\tlog_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'\n\t\tlogging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')\n\telse:\n\t\tlogging.root.addHandler(logging.NullHandler())\n\t\tlogging.root.setLevel(logging.ERROR)\n\n\tif not args.action:\n\t\tif logging.root.isEnabledFor(logging.INFO):\n\t\t\tlogging.info(\"language %s (%s), translations path %s\", _i18n.language, _i18n.encoding, _i18n.path)\n\n\treturn args\n\n\ndef main():\n\t_require('pyudev', 'python-pyudev')\n\n\t# handle ^C in console\n\timport signal\n\tsignal.signal(signal.SIGINT, signal.SIG_DFL)\n\n\targs = _parse_arguments()\n\tif not args: return\n\tif args.action:\n\t\t# if any argument, run comandline and exit\n\t\treturn _cli.run(args.action, args.hidraw_path)\n\n\tgi = _require('gi', 'python-gi')\n\tgi.require_version('Gtk', '3.0')\n\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\n\n\ttry:\n\t\timport solaar.ui as ui\n\t\timport solaar.listener as listener\n\t\tlistener.setup_scanner(ui.status_changed, ui.error_dialog)\n\n\t\timport solaar.upower as _upower\n\t\tif args.restart_on_wake_up:\n\t\t\t_upower.watch(listener.start_all, listener.stop_all)\n\t\telse:\n\t\t\t_upower.watch(lambda: listener.ping_all(True))\n\n\t\t# 
main UI event loop\n\t\tui.run_loop(listener.start_all, listener.stop_all, args.window!='only', args.window!='hide')\n\texcept Exception as e:\n\t\timport sys\n\t\tsys.exit('%s: error: %s' % (NAME.lower(), e))\n\n\nif __name__ == '__main__':\n\tmain()\n", "path": "lib/solaar/gtk.py"}, {"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nautostart_path = '/etc/xdg/autostart'\n\nimport sys\nbackup_path_0 = sys.path[0]\nsys.path[0] = backup_path_0 + '/lib'\n#from solaar import NAME, __version__\n__version__ = '1.0.1'\nNAME = 'Solaar'\n\nsys.path[0] = backup_path_0\n\nif 'install' in sys.argv:\n\t# naively guess where the autostart .desktop file should be installed\n\tif '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv:\n\t\tautostart_path = 'etc/xdg/autostart'\n\telif '--user' in sys.argv:\n\t\tfrom os import environ\n\t\tfrom os import path\n\t\txdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config')))\n\t\tautostart_path = path.join(xdg_config_home, 'autostart')\n\t\tdel environ, path, xdg_config_home\n\ndel sys, backup_path_0\n\n\ndef _data_files():\n\tfrom os.path import dirname as _dirname\n\n\tyield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n\tyield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n\tyield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n\tfor mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n\t\tyield _dirname(mo), [mo]\n\n\tyield 'share/applications', ['share/applications/solaar.desktop']\n\tyield autostart_path, ['share/autostart/solaar.desktop']\n\n\tdel _dirname\n\n\nsetup(name=NAME.lower(),\n\t\tversion=__version__,\n\t\tdescription='Linux devices manager for the Logitech Unifying Receiver.',\n\t\tlong_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices to the receiver, and for some devices read\nbattery status.\n'''.strip(),\n\t\tauthor='Daniel Pavel',\n\t\tauthor_email='[email protected]',\n\t\tlicense='GPLv2',\n\t\turl='http://pwr-solaar.github.io/Solaar/',\n\t\tclassifiers=[\n\t\t\t'Development Status :: 4 - Beta',\n\t\t\t'Environment :: X11 Applications :: GTK',\n\t\t\t'Environment :: Console',\n\t\t\t'Intended Audience :: End Users/Desktop',\n\t\t\t'License :: DFSG approved',\n\t\t\t'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n\t\t\t'Natural Language :: English',\n\t\t\t'Programming Language :: Python :: 3 :: Only',\n\t\t\t'Operating System :: POSIX :: Linux',\n\t\t\t'Topic :: Utilities',\n\t\t\t],\n\n\t\tplatforms=['linux'],\n\n\t\t# sudo apt install python-gi python3-gi \\\n\t\t# gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-appindicator3-0.1\n\t\t# os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n\n\t\tpython_requires='>=3.2',\n\t\tinstall_requires=['pyudev (>= 0.13)', ],\n\t\tpackage_dir={'': 'lib'},\n\t\tpackages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n\t\tdata_files=list(_data_files()),\n\t\tscripts=_glob('bin/*'),\n\t)\n", "path": "setup.py"}]}
2,766
383
gh_patches_debug_6252
rasdani/github-patches
git_diff
google__turbinia-809
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- GrepTask issue ``` 2021-04-28 17:13:25 [ERROR] GrepTask Task failed with exception: [a bytes-like object is required, not 'str'] 2021-04-28 17:13:25 [ERROR] Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/__init__.py", line 893, in run_wrapper self.result = self.run(evidence, self.result) File "/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/grep.py", line 49, in run fh.write('\n'.join(patterns)) File "/usr/lib/python3.6/tempfile.py", line 624, in func_wrapper return func(*args, **kwargs) TypeError: a bytes-like object is required, not 'str' 2021-04-28 17:13:26 [ERROR] GrepTask Task failed with exception: [a bytes-like object is required, not 'str'] 2021-04-28 17:13:26 [INFO] Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/__init__.py", line 893, in run_wrapper self.result = self.run(evidence, self.result) File "/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/grep.py", line 49, in run fh.write('\n'.join(patterns)) File "/usr/lib/python3.6/tempfile.py", line 624, in func_wrapper return func(*args, **kwargs) TypeError: a bytes-like object is required, not 'str' ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `turbinia/workers/grep.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 # Copyright 2015 Google Inc. 3 # 4 # Licensed under the Apache License, Version 2.0 (the "License"); 5 # you may not use this file except in compliance with the License. 6 # You may obtain a copy of the License at 7 # 8 # http://www.apache.org/licenses/LICENSE-2.0 9 # 10 # Unless required by applicable law or agreed to in writing, software 11 # distributed under the License is distributed on an "AS IS" BASIS, 12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 15 """Task to filter a text file using extended regular expression patterns.""" 16 17 from __future__ import unicode_literals 18 19 import os 20 from tempfile import NamedTemporaryFile 21 22 from turbinia.evidence import FilteredTextFile 23 from turbinia.workers import TurbiniaTask 24 25 26 class GrepTask(TurbiniaTask): 27 """Filter input based on extended regular expression patterns.""" 28 29 def run(self, evidence, result): 30 """Run grep binary. 31 32 Args: 33 evidence (Evidence object): The evidence we will process 34 result (TurbiniaTaskResult): The object to place task results into. 35 36 Returns: 37 TurbiniaTaskResult object. 38 """ 39 40 patterns = evidence.config.get('filter_patterns') 41 if not patterns: 42 result.close(self, success=True, status='No patterns supplied, exit task') 43 return result 44 45 # Create temporary file to write patterns to. 46 # Used as input to grep (-f). 47 with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh: 48 patterns_file_path = fh.name 49 fh.write('\n'.join(patterns)) 50 51 # Create a path that we can write the new file to. 
52 base_name = os.path.basename(evidence.local_path) 53 output_file_path = os.path.join( 54 self.output_dir, '{0:s}.filtered'.format(base_name)) 55 56 output_evidence = FilteredTextFile(source_path=output_file_path) 57 cmd = 'grep -E -b -n -f {0:s} {1:s} > {2:s}'.format( 58 patterns_file_path, evidence.local_path, output_file_path) 59 60 result.log('Running [{0:s}]'.format(cmd)) 61 ret, result = self.execute( 62 cmd, result, new_evidence=[output_evidence], shell=True, 63 success_codes=[0, 1]) 64 65 # Grep returns 0 on success and 1 if no results are found. 66 if ret == 0: 67 status = 'Grep Task found results in {0:s}'.format(evidence.name) 68 result.close(self, success=True, status=status) 69 elif ret == 1: 70 status = 'Grep Task did not find any results in {0:s}'.format( 71 evidence.name) 72 result.close(self, success=True, status=status) 73 else: 74 result.close(self, success=False) 75 76 return result 77 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/turbinia/workers/grep.py b/turbinia/workers/grep.py --- a/turbinia/workers/grep.py +++ b/turbinia/workers/grep.py @@ -46,7 +46,7 @@ # Used as input to grep (-f). with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh: patterns_file_path = fh.name - fh.write('\n'.join(patterns)) + fh.write('\n'.join(patterns.encode('utf-8'))) # Create a path that we can write the new file to. base_name = os.path.basename(evidence.local_path)
{"golden_diff": "diff --git a/turbinia/workers/grep.py b/turbinia/workers/grep.py\n--- a/turbinia/workers/grep.py\n+++ b/turbinia/workers/grep.py\n@@ -46,7 +46,7 @@\n # Used as input to grep (-f).\n with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh:\n patterns_file_path = fh.name\n- fh.write('\\n'.join(patterns))\n+ fh.write('\\n'.join(patterns.encode('utf-8')))\n \n # Create a path that we can write the new file to.\n base_name = os.path.basename(evidence.local_path)\n", "issue": "GrepTask issue\n```\r\n2021-04-28 17:13:25 [ERROR] GrepTask Task failed with exception: [a bytes-like object is required, not 'str']\r\n2021-04-28 17:13:25 [ERROR] Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/__init__.py\", line 893, in run_wrapper\r\n self.result = self.run(evidence, self.result)\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/grep.py\", line 49, in run\r\n fh.write('\\n'.join(patterns))\r\n File \"/usr/lib/python3.6/tempfile.py\", line 624, in func_wrapper\r\n return func(*args, **kwargs)\r\nTypeError: a bytes-like object is required, not 'str'\r\n\r\n2021-04-28 17:13:26 [ERROR] GrepTask Task failed with exception: [a bytes-like object is required, not 'str']\r\n2021-04-28 17:13:26 [INFO] Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/__init__.py\", line 893, in run_wrapper\r\n self.result = self.run(evidence, self.result)\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/grep.py\", line 49, in run\r\n fh.write('\\n'.join(patterns))\r\n File \"/usr/lib/python3.6/tempfile.py\", line 624, in func_wrapper\r\n return func(*args, **kwargs)\r\nTypeError: a bytes-like object is required, not 'str'\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task to filter a text file using extended regular expression patterns.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nfrom tempfile import NamedTemporaryFile\n\nfrom turbinia.evidence import FilteredTextFile\nfrom turbinia.workers import TurbiniaTask\n\n\nclass GrepTask(TurbiniaTask):\n \"\"\"Filter input based on extended regular expression patterns.\"\"\"\n\n def run(self, evidence, result):\n \"\"\"Run grep binary.\n\n Args:\n evidence (Evidence object): The evidence we will process\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n patterns = evidence.config.get('filter_patterns')\n if not patterns:\n result.close(self, success=True, status='No patterns supplied, exit task')\n return result\n\n # Create temporary file to write patterns to.\n # Used as input to grep (-f).\n with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh:\n patterns_file_path = fh.name\n 
fh.write('\\n'.join(patterns))\n\n # Create a path that we can write the new file to.\n base_name = os.path.basename(evidence.local_path)\n output_file_path = os.path.join(\n self.output_dir, '{0:s}.filtered'.format(base_name))\n\n output_evidence = FilteredTextFile(source_path=output_file_path)\n cmd = 'grep -E -b -n -f {0:s} {1:s} > {2:s}'.format(\n patterns_file_path, evidence.local_path, output_file_path)\n\n result.log('Running [{0:s}]'.format(cmd))\n ret, result = self.execute(\n cmd, result, new_evidence=[output_evidence], shell=True,\n success_codes=[0, 1])\n\n # Grep returns 0 on success and 1 if no results are found.\n if ret == 0:\n status = 'Grep Task found results in {0:s}'.format(evidence.name)\n result.close(self, success=True, status=status)\n elif ret == 1:\n status = 'Grep Task did not find any results in {0:s}'.format(\n evidence.name)\n result.close(self, success=True, status=status)\n else:\n result.close(self, success=False)\n\n return result\n", "path": "turbinia/workers/grep.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task to filter a text file using extended regular expression patterns.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nfrom tempfile import NamedTemporaryFile\n\nfrom turbinia.evidence import FilteredTextFile\nfrom turbinia.workers import TurbiniaTask\n\n\nclass GrepTask(TurbiniaTask):\n \"\"\"Filter input based on extended regular expression patterns.\"\"\"\n\n def run(self, evidence, result):\n \"\"\"Run grep binary.\n\n Args:\n evidence (Evidence object): The evidence we will process\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n patterns = evidence.config.get('filter_patterns')\n if not patterns:\n result.close(self, success=True, status='No patterns supplied, exit task')\n return result\n\n # Create temporary file to write patterns to.\n # Used as input to grep (-f).\n with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh:\n patterns_file_path = fh.name\n fh.write('\\n'.join(patterns.encode('utf-8')))\n\n # Create a path that we can write the new file to.\n base_name = os.path.basename(evidence.local_path)\n output_file_path = os.path.join(\n self.output_dir, '{0:s}.filtered'.format(base_name))\n\n output_evidence = FilteredTextFile(source_path=output_file_path)\n cmd = 'grep -E -b -n -f {0:s} {1:s} > {2:s}'.format(\n patterns_file_path, evidence.local_path, output_file_path)\n\n result.log('Running [{0:s}]'.format(cmd))\n ret, result = self.execute(\n cmd, result, new_evidence=[output_evidence], shell=True,\n success_codes=[0, 1])\n\n # Grep returns 0 on success and 1 if no results are found.\n if ret == 0:\n status = 'Grep Task found results in {0:s}'.format(evidence.name)\n result.close(self, success=True, status=status)\n elif ret == 1:\n status = 'Grep Task did not find any results in {0:s}'.format(\n evidence.name)\n result.close(self, 
success=True, status=status)\n else:\n result.close(self, success=False)\n\n return result\n", "path": "turbinia/workers/grep.py"}]}
1,543
148
gh_patches_debug_37596
rasdani/github-patches
git_diff
streamlink__streamlink-4550
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- plugins.useetv: log if no link has been found <!-- Thanks for opening a pull request! Before you continue, please make sure that you have read and understood the contribution guidelines, otherwise your changes may be rejected: https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink If possible, run the tests, perform code linting and build the documentation locally on your system first to avoid unnecessary build failures: https://streamlink.github.io/latest/developing.html#validating-changes Also don't forget to add a meaningful description of your changes, so that the reviewing process is as simple as possible for the maintainers. Thank you very much! --> **Why this PR ?** This PR has been made to verify if no link has been found. Indeed, USeeTV doesn't provide all his channels worldwide. Some channels are blocked for Indonesian people only, and some others need a subscription to work (see beIN Asia as an example). Some channels like SeaToday would work, but channels like this one : ![image](https://user-images.githubusercontent.com/30985701/170096616-4d22b9aa-9972-418e-8bc6-1c99be1c1e88.png) will only show a Geo-restriction message above the player, telling the end-user he has no access to the stream. This also reflects inside the player, meaning no link can be scraped. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/streamlink/plugins/useetv.py` Content: ``` 1 """ 2 $description Live TV channels and video on-demand service from UseeTV, owned by Telkom Indonesia. 3 $url useetv.com 4 $type live, vod 5 """ 6 7 import re 8 9 from streamlink.plugin import Plugin, pluginmatcher 10 from streamlink.plugin.api import validate 11 from streamlink.stream.dash import DASHStream 12 from streamlink.stream.hls import HLSStream 13 14 15 @pluginmatcher(re.compile(r"https?://(?:www\.)?useetv\.com/")) 16 class UseeTV(Plugin): 17 def find_url(self): 18 url_re = re.compile(r"""['"](https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd)[^'"]+)['"]""") 19 20 return self.session.http.get(self.url, schema=validate.Schema( 21 validate.parse_html(), 22 validate.any( 23 validate.all( 24 validate.xml_xpath_string(""" 25 .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text() 26 """), 27 str, 28 validate.transform(url_re.search), 29 validate.any(None, validate.all(validate.get(1), validate.url())), 30 ), 31 validate.all( 32 validate.xml_xpath_string(".//video[@id='video-player']/source/@src"), 33 validate.any(None, validate.url()), 34 ), 35 ), 36 )) 37 38 def _get_streams(self): 39 url = self.find_url() 40 41 if url and ".m3u8" in url: 42 return HLSStream.parse_variant_playlist(self.session, url) 43 elif url and ".mpd" in url: 44 return DASHStream.parse_manifest(self.session, url) 45 46 47 __plugin__ = UseeTV 48 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/streamlink/plugins/useetv.py b/src/streamlink/plugins/useetv.py --- a/src/streamlink/plugins/useetv.py +++ b/src/streamlink/plugins/useetv.py @@ -4,6 +4,7 @@ $type live, vod """ +import logging import re from streamlink.plugin import Plugin, pluginmatcher @@ -11,32 +12,46 @@ from streamlink.stream.dash import DASHStream from streamlink.stream.hls import HLSStream +log = logging.getLogger(__name__) + @pluginmatcher(re.compile(r"https?://(?:www\.)?useetv\.com/")) class UseeTV(Plugin): - def find_url(self): - url_re = re.compile(r"""['"](https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd)[^'"]+)['"]""") + def _get_streams(self): + root = self.session.http.get(self.url, schema=validate.Schema(validate.parse_html())) + + for needle, errormsg in ( + ( + "This service is not available in your Country", + "The content is not available in your region", + ), + ( + "Silahkan login Menggunakan akun MyIndihome dan berlangganan minipack", + "The content is not available without a subscription", + ), + ): + if validate.Schema(validate.xml_xpath(f""".//script[contains(text(), '"{needle}"')]""")).validate(root): + log.error(errormsg) + return - return self.session.http.get(self.url, schema=validate.Schema( - validate.parse_html(), + url = validate.Schema( validate.any( validate.all( validate.xml_xpath_string(""" .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text() """), str, - validate.transform(url_re.search), - validate.any(None, validate.all(validate.get(1), validate.url())), + validate.transform( + re.compile(r"""(?P<q>['"])(?P<url>https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd).+?)(?P=q)""").search + ), + validate.any(None, validate.all(validate.get("url"), validate.url())), ), validate.all( validate.xml_xpath_string(".//video[@id='video-player']/source/@src"), validate.any(None, validate.url()), ), - ), - )) - - def _get_streams(self): - url = self.find_url() + ) + ).validate(root) if url and ".m3u8" in url: return HLSStream.parse_variant_playlist(self.session, url)
{"golden_diff": "diff --git a/src/streamlink/plugins/useetv.py b/src/streamlink/plugins/useetv.py\n--- a/src/streamlink/plugins/useetv.py\n+++ b/src/streamlink/plugins/useetv.py\n@@ -4,6 +4,7 @@\n $type live, vod\n \"\"\"\n \n+import logging\n import re\n \n from streamlink.plugin import Plugin, pluginmatcher\n@@ -11,32 +12,46 @@\n from streamlink.stream.dash import DASHStream\n from streamlink.stream.hls import HLSStream\n \n+log = logging.getLogger(__name__)\n+\n \n @pluginmatcher(re.compile(r\"https?://(?:www\\.)?useetv\\.com/\"))\n class UseeTV(Plugin):\n- def find_url(self):\n- url_re = re.compile(r\"\"\"['\"](https://.*?/(?:[Pp]laylist\\.m3u8|manifest\\.mpd)[^'\"]+)['\"]\"\"\")\n+ def _get_streams(self):\n+ root = self.session.http.get(self.url, schema=validate.Schema(validate.parse_html()))\n+\n+ for needle, errormsg in (\n+ (\n+ \"This service is not available in your Country\",\n+ \"The content is not available in your region\",\n+ ),\n+ (\n+ \"Silahkan login Menggunakan akun MyIndihome dan berlangganan minipack\",\n+ \"The content is not available without a subscription\",\n+ ),\n+ ):\n+ if validate.Schema(validate.xml_xpath(f\"\"\".//script[contains(text(), '\"{needle}\"')]\"\"\")).validate(root):\n+ log.error(errormsg)\n+ return\n \n- return self.session.http.get(self.url, schema=validate.Schema(\n- validate.parse_html(),\n+ url = validate.Schema(\n validate.any(\n validate.all(\n validate.xml_xpath_string(\"\"\"\n .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text()\n \"\"\"),\n str,\n- validate.transform(url_re.search),\n- validate.any(None, validate.all(validate.get(1), validate.url())),\n+ validate.transform(\n+ re.compile(r\"\"\"(?P<q>['\"])(?P<url>https://.*?/(?:[Pp]laylist\\.m3u8|manifest\\.mpd).+?)(?P=q)\"\"\").search\n+ ),\n+ validate.any(None, validate.all(validate.get(\"url\"), validate.url())),\n ),\n validate.all(\n validate.xml_xpath_string(\".//video[@id='video-player']/source/@src\"),\n validate.any(None, validate.url()),\n ),\n- ),\n- ))\n-\n- def _get_streams(self):\n- url = self.find_url()\n+ )\n+ ).validate(root)\n \n if url and \".m3u8\" in url:\n return HLSStream.parse_variant_playlist(self.session, url)\n", "issue": "plugins.useetv: log if no link has been found\n<!--\r\nThanks for opening a pull request!\r\n\r\nBefore you continue, please make sure that you have read and understood the contribution guidelines, otherwise your changes may be rejected:\r\nhttps://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink\r\n\r\nIf possible, run the tests, perform code linting and build the documentation locally on your system first to avoid unnecessary build failures:\r\nhttps://streamlink.github.io/latest/developing.html#validating-changes\r\n\r\nAlso don't forget to add a meaningful description of your changes, so that the reviewing process is as simple as possible for the maintainers.\r\n\r\nThank you very much!\r\n-->\r\n\r\n**Why this PR ?**\r\n\r\nThis PR has been made to verify if no link has been found. Indeed, USeeTV doesn't provide all his channels worldwide. Some channels are blocked for Indonesian people only, and some others need a subscription to work (see beIN Asia as an example). Some channels like SeaToday would work, but channels like this one : \r\n![image](https://user-images.githubusercontent.com/30985701/170096616-4d22b9aa-9972-418e-8bc6-1c99be1c1e88.png)\r\nwill only show a Geo-restriction message above the player, telling the end-user he has no access to the stream. 
\r\n\r\nThis also reflects inside the player, meaning no link can be scraped.\r\n\n", "before_files": [{"content": "\"\"\"\n$description Live TV channels and video on-demand service from UseeTV, owned by Telkom Indonesia.\n$url useetv.com\n$type live, vod\n\"\"\"\n\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.stream.hls import HLSStream\n\n\n@pluginmatcher(re.compile(r\"https?://(?:www\\.)?useetv\\.com/\"))\nclass UseeTV(Plugin):\n def find_url(self):\n url_re = re.compile(r\"\"\"['\"](https://.*?/(?:[Pp]laylist\\.m3u8|manifest\\.mpd)[^'\"]+)['\"]\"\"\")\n\n return self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.any(\n validate.all(\n validate.xml_xpath_string(\"\"\"\n .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text()\n \"\"\"),\n str,\n validate.transform(url_re.search),\n validate.any(None, validate.all(validate.get(1), validate.url())),\n ),\n validate.all(\n validate.xml_xpath_string(\".//video[@id='video-player']/source/@src\"),\n validate.any(None, validate.url()),\n ),\n ),\n ))\n\n def _get_streams(self):\n url = self.find_url()\n\n if url and \".m3u8\" in url:\n return HLSStream.parse_variant_playlist(self.session, url)\n elif url and \".mpd\" in url:\n return DASHStream.parse_manifest(self.session, url)\n\n\n__plugin__ = UseeTV\n", "path": "src/streamlink/plugins/useetv.py"}], "after_files": [{"content": "\"\"\"\n$description Live TV channels and video on-demand service from UseeTV, owned by Telkom Indonesia.\n$url useetv.com\n$type live, vod\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.stream.hls import HLSStream\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"https?://(?:www\\.)?useetv\\.com/\"))\nclass UseeTV(Plugin):\n def _get_streams(self):\n root = self.session.http.get(self.url, schema=validate.Schema(validate.parse_html()))\n\n for needle, errormsg in (\n (\n \"This service is not available in your Country\",\n \"The content is not available in your region\",\n ),\n (\n \"Silahkan login Menggunakan akun MyIndihome dan berlangganan minipack\",\n \"The content is not available without a subscription\",\n ),\n ):\n if validate.Schema(validate.xml_xpath(f\"\"\".//script[contains(text(), '\"{needle}\"')]\"\"\")).validate(root):\n log.error(errormsg)\n return\n\n url = validate.Schema(\n validate.any(\n validate.all(\n validate.xml_xpath_string(\"\"\"\n .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text()\n \"\"\"),\n str,\n validate.transform(\n re.compile(r\"\"\"(?P<q>['\"])(?P<url>https://.*?/(?:[Pp]laylist\\.m3u8|manifest\\.mpd).+?)(?P=q)\"\"\").search\n ),\n validate.any(None, validate.all(validate.get(\"url\"), validate.url())),\n ),\n validate.all(\n validate.xml_xpath_string(\".//video[@id='video-player']/source/@src\"),\n validate.any(None, validate.url()),\n ),\n )\n ).validate(root)\n\n if url and \".m3u8\" in url:\n return HLSStream.parse_variant_playlist(self.session, url)\n elif url and \".mpd\" in url:\n return DASHStream.parse_manifest(self.session, url)\n\n\n__plugin__ = UseeTV\n", "path": "src/streamlink/plugins/useetv.py"}]}
1,037
618
gh_patches_debug_12345
rasdani/github-patches
git_diff
meltano__meltano-7636
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- bug: When meltano.yml is empty, no error message is printed. Rather, it just mentions to reach out to community ### Meltano Version 2.19.0 ### Python Version 3.9 ### Bug scope CLI (options, error messages, logging, etc.) ### Operating System Windows - WSL(Ubuntu) ### Description when `meltano.yml` is empty, `cli`(`meltano.cli.__init__.py: 105`) raises `EmptyMeltanoFileException` exception whenever we try to run any command such as `meltano add` or `meltano ui`. But, since there's no exception message, it just prints the troubleshooting message and blank lines as follows ``` Need help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to join our friendly Slack community. ``` ### Code _No response_ --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/meltano/core/error.py` Content: ``` 1 """Base Error classes.""" 2 3 from __future__ import annotations 4 5 import typing as t 6 from asyncio.streams import StreamReader 7 from asyncio.subprocess import Process 8 from enum import Enum 9 10 if t.TYPE_CHECKING: 11 from meltano.core.project import Project 12 13 14 class ExitCode(int, Enum): # noqa: D101 15 OK = 0 16 FAIL = 1 17 NO_RETRY = 2 18 19 20 class MeltanoError(Exception): 21 """Base class for all user-facing errors.""" 22 23 def __init__( 24 self, 25 reason: str, 26 instruction: str | None = None, 27 *args: t.Any, 28 **kwargs: t.Any, 29 ) -> None: 30 """Initialize a MeltanoError. 31 32 Args: 33 reason: A short explanation of the error. 34 instruction: A short instruction on how to fix the error. 35 args: Additional arguments to pass to the base exception class. 36 kwargs: Keyword arguments to pass to the base exception class. 37 """ 38 self.reason = reason 39 self.instruction = instruction 40 super().__init__(reason, instruction, *args, **kwargs) 41 42 def __str__(self) -> str: 43 """Return a string representation of the error. 44 45 Returns: 46 A string representation of the error. 47 """ 48 return ( 49 f"{self.reason}. {self.instruction}." 50 if self.instruction 51 else f"{self.reason}." 
52 ) 53 54 55 class Error(Exception): 56 """Base exception for ELT errors.""" 57 58 def exit_code(self): # noqa: D102 59 return ExitCode.FAIL 60 61 62 class ExtractError(Error): 63 """Error in the extraction process, like API errors.""" 64 65 def exit_code(self): # noqa: D102 66 return ExitCode.NO_RETRY 67 68 69 class AsyncSubprocessError(Exception): 70 """Happens when an async subprocess exits with a resultcode != 0.""" 71 72 def __init__( 73 self, 74 message: str, 75 process: Process, 76 stderr: str | None = None, 77 ): # noqa: DAR101 78 """Initialize AsyncSubprocessError.""" 79 self.process = process 80 self._stderr: str | StreamReader | None = stderr or process.stderr 81 super().__init__(message) 82 83 @property 84 async def stderr(self) -> str | None: 85 """Return the output of the process to stderr.""" 86 if not self._stderr: # noqa: DAR201 87 return None 88 elif not isinstance(self._stderr, str): 89 stream = await self._stderr.read() 90 self._stderr = stream.decode("utf-8") 91 92 return self._stderr 93 94 95 class PluginInstallError(Exception): 96 """Exception for when a plugin fails to install.""" 97 98 99 class PluginInstallWarning(Exception): 100 """Exception for when a plugin optional optional step fails to install.""" 101 102 103 class EmptyMeltanoFileException(Exception): 104 """Exception for empty meltano.yml file.""" 105 106 107 class MeltanoConfigurationError(MeltanoError): 108 """Exception for when Meltano is inproperly configured.""" 109 110 111 class ProjectNotFound(Error): 112 """A Project is instantiated outside of a meltano project structure.""" 113 114 def __init__(self, project: Project): 115 """Instantiate the error. 116 117 Args: 118 project: the name of the project which cannot be found 119 """ 120 super().__init__( 121 f"Cannot find `{project.meltanofile}`. Are you in a meltano project?", 122 ) 123 124 125 class ProjectReadonly(Error): 126 """Attempting to update a readonly project.""" 127 128 def __init__(self): 129 """Instantiate the error.""" 130 super().__init__("This Meltano project is deployed as read-only") 131 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/meltano/core/error.py b/src/meltano/core/error.py --- a/src/meltano/core/error.py +++ b/src/meltano/core/error.py @@ -100,9 +100,15 @@ """Exception for when a plugin optional optional step fails to install.""" -class EmptyMeltanoFileException(Exception): +class EmptyMeltanoFileException(MeltanoError): """Exception for empty meltano.yml file.""" + def __init__(self) -> None: + """Instantiate the error.""" + reason = "Your meltano.yml file is empty" + instruction = "Please update your meltano file with a valid configuration" + super().__init__(reason, instruction) + class MeltanoConfigurationError(MeltanoError): """Exception for when Meltano is inproperly configured."""
{"golden_diff": "diff --git a/src/meltano/core/error.py b/src/meltano/core/error.py\n--- a/src/meltano/core/error.py\n+++ b/src/meltano/core/error.py\n@@ -100,9 +100,15 @@\n \"\"\"Exception for when a plugin optional optional step fails to install.\"\"\"\n \n \n-class EmptyMeltanoFileException(Exception):\n+class EmptyMeltanoFileException(MeltanoError):\n \"\"\"Exception for empty meltano.yml file.\"\"\"\n \n+ def __init__(self) -> None:\n+ \"\"\"Instantiate the error.\"\"\"\n+ reason = \"Your meltano.yml file is empty\"\n+ instruction = \"Please update your meltano file with a valid configuration\"\n+ super().__init__(reason, instruction)\n+\n \n class MeltanoConfigurationError(MeltanoError):\n \"\"\"Exception for when Meltano is inproperly configured.\"\"\"\n", "issue": "bug: When meltano.yml is empty, no error message is printed. Rather, it just mentions to reach out to community\n### Meltano Version\r\n\r\n2.19.0\r\n\r\n### Python Version\r\n\r\n3.9\r\n\r\n### Bug scope\r\n\r\nCLI (options, error messages, logging, etc.)\r\n\r\n### Operating System\r\n\r\nWindows - WSL(Ubuntu)\r\n\r\n### Description\r\n\r\nwhen `meltano.yml` is empty, `cli`(`meltano.cli.__init__.py: 105`) raises `EmptyMeltanoFileException` exception whenever we try to run any command such as `meltano add` or `meltano ui`. But, since there's no exception message, it just prints the troubleshooting message and blank lines as follows\r\n\r\n```\r\nNeed help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to\r\njoin our friendly Slack community.\r\n\r\n```\r\n\r\n\r\n### Code\r\n\r\n_No response_\n", "before_files": [{"content": "\"\"\"Base Error classes.\"\"\"\n\nfrom __future__ import annotations\n\nimport typing as t\nfrom asyncio.streams import StreamReader\nfrom asyncio.subprocess import Process\nfrom enum import Enum\n\nif t.TYPE_CHECKING:\n from meltano.core.project import Project\n\n\nclass ExitCode(int, Enum): # noqa: D101\n OK = 0\n FAIL = 1\n NO_RETRY = 2\n\n\nclass MeltanoError(Exception):\n \"\"\"Base class for all user-facing errors.\"\"\"\n\n def __init__(\n self,\n reason: str,\n instruction: str | None = None,\n *args: t.Any,\n **kwargs: t.Any,\n ) -> None:\n \"\"\"Initialize a MeltanoError.\n\n Args:\n reason: A short explanation of the error.\n instruction: A short instruction on how to fix the error.\n args: Additional arguments to pass to the base exception class.\n kwargs: Keyword arguments to pass to the base exception class.\n \"\"\"\n self.reason = reason\n self.instruction = instruction\n super().__init__(reason, instruction, *args, **kwargs)\n\n def __str__(self) -> str:\n \"\"\"Return a string representation of the error.\n\n Returns:\n A string representation of the error.\n \"\"\"\n return (\n f\"{self.reason}. 
{self.instruction}.\"\n if self.instruction\n else f\"{self.reason}.\"\n )\n\n\nclass Error(Exception):\n \"\"\"Base exception for ELT errors.\"\"\"\n\n def exit_code(self): # noqa: D102\n return ExitCode.FAIL\n\n\nclass ExtractError(Error):\n \"\"\"Error in the extraction process, like API errors.\"\"\"\n\n def exit_code(self): # noqa: D102\n return ExitCode.NO_RETRY\n\n\nclass AsyncSubprocessError(Exception):\n \"\"\"Happens when an async subprocess exits with a resultcode != 0.\"\"\"\n\n def __init__(\n self,\n message: str,\n process: Process,\n stderr: str | None = None,\n ): # noqa: DAR101\n \"\"\"Initialize AsyncSubprocessError.\"\"\"\n self.process = process\n self._stderr: str | StreamReader | None = stderr or process.stderr\n super().__init__(message)\n\n @property\n async def stderr(self) -> str | None:\n \"\"\"Return the output of the process to stderr.\"\"\"\n if not self._stderr: # noqa: DAR201\n return None\n elif not isinstance(self._stderr, str):\n stream = await self._stderr.read()\n self._stderr = stream.decode(\"utf-8\")\n\n return self._stderr\n\n\nclass PluginInstallError(Exception):\n \"\"\"Exception for when a plugin fails to install.\"\"\"\n\n\nclass PluginInstallWarning(Exception):\n \"\"\"Exception for when a plugin optional optional step fails to install.\"\"\"\n\n\nclass EmptyMeltanoFileException(Exception):\n \"\"\"Exception for empty meltano.yml file.\"\"\"\n\n\nclass MeltanoConfigurationError(MeltanoError):\n \"\"\"Exception for when Meltano is inproperly configured.\"\"\"\n\n\nclass ProjectNotFound(Error):\n \"\"\"A Project is instantiated outside of a meltano project structure.\"\"\"\n\n def __init__(self, project: Project):\n \"\"\"Instantiate the error.\n\n Args:\n project: the name of the project which cannot be found\n \"\"\"\n super().__init__(\n f\"Cannot find `{project.meltanofile}`. Are you in a meltano project?\",\n )\n\n\nclass ProjectReadonly(Error):\n \"\"\"Attempting to update a readonly project.\"\"\"\n\n def __init__(self):\n \"\"\"Instantiate the error.\"\"\"\n super().__init__(\"This Meltano project is deployed as read-only\")\n", "path": "src/meltano/core/error.py"}], "after_files": [{"content": "\"\"\"Base Error classes.\"\"\"\n\nfrom __future__ import annotations\n\nimport typing as t\nfrom asyncio.streams import StreamReader\nfrom asyncio.subprocess import Process\nfrom enum import Enum\n\nif t.TYPE_CHECKING:\n from meltano.core.project import Project\n\n\nclass ExitCode(int, Enum): # noqa: D101\n OK = 0\n FAIL = 1\n NO_RETRY = 2\n\n\nclass MeltanoError(Exception):\n \"\"\"Base class for all user-facing errors.\"\"\"\n\n def __init__(\n self,\n reason: str,\n instruction: str | None = None,\n *args: t.Any,\n **kwargs: t.Any,\n ) -> None:\n \"\"\"Initialize a MeltanoError.\n\n Args:\n reason: A short explanation of the error.\n instruction: A short instruction on how to fix the error.\n args: Additional arguments to pass to the base exception class.\n kwargs: Keyword arguments to pass to the base exception class.\n \"\"\"\n self.reason = reason\n self.instruction = instruction\n super().__init__(reason, instruction, *args, **kwargs)\n\n def __str__(self) -> str:\n \"\"\"Return a string representation of the error.\n\n Returns:\n A string representation of the error.\n \"\"\"\n return (\n f\"{self.reason}. 
{self.instruction}.\"\n if self.instruction\n else f\"{self.reason}.\"\n )\n\n\nclass Error(Exception):\n \"\"\"Base exception for ELT errors.\"\"\"\n\n def exit_code(self): # noqa: D102\n return ExitCode.FAIL\n\n\nclass ExtractError(Error):\n \"\"\"Error in the extraction process, like API errors.\"\"\"\n\n def exit_code(self): # noqa: D102\n return ExitCode.NO_RETRY\n\n\nclass AsyncSubprocessError(Exception):\n \"\"\"Happens when an async subprocess exits with a resultcode != 0.\"\"\"\n\n def __init__(\n self,\n message: str,\n process: Process,\n stderr: str | None = None,\n ): # noqa: DAR101\n \"\"\"Initialize AsyncSubprocessError.\"\"\"\n self.process = process\n self._stderr: str | StreamReader | None = stderr or process.stderr\n super().__init__(message)\n\n @property\n async def stderr(self) -> str | None:\n \"\"\"Return the output of the process to stderr.\"\"\"\n if not self._stderr: # noqa: DAR201\n return None\n elif not isinstance(self._stderr, str):\n stream = await self._stderr.read()\n self._stderr = stream.decode(\"utf-8\")\n\n return self._stderr\n\n\nclass PluginInstallError(Exception):\n \"\"\"Exception for when a plugin fails to install.\"\"\"\n\n\nclass PluginInstallWarning(Exception):\n \"\"\"Exception for when a plugin optional optional step fails to install.\"\"\"\n\n\nclass EmptyMeltanoFileException(MeltanoError):\n \"\"\"Exception for empty meltano.yml file.\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Instantiate the error.\"\"\"\n reason = \"Your meltano.yml file is empty\"\n instruction = \"Please update your meltano file with a valid configuration\"\n super().__init__(reason, instruction)\n\n\nclass MeltanoConfigurationError(MeltanoError):\n \"\"\"Exception for when Meltano is inproperly configured.\"\"\"\n\n\nclass ProjectNotFound(Error):\n \"\"\"A Project is instantiated outside of a meltano project structure.\"\"\"\n\n def __init__(self, project: Project):\n \"\"\"Instantiate the error.\n\n Args:\n project: the name of the project which cannot be found\n \"\"\"\n super().__init__(\n f\"Cannot find `{project.meltanofile}`. Are you in a meltano project?\",\n )\n\n\nclass ProjectReadonly(Error):\n \"\"\"Attempting to update a readonly project.\"\"\"\n\n def __init__(self):\n \"\"\"Instantiate the error.\"\"\"\n super().__init__(\"This Meltano project is deployed as read-only\")\n", "path": "src/meltano/core/error.py"}]}
1,540
187
gh_patches_debug_19026
rasdani/github-patches
git_diff
Kinto__kinto-135
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Missing CORS header on /v1/buckets/default/collections/tasks/records > 09:19:55,733 Blocage d'une requête multi-origines (Cross-Origin Request) : la politique « Same Origin » ne permet pas de consulter la ressource distante située sur http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436512795672. Raison : l'en-tête CORS « Access-Control-Allow-Origin » est manquant.1 <inconnu> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `kinto/views/buckets.py` Content: ``` 1 from six import text_type 2 from uuid import UUID 3 4 from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed 5 from pyramid.security import NO_PERMISSION_REQUIRED 6 from pyramid.view import view_config 7 8 from cliquet import resource 9 from cliquet.utils import hmac_digest, build_request 10 11 from kinto.views import NameGenerator 12 13 14 def create_bucket(request, bucket_id): 15 """Create a bucket if it doesn't exists.""" 16 bucket_put = (request.method.lower() == 'put' and 17 request.path.endswith('buckets/default')) 18 19 if not bucket_put: 20 subrequest = build_request(request, { 21 'method': 'PUT', 22 'path': '/buckets/%s' % bucket_id, 23 'body': {"data": {}}, 24 'headers': {'If-None-Match': '*'.encode('utf-8')} 25 }) 26 27 try: 28 request.invoke_subrequest(subrequest) 29 except HTTPPreconditionFailed: 30 # The bucket already exists 31 pass 32 33 34 def create_collection(request, bucket_id): 35 subpath = request.matchdict['subpath'] 36 if subpath.startswith('/collections/'): 37 collection_id = subpath.split('/')[2] 38 collection_put = (request.method.lower() == 'put' and 39 request.path.endswith(collection_id)) 40 if not collection_put: 41 subrequest = build_request(request, { 42 'method': 'PUT', 43 'path': '/buckets/%s/collections/%s' % ( 44 bucket_id, collection_id), 45 'body': {"data": {}}, 46 'headers': {'If-None-Match': '*'.encode('utf-8')} 47 }) 48 try: 49 request.invoke_subrequest(subrequest) 50 except HTTPPreconditionFailed: 51 # The collection already exists 52 pass 53 54 55 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED) 56 def default_bucket(request): 57 if request.method.lower() == 'options': 58 path = request.path.replace('default', 'unknown') 59 subrequest = build_request(request, { 60 'method': 'OPTIONS', 61 'path': path 62 }) 63 return request.invoke_subrequest(subrequest) 64 65 if getattr(request, 'prefixed_userid', None) is None: 66 raise HTTPForbidden # Pass through the forbidden_view_config 67 68 settings = request.registry.settings 69 hmac_secret = settings['cliquet.userid_hmac_secret'] 70 # Build the user unguessable bucket_id UUID from its user_id 71 digest = hmac_digest(hmac_secret, request.prefixed_userid) 72 bucket_id = text_type(UUID(digest[:32])) 73 path = request.path.replace('default', bucket_id) 74 querystring = request.url[(request.url.index(request.path) + 75 len(request.path)):] 76 77 # Make sure bucket exists 78 create_bucket(request, bucket_id) 79 80 # Make sure the collection exists 81 create_collection(request, bucket_id) 82 83 subrequest = build_request(request, { 84 'method': request.method, 85 'path': path + querystring, 86 'body': request.body 87 }) 88 89 return request.invoke_subrequest(subrequest) 90 91 92 @resource.register(name='bucket', 93 collection_methods=('GET',), 94 collection_path='/buckets', 95 
record_path='/buckets/{{id}}') 96 class Bucket(resource.ProtectedResource): 97 permissions = ('read', 'write', 'collection:create', 'group:create') 98 99 def __init__(self, *args, **kwargs): 100 super(Bucket, self).__init__(*args, **kwargs) 101 self.collection.id_generator = NameGenerator() 102 103 def get_parent_id(self, request): 104 # Buckets are not isolated by user, unlike Cliquet resources. 105 return '' 106 107 def delete(self): 108 result = super(Bucket, self).delete() 109 110 # Delete groups. 111 storage = self.collection.storage 112 parent_id = '/buckets/%s' % self.record_id 113 storage.delete_all(collection_id='group', parent_id=parent_id) 114 115 # Delete collections. 116 deleted = storage.delete_all(collection_id='collection', 117 parent_id=parent_id) 118 119 # Delete records. 120 id_field = self.collection.id_field 121 for collection in deleted: 122 parent_id = '/buckets/%s/collections/%s' % (self.record_id, 123 collection[id_field]) 124 storage.delete_all(collection_id='record', parent_id=parent_id) 125 126 return result 127 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py --- a/kinto/views/buckets.py +++ b/kinto/views/buckets.py @@ -1,12 +1,13 @@ from six import text_type from uuid import UUID -from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed +from pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed, + HTTPException) from pyramid.security import NO_PERMISSION_REQUIRED from pyramid.view import view_config from cliquet import resource -from cliquet.utils import hmac_digest, build_request +from cliquet.utils import hmac_digest, build_request, reapply_cors from kinto.views import NameGenerator @@ -86,7 +87,11 @@ 'body': request.body }) - return request.invoke_subrequest(subrequest) + try: + response = request.invoke_subrequest(subrequest) + except HTTPException as error: + response = reapply_cors(subrequest, error) + return response @resource.register(name='bucket',
{"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -1,12 +1,13 @@\n from six import text_type\n from uuid import UUID\n \n-from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\n+from pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,\n+ HTTPException)\n from pyramid.security import NO_PERMISSION_REQUIRED\n from pyramid.view import view_config\n \n from cliquet import resource\n-from cliquet.utils import hmac_digest, build_request\n+from cliquet.utils import hmac_digest, build_request, reapply_cors\n \n from kinto.views import NameGenerator\n \n@@ -86,7 +87,11 @@\n 'body': request.body\n })\n \n- return request.invoke_subrequest(subrequest)\n+ try:\n+ response = request.invoke_subrequest(subrequest)\n+ except HTTPException as error:\n+ response = reapply_cors(subrequest, error)\n+ return response\n \n \n @resource.register(name='bucket',\n", "issue": "Missing CORS header on /v1/buckets/default/collections/tasks/records\n> 09:19:55,733 Blocage d'une requ\u00eate multi-origines (Cross-Origin Request)\u00a0: la politique \u00ab\u00a0Same Origin\u00a0\u00bb ne permet pas de consulter la ressource distante situ\u00e9e sur http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436512795672. Raison\u00a0: l'en-t\u00eate CORS \u00ab\u00a0Access-Control-Allow-Origin\u00a0\u00bb est manquant.1 <inconnu>\n\n", "before_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if request.method.lower() == 'options':\n path = request.path.replace('default', 'unknown')\n subrequest = build_request(request, {\n 'method': 'OPTIONS',\n 'path': path\n })\n return request.invoke_subrequest(subrequest)\n\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user 
unguessable bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}], "after_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,\n HTTPException)\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request, reapply_cors\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if request.method.lower() == 'options':\n path = request.path.replace('default', 'unknown')\n subrequest = build_request(request, {\n 'method': 
'OPTIONS',\n 'path': path\n })\n return request.invoke_subrequest(subrequest)\n\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n try:\n response = request.invoke_subrequest(subrequest)\n except HTTPException as error:\n response = reapply_cors(subrequest, error)\n return response\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]}
1590
231
gh_patches_debug_30468
rasdani/github-patches
git_diff
googleapis__python-bigquery-551
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- DB-API depends on pyarrow when decimal query parameters are used It seems we introduced a hard dependency on pyarrow to detect data types of query parameters in the DB-API module. While `pyarrow` is recommended as it's needed to use the BigQuery Storage API, I don't think we want to depend on it for basic functionality. I believe we added this hard dependency here: https://github.com/googleapis/python-bigquery/pull/527/files#diff-2221cd9b6b85f2ed1f943c474e8a8628e6ca28ec9a96c0aa5873cd6a66826bdeR193 We use `pyarrow` to disambiguate between `NUMERIC` and `BIGNUMERIC`. I'm not seeing a good way to check the precision / scale in the [decimal](https://docs.python.org/3/library/decimal.html) type, so in the case that `pyarrow` is not available, we should guess the type is `NUMERIC`. #### Stack trace ``` __________________________________________________________________________________ test_dml ___________________________________________________________________________________ test/test_sqlalchemy_bigquery.py:430: in test_dml engine.execute(table_dml.insert(ONE_ROW_CONTENTS_DML)) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:2235: in execute return connection.execute(statement, *multiparams, **params) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1011: in execute return meth(self, multiparams, params) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/sql/elements.py:298: in _execute_on_connection return connection._execute_clauseelement(self, multiparams, params) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1124: in _execute_clauseelement ret = self._execute_context( ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1316: in _execute_context self._handle_dbapi_exception( ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1514: in _handle_dbapi_exception util.raise_(exc_info[1], with_traceback=exc_info[2]) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/util/compat.py:182: in raise_ raise exception ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1276: in _execute_context self.dialect.do_execute( ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/default.py:608: in do_execute cursor.execute(statement, parameters) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:263: in with_closed_check return method(self, *args, **kwargs) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/cursor.py:165: in execute query_parameters = _helpers.to_query_parameters(parameters) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:168: in to_query_parameters return to_query_parameters_dict(parameters) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:147: in to_query_parameters_dict param = scalar_to_query_parameter(value, name=name) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:51: in scalar_to_query_parameter parameter_type = bigquery_scalar_type(value) ../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:193: in 
bigquery_scalar_type scalar_object = pyarrow.scalar(value) E AttributeError: 'NoneType' object has no attribute 'scalar' ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `google/cloud/bigquery/dbapi/_helpers.py` Content: ``` 1 # Copyright 2017 Google LLC 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 16 from collections import abc as collections_abc 17 import datetime 18 import decimal 19 import functools 20 import numbers 21 22 try: 23 import pyarrow 24 except ImportError: # pragma: NO COVER 25 pyarrow = None 26 27 from google.cloud import bigquery 28 from google.cloud.bigquery import table 29 from google.cloud.bigquery.dbapi import exceptions 30 31 32 def scalar_to_query_parameter(value, name=None): 33 """Convert a scalar value into a query parameter. 34 35 Args: 36 value (Any): 37 A scalar value to convert into a query parameter. 38 39 name (str): 40 (Optional) Name of the query parameter. 41 42 Returns: 43 google.cloud.bigquery.ScalarQueryParameter: 44 A query parameter corresponding with the type and value of the plain 45 Python object. 46 47 Raises: 48 google.cloud.bigquery.dbapi.exceptions.ProgrammingError: 49 if the type cannot be determined. 50 """ 51 parameter_type = bigquery_scalar_type(value) 52 53 if parameter_type is None: 54 raise exceptions.ProgrammingError( 55 "encountered parameter {} with value {} of unexpected type".format( 56 name, value 57 ) 58 ) 59 return bigquery.ScalarQueryParameter(name, parameter_type, value) 60 61 62 def array_to_query_parameter(value, name=None): 63 """Convert an array-like value into a query parameter. 64 65 Args: 66 value (Sequence[Any]): The elements of the array (should not be a 67 string-like Sequence). 68 name (Optional[str]): Name of the query parameter. 69 70 Returns: 71 A query parameter corresponding with the type and value of the plain 72 Python object. 73 74 Raises: 75 google.cloud.bigquery.dbapi.exceptions.ProgrammingError: 76 if the type of array elements cannot be determined. 77 """ 78 if not array_like(value): 79 raise exceptions.ProgrammingError( 80 "The value of parameter {} must be a sequence that is " 81 "not string-like.".format(name) 82 ) 83 84 if not value: 85 raise exceptions.ProgrammingError( 86 "Encountered an empty array-like value of parameter {}, cannot " 87 "determine array elements type.".format(name) 88 ) 89 90 # Assume that all elements are of the same type, and let the backend handle 91 # any type incompatibilities among the array elements 92 array_type = bigquery_scalar_type(value[0]) 93 if array_type is None: 94 raise exceptions.ProgrammingError( 95 "Encountered unexpected first array element of parameter {}, " 96 "cannot determine array elements type.".format(name) 97 ) 98 99 return bigquery.ArrayQueryParameter(name, array_type, value) 100 101 102 def to_query_parameters_list(parameters): 103 """Converts a sequence of parameter values into query parameters. 
104 105 Args: 106 parameters (Sequence[Any]): Sequence of query parameter values. 107 108 Returns: 109 List[google.cloud.bigquery.query._AbstractQueryParameter]: 110 A list of query parameters. 111 """ 112 result = [] 113 114 for value in parameters: 115 if isinstance(value, collections_abc.Mapping): 116 raise NotImplementedError("STRUCT-like parameter values are not supported.") 117 elif array_like(value): 118 param = array_to_query_parameter(value) 119 else: 120 param = scalar_to_query_parameter(value) 121 result.append(param) 122 123 return result 124 125 126 def to_query_parameters_dict(parameters): 127 """Converts a dictionary of parameter values into query parameters. 128 129 Args: 130 parameters (Mapping[str, Any]): Dictionary of query parameter values. 131 132 Returns: 133 List[google.cloud.bigquery.query._AbstractQueryParameter]: 134 A list of named query parameters. 135 """ 136 result = [] 137 138 for name, value in parameters.items(): 139 if isinstance(value, collections_abc.Mapping): 140 raise NotImplementedError( 141 "STRUCT-like parameter values are not supported " 142 "(parameter {}).".format(name) 143 ) 144 elif array_like(value): 145 param = array_to_query_parameter(value, name=name) 146 else: 147 param = scalar_to_query_parameter(value, name=name) 148 result.append(param) 149 150 return result 151 152 153 def to_query_parameters(parameters): 154 """Converts DB-API parameter values into query parameters. 155 156 Args: 157 parameters (Union[Mapping[str, Any], Sequence[Any]]): 158 A dictionary or sequence of query parameter values. 159 160 Returns: 161 List[google.cloud.bigquery.query._AbstractQueryParameter]: 162 A list of query parameters. 163 """ 164 if parameters is None: 165 return [] 166 167 if isinstance(parameters, collections_abc.Mapping): 168 return to_query_parameters_dict(parameters) 169 170 return to_query_parameters_list(parameters) 171 172 173 def bigquery_scalar_type(value): 174 """Return a BigQuery name of the scalar type that matches the given value. 175 176 If the scalar type name could not be determined (e.g. for non-scalar 177 values), ``None`` is returned. 178 179 Args: 180 value (Any) 181 182 Returns: 183 Optional[str]: The BigQuery scalar type name. 184 """ 185 if isinstance(value, bool): 186 return "BOOL" 187 elif isinstance(value, numbers.Integral): 188 return "INT64" 189 elif isinstance(value, numbers.Real): 190 return "FLOAT64" 191 elif isinstance(value, decimal.Decimal): 192 # We check for NUMERIC before BIGNUMERIC in order to support pyarrow < 3.0. 193 scalar_object = pyarrow.scalar(value) 194 if isinstance(scalar_object, pyarrow.Decimal128Scalar): 195 return "NUMERIC" 196 else: 197 return "BIGNUMERIC" 198 elif isinstance(value, str): 199 return "STRING" 200 elif isinstance(value, bytes): 201 return "BYTES" 202 elif isinstance(value, datetime.datetime): 203 return "DATETIME" if value.tzinfo is None else "TIMESTAMP" 204 elif isinstance(value, datetime.date): 205 return "DATE" 206 elif isinstance(value, datetime.time): 207 return "TIME" 208 209 return None 210 211 212 def array_like(value): 213 """Determine if the given value is array-like. 214 215 Examples of array-like values (as interpreted by this function) are 216 sequences such as ``list`` and ``tuple``, but not strings and other 217 iterables such as sets. 218 219 Args: 220 value (Any) 221 222 Returns: 223 bool: ``True`` if the value is considered array-like, ``False`` otherwise. 
224 """ 225 return isinstance(value, collections_abc.Sequence) and not isinstance( 226 value, (str, bytes, bytearray) 227 ) 228 229 230 def to_bq_table_rows(rows_iterable): 231 """Convert table rows to BigQuery table Row instances. 232 233 Args: 234 rows_iterable (Iterable[Mapping]): 235 An iterable of row data items to convert to ``Row`` instances. 236 237 Returns: 238 Iterable[google.cloud.bigquery.table.Row] 239 """ 240 241 def to_table_row(row): 242 # NOTE: We fetch ARROW values, thus we need to convert them to Python 243 # objects with as_py(). 244 values = tuple(value.as_py() for value in row.values()) 245 keys_to_index = {key: i for i, key in enumerate(row.keys())} 246 return table.Row(values, keys_to_index) 247 248 return (to_table_row(row_data) for row_data in rows_iterable) 249 250 251 def raise_on_closed( 252 exc_msg, exc_class=exceptions.ProgrammingError, closed_attr_name="_closed" 253 ): 254 """Make public instance methods raise an error if the instance is closed.""" 255 256 def _raise_on_closed(method): 257 """Make a non-static method raise an error if its containing instance is closed. 258 """ 259 260 def with_closed_check(self, *args, **kwargs): 261 if getattr(self, closed_attr_name): 262 raise exc_class(exc_msg) 263 return method(self, *args, **kwargs) 264 265 functools.update_wrapper(with_closed_check, method) 266 return with_closed_check 267 268 def decorate_public_methods(klass): 269 """Apply ``_raise_on_closed()`` decorator to public instance methods. 270 """ 271 for name in dir(klass): 272 if name.startswith("_"): 273 continue 274 275 member = getattr(klass, name) 276 if not callable(member): 277 continue 278 279 # We need to check for class/static methods directly in the instance 280 # __dict__, not via the retrieved attribute (`member`), as the 281 # latter is already a callable *produced* by one of these descriptors. 282 if isinstance(klass.__dict__[name], (staticmethod, classmethod)): 283 continue 284 285 member = _raise_on_closed(member) 286 setattr(klass, name, member) 287 288 return klass 289 290 return decorate_public_methods 291 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/google/cloud/bigquery/dbapi/_helpers.py b/google/cloud/bigquery/dbapi/_helpers.py --- a/google/cloud/bigquery/dbapi/_helpers.py +++ b/google/cloud/bigquery/dbapi/_helpers.py @@ -19,16 +19,15 @@ import functools import numbers -try: - import pyarrow -except ImportError: # pragma: NO COVER - pyarrow = None - from google.cloud import bigquery from google.cloud.bigquery import table from google.cloud.bigquery.dbapi import exceptions +_NUMERIC_SERVER_MIN = decimal.Decimal("-9.9999999999999999999999999999999999999E+28") +_NUMERIC_SERVER_MAX = decimal.Decimal("9.9999999999999999999999999999999999999E+28") + + def scalar_to_query_parameter(value, name=None): """Convert a scalar value into a query parameter. @@ -189,12 +188,20 @@ elif isinstance(value, numbers.Real): return "FLOAT64" elif isinstance(value, decimal.Decimal): - # We check for NUMERIC before BIGNUMERIC in order to support pyarrow < 3.0. - scalar_object = pyarrow.scalar(value) - if isinstance(scalar_object, pyarrow.Decimal128Scalar): + vtuple = value.as_tuple() + # NUMERIC values have precision of 38 (number of digits) and scale of 9 (number + # of fractional digits), and their max absolute value must be strictly smaller + # than 1.0E+29. + # https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#decimal_types + if ( + len(vtuple.digits) <= 38 # max precision: 38 + and vtuple.exponent >= -9 # max scale: 9 + and _NUMERIC_SERVER_MIN <= value <= _NUMERIC_SERVER_MAX + ): return "NUMERIC" else: return "BIGNUMERIC" + elif isinstance(value, str): return "STRING" elif isinstance(value, bytes):
{"golden_diff": "diff --git a/google/cloud/bigquery/dbapi/_helpers.py b/google/cloud/bigquery/dbapi/_helpers.py\n--- a/google/cloud/bigquery/dbapi/_helpers.py\n+++ b/google/cloud/bigquery/dbapi/_helpers.py\n@@ -19,16 +19,15 @@\n import functools\n import numbers\n \n-try:\n- import pyarrow\n-except ImportError: # pragma: NO COVER\n- pyarrow = None\n-\n from google.cloud import bigquery\n from google.cloud.bigquery import table\n from google.cloud.bigquery.dbapi import exceptions\n \n \n+_NUMERIC_SERVER_MIN = decimal.Decimal(\"-9.9999999999999999999999999999999999999E+28\")\n+_NUMERIC_SERVER_MAX = decimal.Decimal(\"9.9999999999999999999999999999999999999E+28\")\n+\n+\n def scalar_to_query_parameter(value, name=None):\n \"\"\"Convert a scalar value into a query parameter.\n \n@@ -189,12 +188,20 @@\n elif isinstance(value, numbers.Real):\n return \"FLOAT64\"\n elif isinstance(value, decimal.Decimal):\n- # We check for NUMERIC before BIGNUMERIC in order to support pyarrow < 3.0.\n- scalar_object = pyarrow.scalar(value)\n- if isinstance(scalar_object, pyarrow.Decimal128Scalar):\n+ vtuple = value.as_tuple()\n+ # NUMERIC values have precision of 38 (number of digits) and scale of 9 (number\n+ # of fractional digits), and their max absolute value must be strictly smaller\n+ # than 1.0E+29.\n+ # https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#decimal_types\n+ if (\n+ len(vtuple.digits) <= 38 # max precision: 38\n+ and vtuple.exponent >= -9 # max scale: 9\n+ and _NUMERIC_SERVER_MIN <= value <= _NUMERIC_SERVER_MAX\n+ ):\n return \"NUMERIC\"\n else:\n return \"BIGNUMERIC\"\n+\n elif isinstance(value, str):\n return \"STRING\"\n elif isinstance(value, bytes):\n", "issue": "DB-API depends on pyarrow when decimal query parameters are used\nIt seems we introduced a hard dependency on pyarrow to detect data types of query parameters in the DB-API module. While `pyarrow` is recommended as it's needed to use the BigQuery Storage API, I don't think we want to depend on it for basic functionality.\r\n\r\nI believe we added this hard dependency here: https://github.com/googleapis/python-bigquery/pull/527/files#diff-2221cd9b6b85f2ed1f943c474e8a8628e6ca28ec9a96c0aa5873cd6a66826bdeR193\r\n\r\nWe use `pyarrow` to disambiguate between `NUMERIC` and `BIGNUMERIC`. 
I'm not seeing a good way to check the precision / scale in the [decimal](https://docs.python.org/3/library/decimal.html) type, so in the case that `pyarrow` is not available, we should guess the type is `NUMERIC`.\r\n\r\n\r\n#### Stack trace\r\n\r\n```\r\n__________________________________________________________________________________ test_dml ___________________________________________________________________________________\r\ntest/test_sqlalchemy_bigquery.py:430: in test_dml\r\n engine.execute(table_dml.insert(ONE_ROW_CONTENTS_DML))\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:2235: in execute\r\n return connection.execute(statement, *multiparams, **params)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1011: in execute\r\n return meth(self, multiparams, params)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/sql/elements.py:298: in _execute_on_connection\r\n return connection._execute_clauseelement(self, multiparams, params)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1124: in _execute_clauseelement\r\n ret = self._execute_context(\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1316: in _execute_context\r\n self._handle_dbapi_exception(\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1514: in _handle_dbapi_exception\r\n util.raise_(exc_info[1], with_traceback=exc_info[2])\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/util/compat.py:182: in raise_\r\n raise exception\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1276: in _execute_context\r\n self.dialect.do_execute(\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/sqlalchemy/engine/default.py:608: in do_execute\r\n cursor.execute(statement, parameters)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:263: in with_closed_check\r\n return method(self, *args, **kwargs)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/cursor.py:165: in execute\r\n query_parameters = _helpers.to_query_parameters(parameters)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:168: in to_query_parameters\r\n return to_query_parameters_dict(parameters)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:147: in to_query_parameters_dict\r\n param = scalar_to_query_parameter(value, name=name)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:51: in scalar_to_query_parameter\r\n parameter_type = bigquery_scalar_type(value)\r\n../../miniconda3/envs/pybigquery/lib/python3.9/site-packages/google/cloud/bigquery/dbapi/_helpers.py:193: in bigquery_scalar_type\r\n scalar_object = pyarrow.scalar(value)\r\nE AttributeError: 'NoneType' object has no attribute 'scalar'\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright 2017 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed 
on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom collections import abc as collections_abc\nimport datetime\nimport decimal\nimport functools\nimport numbers\n\ntry:\n import pyarrow\nexcept ImportError: # pragma: NO COVER\n pyarrow = None\n\nfrom google.cloud import bigquery\nfrom google.cloud.bigquery import table\nfrom google.cloud.bigquery.dbapi import exceptions\n\n\ndef scalar_to_query_parameter(value, name=None):\n \"\"\"Convert a scalar value into a query parameter.\n\n Args:\n value (Any):\n A scalar value to convert into a query parameter.\n\n name (str):\n (Optional) Name of the query parameter.\n\n Returns:\n google.cloud.bigquery.ScalarQueryParameter:\n A query parameter corresponding with the type and value of the plain\n Python object.\n\n Raises:\n google.cloud.bigquery.dbapi.exceptions.ProgrammingError:\n if the type cannot be determined.\n \"\"\"\n parameter_type = bigquery_scalar_type(value)\n\n if parameter_type is None:\n raise exceptions.ProgrammingError(\n \"encountered parameter {} with value {} of unexpected type\".format(\n name, value\n )\n )\n return bigquery.ScalarQueryParameter(name, parameter_type, value)\n\n\ndef array_to_query_parameter(value, name=None):\n \"\"\"Convert an array-like value into a query parameter.\n\n Args:\n value (Sequence[Any]): The elements of the array (should not be a\n string-like Sequence).\n name (Optional[str]): Name of the query parameter.\n\n Returns:\n A query parameter corresponding with the type and value of the plain\n Python object.\n\n Raises:\n google.cloud.bigquery.dbapi.exceptions.ProgrammingError:\n if the type of array elements cannot be determined.\n \"\"\"\n if not array_like(value):\n raise exceptions.ProgrammingError(\n \"The value of parameter {} must be a sequence that is \"\n \"not string-like.\".format(name)\n )\n\n if not value:\n raise exceptions.ProgrammingError(\n \"Encountered an empty array-like value of parameter {}, cannot \"\n \"determine array elements type.\".format(name)\n )\n\n # Assume that all elements are of the same type, and let the backend handle\n # any type incompatibilities among the array elements\n array_type = bigquery_scalar_type(value[0])\n if array_type is None:\n raise exceptions.ProgrammingError(\n \"Encountered unexpected first array element of parameter {}, \"\n \"cannot determine array elements type.\".format(name)\n )\n\n return bigquery.ArrayQueryParameter(name, array_type, value)\n\n\ndef to_query_parameters_list(parameters):\n \"\"\"Converts a sequence of parameter values into query parameters.\n\n Args:\n parameters (Sequence[Any]): Sequence of query parameter values.\n\n Returns:\n List[google.cloud.bigquery.query._AbstractQueryParameter]:\n A list of query parameters.\n \"\"\"\n result = []\n\n for value in parameters:\n if isinstance(value, collections_abc.Mapping):\n raise NotImplementedError(\"STRUCT-like parameter values are not supported.\")\n elif array_like(value):\n param = array_to_query_parameter(value)\n else:\n param = scalar_to_query_parameter(value)\n result.append(param)\n\n return result\n\n\ndef to_query_parameters_dict(parameters):\n \"\"\"Converts a dictionary of parameter values into query parameters.\n\n Args:\n parameters (Mapping[str, Any]): Dictionary of query parameter values.\n\n Returns:\n List[google.cloud.bigquery.query._AbstractQueryParameter]:\n A list of named query 
parameters.\n \"\"\"\n result = []\n\n for name, value in parameters.items():\n if isinstance(value, collections_abc.Mapping):\n raise NotImplementedError(\n \"STRUCT-like parameter values are not supported \"\n \"(parameter {}).\".format(name)\n )\n elif array_like(value):\n param = array_to_query_parameter(value, name=name)\n else:\n param = scalar_to_query_parameter(value, name=name)\n result.append(param)\n\n return result\n\n\ndef to_query_parameters(parameters):\n \"\"\"Converts DB-API parameter values into query parameters.\n\n Args:\n parameters (Union[Mapping[str, Any], Sequence[Any]]):\n A dictionary or sequence of query parameter values.\n\n Returns:\n List[google.cloud.bigquery.query._AbstractQueryParameter]:\n A list of query parameters.\n \"\"\"\n if parameters is None:\n return []\n\n if isinstance(parameters, collections_abc.Mapping):\n return to_query_parameters_dict(parameters)\n\n return to_query_parameters_list(parameters)\n\n\ndef bigquery_scalar_type(value):\n \"\"\"Return a BigQuery name of the scalar type that matches the given value.\n\n If the scalar type name could not be determined (e.g. for non-scalar\n values), ``None`` is returned.\n\n Args:\n value (Any)\n\n Returns:\n Optional[str]: The BigQuery scalar type name.\n \"\"\"\n if isinstance(value, bool):\n return \"BOOL\"\n elif isinstance(value, numbers.Integral):\n return \"INT64\"\n elif isinstance(value, numbers.Real):\n return \"FLOAT64\"\n elif isinstance(value, decimal.Decimal):\n # We check for NUMERIC before BIGNUMERIC in order to support pyarrow < 3.0.\n scalar_object = pyarrow.scalar(value)\n if isinstance(scalar_object, pyarrow.Decimal128Scalar):\n return \"NUMERIC\"\n else:\n return \"BIGNUMERIC\"\n elif isinstance(value, str):\n return \"STRING\"\n elif isinstance(value, bytes):\n return \"BYTES\"\n elif isinstance(value, datetime.datetime):\n return \"DATETIME\" if value.tzinfo is None else \"TIMESTAMP\"\n elif isinstance(value, datetime.date):\n return \"DATE\"\n elif isinstance(value, datetime.time):\n return \"TIME\"\n\n return None\n\n\ndef array_like(value):\n \"\"\"Determine if the given value is array-like.\n\n Examples of array-like values (as interpreted by this function) are\n sequences such as ``list`` and ``tuple``, but not strings and other\n iterables such as sets.\n\n Args:\n value (Any)\n\n Returns:\n bool: ``True`` if the value is considered array-like, ``False`` otherwise.\n \"\"\"\n return isinstance(value, collections_abc.Sequence) and not isinstance(\n value, (str, bytes, bytearray)\n )\n\n\ndef to_bq_table_rows(rows_iterable):\n \"\"\"Convert table rows to BigQuery table Row instances.\n\n Args:\n rows_iterable (Iterable[Mapping]):\n An iterable of row data items to convert to ``Row`` instances.\n\n Returns:\n Iterable[google.cloud.bigquery.table.Row]\n \"\"\"\n\n def to_table_row(row):\n # NOTE: We fetch ARROW values, thus we need to convert them to Python\n # objects with as_py().\n values = tuple(value.as_py() for value in row.values())\n keys_to_index = {key: i for i, key in enumerate(row.keys())}\n return table.Row(values, keys_to_index)\n\n return (to_table_row(row_data) for row_data in rows_iterable)\n\n\ndef raise_on_closed(\n exc_msg, exc_class=exceptions.ProgrammingError, closed_attr_name=\"_closed\"\n):\n \"\"\"Make public instance methods raise an error if the instance is closed.\"\"\"\n\n def _raise_on_closed(method):\n \"\"\"Make a non-static method raise an error if its containing instance is closed.\n \"\"\"\n\n def with_closed_check(self, *args, 
**kwargs):\n if getattr(self, closed_attr_name):\n raise exc_class(exc_msg)\n return method(self, *args, **kwargs)\n\n functools.update_wrapper(with_closed_check, method)\n return with_closed_check\n\n def decorate_public_methods(klass):\n \"\"\"Apply ``_raise_on_closed()`` decorator to public instance methods.\n \"\"\"\n for name in dir(klass):\n if name.startswith(\"_\"):\n continue\n\n member = getattr(klass, name)\n if not callable(member):\n continue\n\n # We need to check for class/static methods directly in the instance\n # __dict__, not via the retrieved attribute (`member`), as the\n # latter is already a callable *produced* by one of these descriptors.\n if isinstance(klass.__dict__[name], (staticmethod, classmethod)):\n continue\n\n member = _raise_on_closed(member)\n setattr(klass, name, member)\n\n return klass\n\n return decorate_public_methods\n", "path": "google/cloud/bigquery/dbapi/_helpers.py"}], "after_files": [{"content": "# Copyright 2017 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom collections import abc as collections_abc\nimport datetime\nimport decimal\nimport functools\nimport numbers\n\nfrom google.cloud import bigquery\nfrom google.cloud.bigquery import table\nfrom google.cloud.bigquery.dbapi import exceptions\n\n\n_NUMERIC_SERVER_MIN = decimal.Decimal(\"-9.9999999999999999999999999999999999999E+28\")\n_NUMERIC_SERVER_MAX = decimal.Decimal(\"9.9999999999999999999999999999999999999E+28\")\n\n\ndef scalar_to_query_parameter(value, name=None):\n \"\"\"Convert a scalar value into a query parameter.\n\n Args:\n value (Any):\n A scalar value to convert into a query parameter.\n\n name (str):\n (Optional) Name of the query parameter.\n\n Returns:\n google.cloud.bigquery.ScalarQueryParameter:\n A query parameter corresponding with the type and value of the plain\n Python object.\n\n Raises:\n google.cloud.bigquery.dbapi.exceptions.ProgrammingError:\n if the type cannot be determined.\n \"\"\"\n parameter_type = bigquery_scalar_type(value)\n\n if parameter_type is None:\n raise exceptions.ProgrammingError(\n \"encountered parameter {} with value {} of unexpected type\".format(\n name, value\n )\n )\n return bigquery.ScalarQueryParameter(name, parameter_type, value)\n\n\ndef array_to_query_parameter(value, name=None):\n \"\"\"Convert an array-like value into a query parameter.\n\n Args:\n value (Sequence[Any]): The elements of the array (should not be a\n string-like Sequence).\n name (Optional[str]): Name of the query parameter.\n\n Returns:\n A query parameter corresponding with the type and value of the plain\n Python object.\n\n Raises:\n google.cloud.bigquery.dbapi.exceptions.ProgrammingError:\n if the type of array elements cannot be determined.\n \"\"\"\n if not array_like(value):\n raise exceptions.ProgrammingError(\n \"The value of parameter {} must be a sequence that is \"\n \"not string-like.\".format(name)\n )\n\n if not value:\n raise exceptions.ProgrammingError(\n \"Encountered an empty array-like value of parameter {}, cannot \"\n 
\"determine array elements type.\".format(name)\n )\n\n # Assume that all elements are of the same type, and let the backend handle\n # any type incompatibilities among the array elements\n array_type = bigquery_scalar_type(value[0])\n if array_type is None:\n raise exceptions.ProgrammingError(\n \"Encountered unexpected first array element of parameter {}, \"\n \"cannot determine array elements type.\".format(name)\n )\n\n return bigquery.ArrayQueryParameter(name, array_type, value)\n\n\ndef to_query_parameters_list(parameters):\n \"\"\"Converts a sequence of parameter values into query parameters.\n\n Args:\n parameters (Sequence[Any]): Sequence of query parameter values.\n\n Returns:\n List[google.cloud.bigquery.query._AbstractQueryParameter]:\n A list of query parameters.\n \"\"\"\n result = []\n\n for value in parameters:\n if isinstance(value, collections_abc.Mapping):\n raise NotImplementedError(\"STRUCT-like parameter values are not supported.\")\n elif array_like(value):\n param = array_to_query_parameter(value)\n else:\n param = scalar_to_query_parameter(value)\n result.append(param)\n\n return result\n\n\ndef to_query_parameters_dict(parameters):\n \"\"\"Converts a dictionary of parameter values into query parameters.\n\n Args:\n parameters (Mapping[str, Any]): Dictionary of query parameter values.\n\n Returns:\n List[google.cloud.bigquery.query._AbstractQueryParameter]:\n A list of named query parameters.\n \"\"\"\n result = []\n\n for name, value in parameters.items():\n if isinstance(value, collections_abc.Mapping):\n raise NotImplementedError(\n \"STRUCT-like parameter values are not supported \"\n \"(parameter {}).\".format(name)\n )\n elif array_like(value):\n param = array_to_query_parameter(value, name=name)\n else:\n param = scalar_to_query_parameter(value, name=name)\n result.append(param)\n\n return result\n\n\ndef to_query_parameters(parameters):\n \"\"\"Converts DB-API parameter values into query parameters.\n\n Args:\n parameters (Union[Mapping[str, Any], Sequence[Any]]):\n A dictionary or sequence of query parameter values.\n\n Returns:\n List[google.cloud.bigquery.query._AbstractQueryParameter]:\n A list of query parameters.\n \"\"\"\n if parameters is None:\n return []\n\n if isinstance(parameters, collections_abc.Mapping):\n return to_query_parameters_dict(parameters)\n\n return to_query_parameters_list(parameters)\n\n\ndef bigquery_scalar_type(value):\n \"\"\"Return a BigQuery name of the scalar type that matches the given value.\n\n If the scalar type name could not be determined (e.g. 
for non-scalar\n values), ``None`` is returned.\n\n Args:\n value (Any)\n\n Returns:\n Optional[str]: The BigQuery scalar type name.\n \"\"\"\n if isinstance(value, bool):\n return \"BOOL\"\n elif isinstance(value, numbers.Integral):\n return \"INT64\"\n elif isinstance(value, numbers.Real):\n return \"FLOAT64\"\n elif isinstance(value, decimal.Decimal):\n vtuple = value.as_tuple()\n # NUMERIC values have precision of 38 (number of digits) and scale of 9 (number\n # of fractional digits), and their max absolute value must be strictly smaller\n # than 1.0E+29.\n # https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#decimal_types\n if (\n len(vtuple.digits) <= 38 # max precision: 38\n and vtuple.exponent >= -9 # max scale: 9\n and _NUMERIC_SERVER_MIN <= value <= _NUMERIC_SERVER_MAX\n ):\n return \"NUMERIC\"\n else:\n return \"BIGNUMERIC\"\n\n elif isinstance(value, str):\n return \"STRING\"\n elif isinstance(value, bytes):\n return \"BYTES\"\n elif isinstance(value, datetime.datetime):\n return \"DATETIME\" if value.tzinfo is None else \"TIMESTAMP\"\n elif isinstance(value, datetime.date):\n return \"DATE\"\n elif isinstance(value, datetime.time):\n return \"TIME\"\n\n return None\n\n\ndef array_like(value):\n \"\"\"Determine if the given value is array-like.\n\n Examples of array-like values (as interpreted by this function) are\n sequences such as ``list`` and ``tuple``, but not strings and other\n iterables such as sets.\n\n Args:\n value (Any)\n\n Returns:\n bool: ``True`` if the value is considered array-like, ``False`` otherwise.\n \"\"\"\n return isinstance(value, collections_abc.Sequence) and not isinstance(\n value, (str, bytes, bytearray)\n )\n\n\ndef to_bq_table_rows(rows_iterable):\n \"\"\"Convert table rows to BigQuery table Row instances.\n\n Args:\n rows_iterable (Iterable[Mapping]):\n An iterable of row data items to convert to ``Row`` instances.\n\n Returns:\n Iterable[google.cloud.bigquery.table.Row]\n \"\"\"\n\n def to_table_row(row):\n # NOTE: We fetch ARROW values, thus we need to convert them to Python\n # objects with as_py().\n values = tuple(value.as_py() for value in row.values())\n keys_to_index = {key: i for i, key in enumerate(row.keys())}\n return table.Row(values, keys_to_index)\n\n return (to_table_row(row_data) for row_data in rows_iterable)\n\n\ndef raise_on_closed(\n exc_msg, exc_class=exceptions.ProgrammingError, closed_attr_name=\"_closed\"\n):\n \"\"\"Make public instance methods raise an error if the instance is closed.\"\"\"\n\n def _raise_on_closed(method):\n \"\"\"Make a non-static method raise an error if its containing instance is closed.\n \"\"\"\n\n def with_closed_check(self, *args, **kwargs):\n if getattr(self, closed_attr_name):\n raise exc_class(exc_msg)\n return method(self, *args, **kwargs)\n\n functools.update_wrapper(with_closed_check, method)\n return with_closed_check\n\n def decorate_public_methods(klass):\n \"\"\"Apply ``_raise_on_closed()`` decorator to public instance methods.\n \"\"\"\n for name in dir(klass):\n if name.startswith(\"_\"):\n continue\n\n member = getattr(klass, name)\n if not callable(member):\n continue\n\n # We need to check for class/static methods directly in the instance\n # __dict__, not via the retrieved attribute (`member`), as the\n # latter is already a callable *produced* by one of these descriptors.\n if isinstance(klass.__dict__[name], (staticmethod, classmethod)):\n continue\n\n member = _raise_on_closed(member)\n setattr(klass, name, member)\n\n return klass\n\n return 
decorate_public_methods\n", "path": "google/cloud/bigquery/dbapi/_helpers.py"}]}
3924
528
gh_patches_debug_25220
rasdani/github-patches
git_diff
pytorch__examples-189
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [super_resolution] def _get_orthogonal_init_weights(weights): fan_out = weights.size(0) fan_in = weights.size(1) * weights.size(2) * weights.size(3) u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False) if u.shape == (fan_out, fan_in): return torch.Tensor(u.reshape(weights.size())) else: return torch.Tensor(v.reshape(weights.size())) Why do the above operation? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `super_resolution/model.py` Content: ``` 1 import torch 2 import torch.nn as nn 3 from numpy.random import normal 4 from numpy.linalg import svd 5 from math import sqrt 6 7 8 def _get_orthogonal_init_weights(weights): 9 fan_out = weights.size(0) 10 fan_in = weights.size(1) * weights.size(2) * weights.size(3) 11 12 u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False) 13 14 if u.shape == (fan_out, fan_in): 15 return torch.Tensor(u.reshape(weights.size())) 16 else: 17 return torch.Tensor(v.reshape(weights.size())) 18 19 20 class Net(nn.Module): 21 def __init__(self, upscale_factor): 22 super(Net, self).__init__() 23 24 self.relu = nn.ReLU() 25 self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2)) 26 self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)) 27 self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1)) 28 self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1)) 29 self.pixel_shuffle = nn.PixelShuffle(upscale_factor) 30 31 self._initialize_weights() 32 33 def forward(self, x): 34 x = self.relu(self.conv1(x)) 35 x = self.relu(self.conv2(x)) 36 x = self.relu(self.conv3(x)) 37 x = self.pixel_shuffle(self.conv4(x)) 38 return x 39 40 def _initialize_weights(self): 41 self.conv1.weight.data.copy_(_get_orthogonal_init_weights(self.conv1.weight) * sqrt(2)) 42 self.conv2.weight.data.copy_(_get_orthogonal_init_weights(self.conv2.weight) * sqrt(2)) 43 self.conv3.weight.data.copy_(_get_orthogonal_init_weights(self.conv3.weight) * sqrt(2)) 44 self.conv4.weight.data.copy_(_get_orthogonal_init_weights(self.conv4.weight)) 45 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/super_resolution/model.py b/super_resolution/model.py --- a/super_resolution/model.py +++ b/super_resolution/model.py @@ -1,22 +1,11 @@ import torch import torch.nn as nn +import torch.nn.init as init from numpy.random import normal from numpy.linalg import svd from math import sqrt -def _get_orthogonal_init_weights(weights): - fan_out = weights.size(0) - fan_in = weights.size(1) * weights.size(2) * weights.size(3) - - u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False) - - if u.shape == (fan_out, fan_in): - return torch.Tensor(u.reshape(weights.size())) - else: - return torch.Tensor(v.reshape(weights.size())) - - class Net(nn.Module): def __init__(self, upscale_factor): super(Net, self).__init__() @@ -38,7 +27,7 @@ return x def _initialize_weights(self): - self.conv1.weight.data.copy_(_get_orthogonal_init_weights(self.conv1.weight) * sqrt(2)) - self.conv2.weight.data.copy_(_get_orthogonal_init_weights(self.conv2.weight) * sqrt(2)) - self.conv3.weight.data.copy_(_get_orthogonal_init_weights(self.conv3.weight) * sqrt(2)) - self.conv4.weight.data.copy_(_get_orthogonal_init_weights(self.conv4.weight)) + init.orthogonal(self.conv1.weight, init.gain('relu')) + init.orthogonal(self.conv2.weight, init.gain('relu')) + init.orthogonal(self.conv3.weight, init.gain('relu')) + init.orthogonal(self.conv4.weight)
{"golden_diff": "diff --git a/super_resolution/model.py b/super_resolution/model.py\n--- a/super_resolution/model.py\n+++ b/super_resolution/model.py\n@@ -1,22 +1,11 @@\n import torch\n import torch.nn as nn\n+import torch.nn.init as init\n from numpy.random import normal\n from numpy.linalg import svd\n from math import sqrt\n \n \n-def _get_orthogonal_init_weights(weights):\n- fan_out = weights.size(0)\n- fan_in = weights.size(1) * weights.size(2) * weights.size(3)\n-\n- u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)\n-\n- if u.shape == (fan_out, fan_in):\n- return torch.Tensor(u.reshape(weights.size()))\n- else:\n- return torch.Tensor(v.reshape(weights.size()))\n-\n-\n class Net(nn.Module):\n def __init__(self, upscale_factor):\n super(Net, self).__init__()\n@@ -38,7 +27,7 @@\n return x\n \n def _initialize_weights(self):\n- self.conv1.weight.data.copy_(_get_orthogonal_init_weights(self.conv1.weight) * sqrt(2))\n- self.conv2.weight.data.copy_(_get_orthogonal_init_weights(self.conv2.weight) * sqrt(2))\n- self.conv3.weight.data.copy_(_get_orthogonal_init_weights(self.conv3.weight) * sqrt(2))\n- self.conv4.weight.data.copy_(_get_orthogonal_init_weights(self.conv4.weight))\n+ init.orthogonal(self.conv1.weight, init.gain('relu'))\n+ init.orthogonal(self.conv2.weight, init.gain('relu'))\n+ init.orthogonal(self.conv3.weight, init.gain('relu'))\n+ init.orthogonal(self.conv4.weight)\n", "issue": "[super_resolution]\ndef _get_orthogonal_init_weights(weights):\r\n fan_out = weights.size(0)\r\n fan_in = weights.size(1) * weights.size(2) * weights.size(3)\r\n u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)\r\n if u.shape == (fan_out, fan_in):\r\n return torch.Tensor(u.reshape(weights.size()))\r\n else:\r\n return torch.Tensor(v.reshape(weights.size()))\r\n\r\nWhy do the above operation\uff1f\n", "before_files": [{"content": "import torch\nimport torch.nn as nn\nfrom numpy.random import normal\nfrom numpy.linalg import svd\nfrom math import sqrt\n\n\ndef _get_orthogonal_init_weights(weights):\n fan_out = weights.size(0)\n fan_in = weights.size(1) * weights.size(2) * weights.size(3)\n\n u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)\n\n if u.shape == (fan_out, fan_in):\n return torch.Tensor(u.reshape(weights.size()))\n else:\n return torch.Tensor(v.reshape(weights.size()))\n\n\nclass Net(nn.Module):\n def __init__(self, upscale_factor):\n super(Net, self).__init__()\n\n self.relu = nn.ReLU()\n self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))\n self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))\n self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))\n self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1))\n self.pixel_shuffle = nn.PixelShuffle(upscale_factor)\n\n self._initialize_weights()\n\n def forward(self, x):\n x = self.relu(self.conv1(x))\n x = self.relu(self.conv2(x))\n x = self.relu(self.conv3(x))\n x = self.pixel_shuffle(self.conv4(x))\n return x\n\n def _initialize_weights(self):\n self.conv1.weight.data.copy_(_get_orthogonal_init_weights(self.conv1.weight) * sqrt(2))\n self.conv2.weight.data.copy_(_get_orthogonal_init_weights(self.conv2.weight) * sqrt(2))\n self.conv3.weight.data.copy_(_get_orthogonal_init_weights(self.conv3.weight) * sqrt(2))\n self.conv4.weight.data.copy_(_get_orthogonal_init_weights(self.conv4.weight))\n", "path": "super_resolution/model.py"}], "after_files": [{"content": "import torch\nimport torch.nn as nn\nimport torch.nn.init as init\nfrom numpy.random import 
normal\nfrom numpy.linalg import svd\nfrom math import sqrt\n\n\nclass Net(nn.Module):\n def __init__(self, upscale_factor):\n super(Net, self).__init__()\n\n self.relu = nn.ReLU()\n self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))\n self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))\n self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))\n self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1))\n self.pixel_shuffle = nn.PixelShuffle(upscale_factor)\n\n self._initialize_weights()\n\n def forward(self, x):\n x = self.relu(self.conv1(x))\n x = self.relu(self.conv2(x))\n x = self.relu(self.conv3(x))\n x = self.pixel_shuffle(self.conv4(x))\n return x\n\n def _initialize_weights(self):\n init.orthogonal(self.conv1.weight, init.gain('relu'))\n init.orthogonal(self.conv2.weight, init.gain('relu'))\n init.orthogonal(self.conv3.weight, init.gain('relu'))\n init.orthogonal(self.conv4.weight)\n", "path": "super_resolution/model.py"}]}
939
402
gh_patches_debug_1214
rasdani/github-patches
git_diff
opsdroid__opsdroid-1241
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Exiting opsdroid with ctrl+c fails with exception <!-- Before you post an issue or if you are unsure about something join our matrix channel https://riot.im/app/#/room/#opsdroid-general:matrix.org and ask away! We are more than happy to help you. --> # Description I am trying to build a Slack bot using Opsdroid (master branch). When pressing `ctrl+c` to exit opsdroid, the process does not stop and throws an error. ## Steps to Reproduce 1. Start opsdroid and wait for it to run ``` opsdroid start ``` 2. Press `ctrl+c` to exit the process ## Expected Functionality The opsdroid process should exit on pressing `ctrl+c`. ## Experienced Functionality The opsdroid process fails to exit with an exception. The debug log is as follows: ``` INFO opsdroid.logging: ======================================== INFO opsdroid.logging: Started opsdroid v0.16.0+82.g4c55e97 INFO opsdroid: ======================================== INFO opsdroid: You can customise your opsdroid by modifying your configuration.yaml INFO opsdroid: Read more at: http://opsdroid.readthedocs.io/#configuration INFO opsdroid: Watch the Get Started Videos at: http://bit.ly/2fnC0Fh INFO opsdroid: Install Opsdroid Desktop at: https://github.com/opsdroid/opsdroid-desktop/releases INFO opsdroid: ======================================== WARNING opsdroid.loader: No databases in configuration.This will cause skills which store things in memory to lose data when opsdroid is restarted. INFO opsdroid.connector.slack: Connecting to Slack INFO opsdroid.connector.slack: Connected successfully INFO opsdroid.web: Started web server on http://0.0.0.0:8080 INFO opsdroid.core: Opsdroid is now running, press ctrl+c to exit. ^CINFO opsdroid.core: Received stop signal, exiting. INFO opsdroid.core: Removing skills... INFO opsdroid.core: Removed hello INFO opsdroid.core: Removed seen INFO opsdroid.core: Removed help INFO opsdroid.core: Stopping connector slack... ERROR: Unhandled exception in opsdroid, exiting... Caught exception {'message': 'Task exception was never retrieved', 'exception': TypeError("object NoneType can't be used in 'await' expression",), 'future': <Task finished coro=<OpsDroid.handle_signal() done, defined at /home/daniccan/c8/OpsDroid/c8-alertbot/env/lib/python3.6/site-packages/opsdroid/core.py:147> exception=TypeError("object NoneType can't be used in 'await' expression",)>} WARNING slack.rtm.client: Websocket was closed. ``` ## Versions - **Opsdroid version:** master branch in git - **Python version:** 3.6.8 - **OS/Docker version:** Ubuntu 18.04 LTS ## Configuration File Please include your version of the configuration file below. ```yaml # Your code goes here. welcome-message: true connectors: - name: slack api-token: "<Bot OAuth Token>" skills: - name: hello - name: seen - name: help ``` ## Additional Details Any other details you wish to include such as screenshots, console messages, etc. <!-- Love opsdroid? Please consider supporting our collective: +👉 https://opencollective.com/opsdroid/donate --> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `opsdroid/connector/slack/__init__.py` Content: ``` 1 """A connector for Slack.""" 2 import logging 3 import re 4 import ssl 5 import certifi 6 7 import slack 8 from emoji import demojize 9 10 from opsdroid.connector import Connector, register_event 11 from opsdroid.events import Message, Reaction 12 from opsdroid.connector.slack.events import Blocks 13 14 15 _LOGGER = logging.getLogger(__name__) 16 17 18 class ConnectorSlack(Connector): 19 """A connector for Slack.""" 20 21 def __init__(self, config, opsdroid=None): 22 """Create the connector.""" 23 super().__init__(config, opsdroid=opsdroid) 24 _LOGGER.debug(_("Starting Slack connector")) 25 self.name = "slack" 26 self.default_target = config.get("default-room", "#general") 27 self.icon_emoji = config.get("icon-emoji", ":robot_face:") 28 self.token = config["api-token"] 29 self.timeout = config.get("connect-timeout", 10) 30 self.ssl_context = ssl.create_default_context(cafile=certifi.where()) 31 self.slack = slack.WebClient( 32 token=self.token, run_async=True, ssl=self.ssl_context 33 ) 34 self.slack_rtm = slack.RTMClient( 35 token=self.token, run_async=True, ssl=self.ssl_context 36 ) 37 self.websocket = None 38 self.bot_name = config.get("bot-name", "opsdroid") 39 self.auth_info = None 40 self.user_info = None 41 self.bot_id = None 42 self.known_users = {} 43 self.keepalive = None 44 self.reconnecting = False 45 self.listening = True 46 self._message_id = 0 47 48 # Register callbacks 49 slack.RTMClient.on(event="message", callback=self.process_message) 50 51 async def connect(self): 52 """Connect to the chat service.""" 53 _LOGGER.info(_("Connecting to Slack")) 54 55 try: 56 # The slack library recommends you call `self.slack_rtm.start()`` here but it 57 # seems to mess with the event loop's signal handlers which breaks opsdroid. 58 # Therefore we need to directly call the private `_connect_and_read` method 59 # instead. This method also blocks so we need to dispatch it to the loop as a task. 60 self.opsdroid.eventloop.create_task(self.slack_rtm._connect_and_read()) 61 62 self.auth_info = (await self.slack.api_call("auth.test")).data 63 self.user_info = ( 64 await self.slack.api_call( 65 "users.info", 66 http_verb="GET", 67 params={"user": self.auth_info["user_id"]}, 68 ) 69 ).data 70 self.bot_id = self.user_info["user"]["profile"]["bot_id"] 71 72 _LOGGER.debug(_("Connected as %s"), self.bot_name) 73 _LOGGER.debug(_("Using icon %s"), self.icon_emoji) 74 _LOGGER.debug(_("Default room is %s"), self.default_target) 75 _LOGGER.info(_("Connected successfully")) 76 except slack.errors.SlackApiError as error: 77 _LOGGER.error( 78 _( 79 "Unable to connect to Slack due to %s - " 80 "The Slack Connector will not be available." 
81 ), 82 error, 83 ) 84 except Exception: 85 await self.disconnect() 86 raise 87 88 async def disconnect(self): 89 """Disconnect from Slack.""" 90 await self.slack_rtm.stop() 91 self.listening = False 92 93 async def listen(self): 94 """Listen for and parse new messages.""" 95 96 async def process_message(self, **payload): 97 """Process a raw message and pass it to the parser.""" 98 message = payload["data"] 99 100 # Ignore message edits 101 if "subtype" in message and message["subtype"] == "message_changed": 102 return 103 104 # Ignore own messages 105 if ( 106 "subtype" in message 107 and message["subtype"] == "bot_message" 108 and message["bot_id"] == self.bot_id 109 ): 110 return 111 112 # Lookup username 113 _LOGGER.debug(_("Looking up sender username")) 114 try: 115 user_info = await self.lookup_username(message["user"]) 116 except ValueError: 117 return 118 119 # Replace usernames in the message 120 _LOGGER.debug(_("Replacing userids in message with usernames")) 121 message["text"] = await self.replace_usernames(message["text"]) 122 123 await self.opsdroid.parse( 124 Message( 125 message["text"], 126 user_info["name"], 127 message["channel"], 128 self, 129 raw_event=message, 130 ) 131 ) 132 133 @register_event(Message) 134 async def send_message(self, message): 135 """Respond with a message.""" 136 _LOGGER.debug( 137 _("Responding with: '%s' in room %s"), message.text, message.target 138 ) 139 await self.slack.api_call( 140 "chat.postMessage", 141 data={ 142 "channel": message.target, 143 "text": message.text, 144 "as_user": False, 145 "username": self.bot_name, 146 "icon_emoji": self.icon_emoji, 147 }, 148 ) 149 150 @register_event(Blocks) 151 async def send_blocks(self, blocks): 152 """Respond with structured blocks.""" 153 _LOGGER.debug( 154 _("Responding with interactive blocks in room %s"), blocks.target 155 ) 156 await self.slack.api_call( 157 "chat.postMessage", 158 data={ 159 "channel": blocks.target, 160 "username": self.bot_name, 161 "blocks": blocks.blocks, 162 "icon_emoji": self.icon_emoji, 163 }, 164 ) 165 166 @register_event(Reaction) 167 async def send_reaction(self, reaction): 168 """React to a message.""" 169 emoji = demojize(reaction.emoji).replace(":", "") 170 _LOGGER.debug(_("Reacting with: %s"), emoji) 171 try: 172 await self.slack.api_call( 173 "reactions.add", 174 data={ 175 "name": emoji, 176 "channel": reaction.target, 177 "timestamp": reaction.linked_event.raw_event["ts"], 178 }, 179 ) 180 except slack.errors.SlackApiError as error: 181 if "invalid_name" in str(error): 182 _LOGGER.warning(_("Slack does not support the emoji %s"), emoji) 183 else: 184 raise 185 186 async def lookup_username(self, userid): 187 """Lookup a username and cache it.""" 188 if userid in self.known_users: 189 user_info = self.known_users[userid] 190 else: 191 response = await self.slack.users_info(user=userid) 192 user_info = response.data["user"] 193 if isinstance(user_info, dict): 194 self.known_users[userid] = user_info 195 else: 196 raise ValueError("Returned user is not a dict.") 197 return user_info 198 199 async def replace_usernames(self, message): 200 """Replace User ID with username in message text.""" 201 userids = re.findall(r"\<\@([A-Z0-9]+)(?:\|.+)?\>", message) 202 for userid in userids: 203 user_info = await self.lookup_username(userid) 204 message = message.replace( 205 "<@{userid}>".format(userid=userid), user_info["name"] 206 ) 207 return message 208 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch 
according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/opsdroid/connector/slack/__init__.py b/opsdroid/connector/slack/__init__.py --- a/opsdroid/connector/slack/__init__.py +++ b/opsdroid/connector/slack/__init__.py @@ -87,7 +87,7 @@ async def disconnect(self): """Disconnect from Slack.""" - await self.slack_rtm.stop() + self.slack_rtm.stop() self.listening = False async def listen(self):
{"golden_diff": "diff --git a/opsdroid/connector/slack/__init__.py b/opsdroid/connector/slack/__init__.py\n--- a/opsdroid/connector/slack/__init__.py\n+++ b/opsdroid/connector/slack/__init__.py\n@@ -87,7 +87,7 @@\n \n async def disconnect(self):\n \"\"\"Disconnect from Slack.\"\"\"\n- await self.slack_rtm.stop()\n+ self.slack_rtm.stop()\n self.listening = False\n \n async def listen(self):\n", "issue": "Exiting opsdroid with ctrl+c fails with exception\n<!-- Before you post an issue or if you are unsure about something join our matrix channel https://riot.im/app/#/room/#opsdroid-general:matrix.org and ask away! We are more than happy to help you. -->\r\n# Description\r\nI am trying to build a Slack bot using Opsdroid (master branch). When pressing `ctrl+c` to exit opsdroid, the process does not stop and throws an error.\r\n\r\n\r\n## Steps to Reproduce\r\n1. Start opsdroid and wait for it to run\r\n\r\n```\r\nopsdroid start\r\n```\r\n\r\n2. Press `ctrl+c` to exit the process\r\n\r\n\r\n## Expected Functionality\r\nThe opsdroid process should exit on pressing `ctrl+c`.\r\n\r\n\r\n## Experienced Functionality\r\nThe opsdroid process fails to exit with an exception. The debug log is as follows:\r\n\r\n```\r\nINFO opsdroid.logging: ========================================\r\nINFO opsdroid.logging: Started opsdroid v0.16.0+82.g4c55e97\r\nINFO opsdroid: ========================================\r\nINFO opsdroid: You can customise your opsdroid by modifying your configuration.yaml\r\nINFO opsdroid: Read more at: http://opsdroid.readthedocs.io/#configuration\r\nINFO opsdroid: Watch the Get Started Videos at: http://bit.ly/2fnC0Fh\r\nINFO opsdroid: Install Opsdroid Desktop at: \r\nhttps://github.com/opsdroid/opsdroid-desktop/releases\r\nINFO opsdroid: ========================================\r\nWARNING opsdroid.loader: No databases in configuration.This will cause skills which store things in memory to lose data when opsdroid is restarted.\r\nINFO opsdroid.connector.slack: Connecting to Slack\r\nINFO opsdroid.connector.slack: Connected successfully\r\nINFO opsdroid.web: Started web server on http://0.0.0.0:8080\r\nINFO opsdroid.core: Opsdroid is now running, press ctrl+c to exit.\r\n^CINFO opsdroid.core: Received stop signal, exiting.\r\nINFO opsdroid.core: Removing skills...\r\nINFO opsdroid.core: Removed hello\r\nINFO opsdroid.core: Removed seen\r\nINFO opsdroid.core: Removed help\r\nINFO opsdroid.core: Stopping connector slack...\r\nERROR: Unhandled exception in opsdroid, exiting...\r\nCaught exception\r\n{'message': 'Task exception was never retrieved', 'exception': TypeError(\"object NoneType can't be used in 'await' expression\",), 'future': <Task finished coro=<OpsDroid.handle_signal() done, defined at /home/daniccan/c8/OpsDroid/c8-alertbot/env/lib/python3.6/site-packages/opsdroid/core.py:147> exception=TypeError(\"object NoneType can't be used in 'await' expression\",)>}\r\nWARNING slack.rtm.client: Websocket was closed.\r\n```\r\n\r\n## Versions\r\n- **Opsdroid version:** master branch in git\r\n- **Python version:** 3.6.8\r\n- **OS/Docker version:** Ubuntu 18.04 LTS\r\n\r\n## Configuration File\r\nPlease include your version of the configuration file below.\r\n\r\n```yaml\r\n# Your code goes here.\r\nwelcome-message: true\r\n\r\nconnectors:\r\n - name: slack\r\n api-token: \"<Bot OAuth Token>\"\r\n\r\nskills:\r\n - name: hello\r\n - name: seen\r\n - name: help\r\n```\r\n\r\n## Additional Details\r\nAny other details you wish to include such as screenshots, console messages, 
etc.\r\n\r\n\r\n<!-- Love opsdroid? Please consider supporting our collective:\r\n +\ud83d\udc49 https://opencollective.com/opsdroid/donate -->\r\n\n", "before_files": [{"content": "\"\"\"A connector for Slack.\"\"\"\nimport logging\nimport re\nimport ssl\nimport certifi\n\nimport slack\nfrom emoji import demojize\n\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message, Reaction\nfrom opsdroid.connector.slack.events import Blocks\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass ConnectorSlack(Connector):\n \"\"\"A connector for Slack.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Create the connector.\"\"\"\n super().__init__(config, opsdroid=opsdroid)\n _LOGGER.debug(_(\"Starting Slack connector\"))\n self.name = \"slack\"\n self.default_target = config.get(\"default-room\", \"#general\")\n self.icon_emoji = config.get(\"icon-emoji\", \":robot_face:\")\n self.token = config[\"api-token\"]\n self.timeout = config.get(\"connect-timeout\", 10)\n self.ssl_context = ssl.create_default_context(cafile=certifi.where())\n self.slack = slack.WebClient(\n token=self.token, run_async=True, ssl=self.ssl_context\n )\n self.slack_rtm = slack.RTMClient(\n token=self.token, run_async=True, ssl=self.ssl_context\n )\n self.websocket = None\n self.bot_name = config.get(\"bot-name\", \"opsdroid\")\n self.auth_info = None\n self.user_info = None\n self.bot_id = None\n self.known_users = {}\n self.keepalive = None\n self.reconnecting = False\n self.listening = True\n self._message_id = 0\n\n # Register callbacks\n slack.RTMClient.on(event=\"message\", callback=self.process_message)\n\n async def connect(self):\n \"\"\"Connect to the chat service.\"\"\"\n _LOGGER.info(_(\"Connecting to Slack\"))\n\n try:\n # The slack library recommends you call `self.slack_rtm.start()`` here but it\n # seems to mess with the event loop's signal handlers which breaks opsdroid.\n # Therefore we need to directly call the private `_connect_and_read` method\n # instead. 
This method also blocks so we need to dispatch it to the loop as a task.\n self.opsdroid.eventloop.create_task(self.slack_rtm._connect_and_read())\n\n self.auth_info = (await self.slack.api_call(\"auth.test\")).data\n self.user_info = (\n await self.slack.api_call(\n \"users.info\",\n http_verb=\"GET\",\n params={\"user\": self.auth_info[\"user_id\"]},\n )\n ).data\n self.bot_id = self.user_info[\"user\"][\"profile\"][\"bot_id\"]\n\n _LOGGER.debug(_(\"Connected as %s\"), self.bot_name)\n _LOGGER.debug(_(\"Using icon %s\"), self.icon_emoji)\n _LOGGER.debug(_(\"Default room is %s\"), self.default_target)\n _LOGGER.info(_(\"Connected successfully\"))\n except slack.errors.SlackApiError as error:\n _LOGGER.error(\n _(\n \"Unable to connect to Slack due to %s - \"\n \"The Slack Connector will not be available.\"\n ),\n error,\n )\n except Exception:\n await self.disconnect()\n raise\n\n async def disconnect(self):\n \"\"\"Disconnect from Slack.\"\"\"\n await self.slack_rtm.stop()\n self.listening = False\n\n async def listen(self):\n \"\"\"Listen for and parse new messages.\"\"\"\n\n async def process_message(self, **payload):\n \"\"\"Process a raw message and pass it to the parser.\"\"\"\n message = payload[\"data\"]\n\n # Ignore message edits\n if \"subtype\" in message and message[\"subtype\"] == \"message_changed\":\n return\n\n # Ignore own messages\n if (\n \"subtype\" in message\n and message[\"subtype\"] == \"bot_message\"\n and message[\"bot_id\"] == self.bot_id\n ):\n return\n\n # Lookup username\n _LOGGER.debug(_(\"Looking up sender username\"))\n try:\n user_info = await self.lookup_username(message[\"user\"])\n except ValueError:\n return\n\n # Replace usernames in the message\n _LOGGER.debug(_(\"Replacing userids in message with usernames\"))\n message[\"text\"] = await self.replace_usernames(message[\"text\"])\n\n await self.opsdroid.parse(\n Message(\n message[\"text\"],\n user_info[\"name\"],\n message[\"channel\"],\n self,\n raw_event=message,\n )\n )\n\n @register_event(Message)\n async def send_message(self, message):\n \"\"\"Respond with a message.\"\"\"\n _LOGGER.debug(\n _(\"Responding with: '%s' in room %s\"), message.text, message.target\n )\n await self.slack.api_call(\n \"chat.postMessage\",\n data={\n \"channel\": message.target,\n \"text\": message.text,\n \"as_user\": False,\n \"username\": self.bot_name,\n \"icon_emoji\": self.icon_emoji,\n },\n )\n\n @register_event(Blocks)\n async def send_blocks(self, blocks):\n \"\"\"Respond with structured blocks.\"\"\"\n _LOGGER.debug(\n _(\"Responding with interactive blocks in room %s\"), blocks.target\n )\n await self.slack.api_call(\n \"chat.postMessage\",\n data={\n \"channel\": blocks.target,\n \"username\": self.bot_name,\n \"blocks\": blocks.blocks,\n \"icon_emoji\": self.icon_emoji,\n },\n )\n\n @register_event(Reaction)\n async def send_reaction(self, reaction):\n \"\"\"React to a message.\"\"\"\n emoji = demojize(reaction.emoji).replace(\":\", \"\")\n _LOGGER.debug(_(\"Reacting with: %s\"), emoji)\n try:\n await self.slack.api_call(\n \"reactions.add\",\n data={\n \"name\": emoji,\n \"channel\": reaction.target,\n \"timestamp\": reaction.linked_event.raw_event[\"ts\"],\n },\n )\n except slack.errors.SlackApiError as error:\n if \"invalid_name\" in str(error):\n _LOGGER.warning(_(\"Slack does not support the emoji %s\"), emoji)\n else:\n raise\n\n async def lookup_username(self, userid):\n \"\"\"Lookup a username and cache it.\"\"\"\n if userid in self.known_users:\n user_info = self.known_users[userid]\n else:\n 
response = await self.slack.users_info(user=userid)\n user_info = response.data[\"user\"]\n if isinstance(user_info, dict):\n self.known_users[userid] = user_info\n else:\n raise ValueError(\"Returned user is not a dict.\")\n return user_info\n\n async def replace_usernames(self, message):\n \"\"\"Replace User ID with username in message text.\"\"\"\n userids = re.findall(r\"\\<\\@([A-Z0-9]+)(?:\\|.+)?\\>\", message)\n for userid in userids:\n user_info = await self.lookup_username(userid)\n message = message.replace(\n \"<@{userid}>\".format(userid=userid), user_info[\"name\"]\n )\n return message\n", "path": "opsdroid/connector/slack/__init__.py"}], "after_files": [{"content": "\"\"\"A connector for Slack.\"\"\"\nimport logging\nimport re\nimport ssl\nimport certifi\n\nimport slack\nfrom emoji import demojize\n\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message, Reaction\nfrom opsdroid.connector.slack.events import Blocks\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass ConnectorSlack(Connector):\n \"\"\"A connector for Slack.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Create the connector.\"\"\"\n super().__init__(config, opsdroid=opsdroid)\n _LOGGER.debug(_(\"Starting Slack connector\"))\n self.name = \"slack\"\n self.default_target = config.get(\"default-room\", \"#general\")\n self.icon_emoji = config.get(\"icon-emoji\", \":robot_face:\")\n self.token = config[\"api-token\"]\n self.timeout = config.get(\"connect-timeout\", 10)\n self.ssl_context = ssl.create_default_context(cafile=certifi.where())\n self.slack = slack.WebClient(\n token=self.token, run_async=True, ssl=self.ssl_context\n )\n self.slack_rtm = slack.RTMClient(\n token=self.token, run_async=True, ssl=self.ssl_context\n )\n self.websocket = None\n self.bot_name = config.get(\"bot-name\", \"opsdroid\")\n self.auth_info = None\n self.user_info = None\n self.bot_id = None\n self.known_users = {}\n self.keepalive = None\n self.reconnecting = False\n self.listening = True\n self._message_id = 0\n\n # Register callbacks\n slack.RTMClient.on(event=\"message\", callback=self.process_message)\n\n async def connect(self):\n \"\"\"Connect to the chat service.\"\"\"\n _LOGGER.info(_(\"Connecting to Slack\"))\n\n try:\n # The slack library recommends you call `self.slack_rtm.start()`` here but it\n # seems to mess with the event loop's signal handlers which breaks opsdroid.\n # Therefore we need to directly call the private `_connect_and_read` method\n # instead. 
This method also blocks so we need to dispatch it to the loop as a task.\n self.opsdroid.eventloop.create_task(self.slack_rtm._connect_and_read())\n\n self.auth_info = (await self.slack.api_call(\"auth.test\")).data\n self.user_info = (\n await self.slack.api_call(\n \"users.info\",\n http_verb=\"GET\",\n params={\"user\": self.auth_info[\"user_id\"]},\n )\n ).data\n self.bot_id = self.user_info[\"user\"][\"profile\"][\"bot_id\"]\n\n _LOGGER.debug(_(\"Connected as %s\"), self.bot_name)\n _LOGGER.debug(_(\"Using icon %s\"), self.icon_emoji)\n _LOGGER.debug(_(\"Default room is %s\"), self.default_target)\n _LOGGER.info(_(\"Connected successfully\"))\n except slack.errors.SlackApiError as error:\n _LOGGER.error(\n _(\n \"Unable to connect to Slack due to %s - \"\n \"The Slack Connector will not be available.\"\n ),\n error,\n )\n except Exception:\n await self.disconnect()\n raise\n\n async def disconnect(self):\n \"\"\"Disconnect from Slack.\"\"\"\n self.slack_rtm.stop()\n self.listening = False\n\n async def listen(self):\n \"\"\"Listen for and parse new messages.\"\"\"\n\n async def process_message(self, **payload):\n \"\"\"Process a raw message and pass it to the parser.\"\"\"\n message = payload[\"data\"]\n\n # Ignore message edits\n if \"subtype\" in message and message[\"subtype\"] == \"message_changed\":\n return\n\n # Ignore own messages\n if (\n \"subtype\" in message\n and message[\"subtype\"] == \"bot_message\"\n and message[\"bot_id\"] == self.bot_id\n ):\n return\n\n # Lookup username\n _LOGGER.debug(_(\"Looking up sender username\"))\n try:\n user_info = await self.lookup_username(message[\"user\"])\n except ValueError:\n return\n\n # Replace usernames in the message\n _LOGGER.debug(_(\"Replacing userids in message with usernames\"))\n message[\"text\"] = await self.replace_usernames(message[\"text\"])\n\n await self.opsdroid.parse(\n Message(\n message[\"text\"],\n user_info[\"name\"],\n message[\"channel\"],\n self,\n raw_event=message,\n )\n )\n\n @register_event(Message)\n async def send_message(self, message):\n \"\"\"Respond with a message.\"\"\"\n _LOGGER.debug(\n _(\"Responding with: '%s' in room %s\"), message.text, message.target\n )\n await self.slack.api_call(\n \"chat.postMessage\",\n data={\n \"channel\": message.target,\n \"text\": message.text,\n \"as_user\": False,\n \"username\": self.bot_name,\n \"icon_emoji\": self.icon_emoji,\n },\n )\n\n @register_event(Blocks)\n async def send_blocks(self, blocks):\n \"\"\"Respond with structured blocks.\"\"\"\n _LOGGER.debug(\n _(\"Responding with interactive blocks in room %s\"), blocks.target\n )\n await self.slack.api_call(\n \"chat.postMessage\",\n data={\n \"channel\": blocks.target,\n \"username\": self.bot_name,\n \"blocks\": blocks.blocks,\n \"icon_emoji\": self.icon_emoji,\n },\n )\n\n @register_event(Reaction)\n async def send_reaction(self, reaction):\n \"\"\"React to a message.\"\"\"\n emoji = demojize(reaction.emoji).replace(\":\", \"\")\n _LOGGER.debug(_(\"Reacting with: %s\"), emoji)\n try:\n await self.slack.api_call(\n \"reactions.add\",\n data={\n \"name\": emoji,\n \"channel\": reaction.target,\n \"timestamp\": reaction.linked_event.raw_event[\"ts\"],\n },\n )\n except slack.errors.SlackApiError as error:\n if \"invalid_name\" in str(error):\n _LOGGER.warning(_(\"Slack does not support the emoji %s\"), emoji)\n else:\n raise\n\n async def lookup_username(self, userid):\n \"\"\"Lookup a username and cache it.\"\"\"\n if userid in self.known_users:\n user_info = self.known_users[userid]\n else:\n 
response = await self.slack.users_info(user=userid)\n user_info = response.data[\"user\"]\n if isinstance(user_info, dict):\n self.known_users[userid] = user_info\n else:\n raise ValueError(\"Returned user is not a dict.\")\n return user_info\n\n async def replace_usernames(self, message):\n \"\"\"Replace User ID with username in message text.\"\"\"\n userids = re.findall(r\"\\<\\@([A-Z0-9]+)(?:\\|.+)?\\>\", message)\n for userid in userids:\n user_info = await self.lookup_username(userid)\n message = message.replace(\n \"<@{userid}>\".format(userid=userid), user_info[\"name\"]\n )\n return message\n", "path": "opsdroid/connector/slack/__init__.py"}]}
3069
117
gh_patches_debug_10562
rasdani/github-patches
git_diff
plotly__plotly.py-2132
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- plotly.express import raises ModuleNotFound in environment without pandas. Importing plotly.express when pandas is not available raises `ModuleNotFoundError: No module named 'pandas'`, instead of the intended `ImportError: Plotly express requires pandas to be installed.` This happens on `from ._imshow import imshow`. Perhaps this import should be moved below the code that will output a more helpful message? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `packages/python/plotly/plotly/express/__init__.py` Content: ``` 1 """ 2 `plotly.express` is a terse, consistent, high-level wrapper around `plotly.graph_objects` 3 for rapid data exploration and figure generation. Learn more at https://plotly.express/ 4 """ 5 from __future__ import absolute_import 6 from plotly import optional_imports 7 from ._imshow import imshow 8 9 pd = optional_imports.get_module("pandas") 10 if pd is None: 11 raise ImportError( 12 """\ 13 Plotly express requires pandas to be installed.""" 14 ) 15 16 from ._chart_types import ( # noqa: F401 17 scatter, 18 scatter_3d, 19 scatter_polar, 20 scatter_ternary, 21 scatter_mapbox, 22 scatter_geo, 23 line, 24 line_3d, 25 line_polar, 26 line_ternary, 27 line_mapbox, 28 line_geo, 29 area, 30 bar, 31 bar_polar, 32 violin, 33 box, 34 strip, 35 histogram, 36 scatter_matrix, 37 parallel_coordinates, 38 parallel_categories, 39 choropleth, 40 density_contour, 41 density_heatmap, 42 pie, 43 sunburst, 44 treemap, 45 funnel, 46 funnel_area, 47 choropleth_mapbox, 48 density_mapbox, 49 ) 50 51 52 from ._core import ( # noqa: F401 53 set_mapbox_access_token, 54 defaults, 55 get_trendline_results, 56 ) 57 58 from . import data, colors # noqa: F401 59 60 __all__ = [ 61 "scatter", 62 "scatter_3d", 63 "scatter_polar", 64 "scatter_ternary", 65 "scatter_mapbox", 66 "scatter_geo", 67 "scatter_matrix", 68 "density_contour", 69 "density_heatmap", 70 "density_mapbox", 71 "line", 72 "line_3d", 73 "line_polar", 74 "line_ternary", 75 "line_mapbox", 76 "line_geo", 77 "parallel_coordinates", 78 "parallel_categories", 79 "area", 80 "bar", 81 "bar_polar", 82 "violin", 83 "box", 84 "strip", 85 "histogram", 86 "choropleth", 87 "choropleth_mapbox", 88 "pie", 89 "sunburst", 90 "treemap", 91 "funnel", 92 "funnel_area", 93 "imshow", 94 "data", 95 "colors", 96 "set_mapbox_access_token", 97 "get_trendline_results", 98 ] 99 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/packages/python/plotly/plotly/express/__init__.py b/packages/python/plotly/plotly/express/__init__.py --- a/packages/python/plotly/plotly/express/__init__.py +++ b/packages/python/plotly/plotly/express/__init__.py @@ -4,7 +4,6 @@ """ from __future__ import absolute_import from plotly import optional_imports -from ._imshow import imshow pd = optional_imports.get_module("pandas") if pd is None: @@ -13,6 +12,7 @@ Plotly express requires pandas to be installed.""" ) +from ._imshow import imshow from ._chart_types import ( # noqa: F401 scatter, scatter_3d,
{"golden_diff": "diff --git a/packages/python/plotly/plotly/express/__init__.py b/packages/python/plotly/plotly/express/__init__.py\n--- a/packages/python/plotly/plotly/express/__init__.py\n+++ b/packages/python/plotly/plotly/express/__init__.py\n@@ -4,7 +4,6 @@\n \"\"\"\n from __future__ import absolute_import\n from plotly import optional_imports\n-from ._imshow import imshow\n \n pd = optional_imports.get_module(\"pandas\")\n if pd is None:\n@@ -13,6 +12,7 @@\n Plotly express requires pandas to be installed.\"\"\"\n )\n \n+from ._imshow import imshow\n from ._chart_types import ( # noqa: F401\n scatter,\n scatter_3d,\n", "issue": "plotly.express import raises ModuleNotFound in environment without pandas.\nImporting plotly.express when pandas is not available raises `ModuleNotFoundError: No module named 'pandas'`, instead of the intended `ImportError: Plotly express requires pandas to be installed.`\r\nThis happens on `from ._imshow import imshow`.\r\nPerhaps this import should be moved below the code that will output a more helpful message?\n", "before_files": [{"content": "\"\"\"\n`plotly.express` is a terse, consistent, high-level wrapper around `plotly.graph_objects`\nfor rapid data exploration and figure generation. Learn more at https://plotly.express/\n\"\"\"\nfrom __future__ import absolute_import\nfrom plotly import optional_imports\nfrom ._imshow import imshow\n\npd = optional_imports.get_module(\"pandas\")\nif pd is None:\n raise ImportError(\n \"\"\"\\\nPlotly express requires pandas to be installed.\"\"\"\n )\n\nfrom ._chart_types import ( # noqa: F401\n scatter,\n scatter_3d,\n scatter_polar,\n scatter_ternary,\n scatter_mapbox,\n scatter_geo,\n line,\n line_3d,\n line_polar,\n line_ternary,\n line_mapbox,\n line_geo,\n area,\n bar,\n bar_polar,\n violin,\n box,\n strip,\n histogram,\n scatter_matrix,\n parallel_coordinates,\n parallel_categories,\n choropleth,\n density_contour,\n density_heatmap,\n pie,\n sunburst,\n treemap,\n funnel,\n funnel_area,\n choropleth_mapbox,\n density_mapbox,\n)\n\n\nfrom ._core import ( # noqa: F401\n set_mapbox_access_token,\n defaults,\n get_trendline_results,\n)\n\nfrom . import data, colors # noqa: F401\n\n__all__ = [\n \"scatter\",\n \"scatter_3d\",\n \"scatter_polar\",\n \"scatter_ternary\",\n \"scatter_mapbox\",\n \"scatter_geo\",\n \"scatter_matrix\",\n \"density_contour\",\n \"density_heatmap\",\n \"density_mapbox\",\n \"line\",\n \"line_3d\",\n \"line_polar\",\n \"line_ternary\",\n \"line_mapbox\",\n \"line_geo\",\n \"parallel_coordinates\",\n \"parallel_categories\",\n \"area\",\n \"bar\",\n \"bar_polar\",\n \"violin\",\n \"box\",\n \"strip\",\n \"histogram\",\n \"choropleth\",\n \"choropleth_mapbox\",\n \"pie\",\n \"sunburst\",\n \"treemap\",\n \"funnel\",\n \"funnel_area\",\n \"imshow\",\n \"data\",\n \"colors\",\n \"set_mapbox_access_token\",\n \"get_trendline_results\",\n]\n", "path": "packages/python/plotly/plotly/express/__init__.py"}], "after_files": [{"content": "\"\"\"\n`plotly.express` is a terse, consistent, high-level wrapper around `plotly.graph_objects`\nfor rapid data exploration and figure generation. 
Learn more at https://plotly.express/\n\"\"\"\nfrom __future__ import absolute_import\nfrom plotly import optional_imports\n\npd = optional_imports.get_module(\"pandas\")\nif pd is None:\n raise ImportError(\n \"\"\"\\\nPlotly express requires pandas to be installed.\"\"\"\n )\n\nfrom ._imshow import imshow\nfrom ._chart_types import ( # noqa: F401\n scatter,\n scatter_3d,\n scatter_polar,\n scatter_ternary,\n scatter_mapbox,\n scatter_geo,\n line,\n line_3d,\n line_polar,\n line_ternary,\n line_mapbox,\n line_geo,\n area,\n bar,\n bar_polar,\n violin,\n box,\n strip,\n histogram,\n scatter_matrix,\n parallel_coordinates,\n parallel_categories,\n choropleth,\n density_contour,\n density_heatmap,\n pie,\n sunburst,\n treemap,\n funnel,\n funnel_area,\n choropleth_mapbox,\n density_mapbox,\n)\n\n\nfrom ._core import ( # noqa: F401\n set_mapbox_access_token,\n defaults,\n get_trendline_results,\n)\n\nfrom . import data, colors # noqa: F401\n\n__all__ = [\n \"scatter\",\n \"scatter_3d\",\n \"scatter_polar\",\n \"scatter_ternary\",\n \"scatter_mapbox\",\n \"scatter_geo\",\n \"scatter_matrix\",\n \"density_contour\",\n \"density_heatmap\",\n \"density_mapbox\",\n \"line\",\n \"line_3d\",\n \"line_polar\",\n \"line_ternary\",\n \"line_mapbox\",\n \"line_geo\",\n \"parallel_coordinates\",\n \"parallel_categories\",\n \"area\",\n \"bar\",\n \"bar_polar\",\n \"violin\",\n \"box\",\n \"strip\",\n \"histogram\",\n \"choropleth\",\n \"choropleth_mapbox\",\n \"pie\",\n \"sunburst\",\n \"treemap\",\n \"funnel\",\n \"funnel_area\",\n \"imshow\",\n \"data\",\n \"colors\",\n \"set_mapbox_access_token\",\n \"get_trendline_results\",\n]\n", "path": "packages/python/plotly/plotly/express/__init__.py"}]}
1053
172
gh_patches_debug_14247
rasdani/github-patches
git_diff
HypothesisWorks__hypothesis-1025
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Importing hypothesis mutates global warnings state `hypothesis.errors` mutates the global warnings state: https://github.com/HypothesisWorks/hypothesis-python/blob/master/src/hypothesis/errors.py#L182 This causes hypothesis to override any warnings settings that have already been applied. E.g., setting PYTHONWARNINGS="error" will not be respected, because hypothesis will change HypothesisDeprecationWarnings to be instead printed. The filter there should presumably not do anything if the user has already modified any warnings defaults. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/hypothesis/errors.py` Content: ``` 1 # coding=utf-8 2 # 3 # This file is part of Hypothesis, which may be found at 4 # https://github.com/HypothesisWorks/hypothesis-python 5 # 6 # Most of this work is copyright (C) 2013-2017 David R. MacIver 7 # ([email protected]), but it contains contributions by others. See 8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and 9 # consult the git log if you need to determine who owns an individual 10 # contribution. 11 # 12 # This Source Code Form is subject to the terms of the Mozilla Public License, 13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can 14 # obtain one at http://mozilla.org/MPL/2.0/. 15 # 16 # END HEADER 17 18 from __future__ import division, print_function, absolute_import 19 20 import warnings 21 22 23 class HypothesisException(Exception): 24 25 """Generic parent class for exceptions thrown by Hypothesis.""" 26 27 28 class CleanupFailed(HypothesisException): 29 30 """At least one cleanup task failed and no other exception was raised.""" 31 32 33 class UnsatisfiedAssumption(HypothesisException): 34 35 """An internal error raised by assume. 36 37 If you're seeing this error something has gone wrong. 38 39 """ 40 41 42 class BadTemplateDraw(HypothesisException): 43 44 """An internal error raised when something unfortunate happened during 45 template generation and you should restart the draw, preferably with a new 46 parameter. 47 48 This is not an error condition internally, but if you ever see this 49 in your code it's probably a Hypothesis bug 50 51 """ 52 53 54 class NoSuchExample(HypothesisException): 55 56 """The condition we have been asked to satisfy appears to be always false. 57 58 This does not guarantee that no example exists, only that we were 59 unable to find one. 60 61 """ 62 63 def __init__(self, condition_string, extra=''): 64 super(NoSuchExample, self).__init__( 65 'No examples found of condition %s%s' % ( 66 condition_string, extra) 67 ) 68 69 70 class DefinitelyNoSuchExample(NoSuchExample): # pragma: no cover 71 """Hypothesis used to be able to detect exhaustive coverage of a search 72 space and no longer can. 73 74 This exception remains for compatibility reasons for now but can 75 never actually be thrown. 76 77 """ 78 79 80 class NoExamples(HypothesisException): 81 82 """Raised when example() is called on a strategy but we cannot find any 83 examples after enough tries that we really should have been able to if this 84 was ever going to work.""" 85 86 87 class Unsatisfiable(HypothesisException): 88 89 """We ran out of time or examples before we could find enough examples 90 which satisfy the assumptions of this hypothesis. 91 92 This could be because the function is too slow. 
If so, try upping 93 the timeout. It could also be because the function is using assume 94 in a way that is too hard to satisfy. If so, try writing a custom 95 strategy or using a better starting point (e.g if you are requiring 96 a list has unique values you could instead filter out all duplicate 97 values from the list) 98 99 """ 100 101 102 class Flaky(HypothesisException): 103 104 """This function appears to fail non-deterministically: We have seen it 105 fail when passed this example at least once, but a subsequent invocation 106 did not fail. 107 108 Common causes for this problem are: 109 1. The function depends on external state. e.g. it uses an external 110 random number generator. Try to make a version that passes all the 111 relevant state in from Hypothesis. 112 2. The function is suffering from too much recursion and its failure 113 depends sensitively on where it's been called from. 114 3. The function is timing sensitive and can fail or pass depending on 115 how long it takes. Try breaking it up into smaller functions which 116 don't do that and testing those instead. 117 118 """ 119 120 121 class Timeout(Unsatisfiable): 122 123 """We were unable to find enough examples that satisfied the preconditions 124 of this hypothesis in the amount of time allotted to us.""" 125 126 127 class WrongFormat(HypothesisException, ValueError): 128 129 """An exception indicating you have attempted to serialize a value that 130 does not match the type described by this format.""" 131 132 133 class BadData(HypothesisException, ValueError): 134 135 """The data that we got out of the database does not seem to match the data 136 we could have put into the database given this schema.""" 137 138 139 class InvalidArgument(HypothesisException, TypeError): 140 141 """Used to indicate that the arguments to a Hypothesis function were in 142 some manner incorrect.""" 143 144 145 class ResolutionFailed(InvalidArgument): 146 147 """Hypothesis had to resolve a type to a strategy, but this failed. 148 149 Type inference is best-effort, so this only happens when an 150 annotation exists but could not be resolved for a required argument 151 to the target of ``builds()``, or where the user passed ``infer``. 
152 153 """ 154 155 156 class InvalidState(HypothesisException): 157 158 """The system is not in a state where you were allowed to do that.""" 159 160 161 class InvalidDefinition(HypothesisException, TypeError): 162 163 """Used to indicate that a class definition was not well put together and 164 has something wrong with it.""" 165 166 167 class AbnormalExit(HypothesisException): 168 169 """Raised when a test running in a child process exits without returning or 170 raising an exception.""" 171 172 173 class FailedHealthCheck(HypothesisException, Warning): 174 """Raised when a test fails a preliminary healthcheck that occurs before 175 execution.""" 176 177 def __init__(self, message, check): 178 super(FailedHealthCheck, self).__init__(message) 179 self.health_check = check 180 181 182 class HypothesisDeprecationWarning(HypothesisException, DeprecationWarning): 183 pass 184 185 186 warnings.simplefilter('once', HypothesisDeprecationWarning) 187 188 189 class Frozen(HypothesisException): 190 191 """Raised when a mutation method has been called on a ConjectureData object 192 after freeze() has been called.""" 193 194 195 class MultipleFailures(HypothesisException): 196 """Indicates that Hypothesis found more than one distinct bug when testing 197 your code.""" 198 199 200 class DeadlineExceeded(HypothesisException): 201 """Raised when an individual test body has taken too long to run.""" 202 203 def __init__(self, runtime, deadline): 204 super(DeadlineExceeded, self).__init__(( 205 'Test took %.2fms, which exceeds the deadline of ' 206 '%.2fms') % (runtime, deadline)) 207 self.runtime = runtime 208 self.deadline = deadline 209 210 211 class StopTest(BaseException): 212 213 def __init__(self, testcounter): 214 super(StopTest, self).__init__(repr(testcounter)) 215 self.testcounter = testcounter 216 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/hypothesis/errors.py b/src/hypothesis/errors.py --- a/src/hypothesis/errors.py +++ b/src/hypothesis/errors.py @@ -17,8 +17,6 @@ from __future__ import division, print_function, absolute_import -import warnings - class HypothesisException(Exception): @@ -179,13 +177,10 @@ self.health_check = check -class HypothesisDeprecationWarning(HypothesisException, DeprecationWarning): +class HypothesisDeprecationWarning(HypothesisException, FutureWarning): pass -warnings.simplefilter('once', HypothesisDeprecationWarning) - - class Frozen(HypothesisException): """Raised when a mutation method has been called on a ConjectureData object
{"golden_diff": "diff --git a/src/hypothesis/errors.py b/src/hypothesis/errors.py\n--- a/src/hypothesis/errors.py\n+++ b/src/hypothesis/errors.py\n@@ -17,8 +17,6 @@\n \n from __future__ import division, print_function, absolute_import\n \n-import warnings\n-\n \n class HypothesisException(Exception):\n \n@@ -179,13 +177,10 @@\n self.health_check = check\n \n \n-class HypothesisDeprecationWarning(HypothesisException, DeprecationWarning):\n+class HypothesisDeprecationWarning(HypothesisException, FutureWarning):\n pass\n \n \n-warnings.simplefilter('once', HypothesisDeprecationWarning)\n-\n-\n class Frozen(HypothesisException):\n \n \"\"\"Raised when a mutation method has been called on a ConjectureData object\n", "issue": "Importing hypothesis mutates global warnings state\n`hypothesis.errors` mutates the global warnings state:\r\n\r\nhttps://github.com/HypothesisWorks/hypothesis-python/blob/master/src/hypothesis/errors.py#L182\r\n\r\nThis causes hypothesis to override any warnings settings that have already been applied. E.g., setting PYTHONWARNINGS=\"error\" will not be respected, because hypothesis will change HypothesisDeprecationWarnings to be instead printed.\r\n\r\nThe filter there should presumably not do anything if the user has already modified any warnings defaults.\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport warnings\n\n\nclass HypothesisException(Exception):\n\n \"\"\"Generic parent class for exceptions thrown by Hypothesis.\"\"\"\n\n\nclass CleanupFailed(HypothesisException):\n\n \"\"\"At least one cleanup task failed and no other exception was raised.\"\"\"\n\n\nclass UnsatisfiedAssumption(HypothesisException):\n\n \"\"\"An internal error raised by assume.\n\n If you're seeing this error something has gone wrong.\n\n \"\"\"\n\n\nclass BadTemplateDraw(HypothesisException):\n\n \"\"\"An internal error raised when something unfortunate happened during\n template generation and you should restart the draw, preferably with a new\n parameter.\n\n This is not an error condition internally, but if you ever see this\n in your code it's probably a Hypothesis bug\n\n \"\"\"\n\n\nclass NoSuchExample(HypothesisException):\n\n \"\"\"The condition we have been asked to satisfy appears to be always false.\n\n This does not guarantee that no example exists, only that we were\n unable to find one.\n\n \"\"\"\n\n def __init__(self, condition_string, extra=''):\n super(NoSuchExample, self).__init__(\n 'No examples found of condition %s%s' % (\n condition_string, extra)\n )\n\n\nclass DefinitelyNoSuchExample(NoSuchExample): # pragma: no cover\n \"\"\"Hypothesis used to be able to detect exhaustive coverage of a search\n space and no longer can.\n\n This exception remains for compatibility reasons for now but can\n never actually be thrown.\n\n \"\"\"\n\n\nclass NoExamples(HypothesisException):\n\n \"\"\"Raised when example() is called on a strategy but we cannot find any\n examples after enough tries that we really should have been able to if this\n was ever going to work.\"\"\"\n\n\nclass Unsatisfiable(HypothesisException):\n\n \"\"\"We ran out of time or examples before we could find enough examples\n which satisfy the assumptions of this hypothesis.\n\n This could be because the function is too slow. If so, try upping\n the timeout. It could also be because the function is using assume\n in a way that is too hard to satisfy. If so, try writing a custom\n strategy or using a better starting point (e.g if you are requiring\n a list has unique values you could instead filter out all duplicate\n values from the list)\n\n \"\"\"\n\n\nclass Flaky(HypothesisException):\n\n \"\"\"This function appears to fail non-deterministically: We have seen it\n fail when passed this example at least once, but a subsequent invocation\n did not fail.\n\n Common causes for this problem are:\n 1. The function depends on external state. e.g. it uses an external\n random number generator. Try to make a version that passes all the\n relevant state in from Hypothesis.\n 2. The function is suffering from too much recursion and its failure\n depends sensitively on where it's been called from.\n 3. The function is timing sensitive and can fail or pass depending on\n how long it takes. 
Try breaking it up into smaller functions which\n don't do that and testing those instead.\n\n \"\"\"\n\n\nclass Timeout(Unsatisfiable):\n\n \"\"\"We were unable to find enough examples that satisfied the preconditions\n of this hypothesis in the amount of time allotted to us.\"\"\"\n\n\nclass WrongFormat(HypothesisException, ValueError):\n\n \"\"\"An exception indicating you have attempted to serialize a value that\n does not match the type described by this format.\"\"\"\n\n\nclass BadData(HypothesisException, ValueError):\n\n \"\"\"The data that we got out of the database does not seem to match the data\n we could have put into the database given this schema.\"\"\"\n\n\nclass InvalidArgument(HypothesisException, TypeError):\n\n \"\"\"Used to indicate that the arguments to a Hypothesis function were in\n some manner incorrect.\"\"\"\n\n\nclass ResolutionFailed(InvalidArgument):\n\n \"\"\"Hypothesis had to resolve a type to a strategy, but this failed.\n\n Type inference is best-effort, so this only happens when an\n annotation exists but could not be resolved for a required argument\n to the target of ``builds()``, or where the user passed ``infer``.\n\n \"\"\"\n\n\nclass InvalidState(HypothesisException):\n\n \"\"\"The system is not in a state where you were allowed to do that.\"\"\"\n\n\nclass InvalidDefinition(HypothesisException, TypeError):\n\n \"\"\"Used to indicate that a class definition was not well put together and\n has something wrong with it.\"\"\"\n\n\nclass AbnormalExit(HypothesisException):\n\n \"\"\"Raised when a test running in a child process exits without returning or\n raising an exception.\"\"\"\n\n\nclass FailedHealthCheck(HypothesisException, Warning):\n \"\"\"Raised when a test fails a preliminary healthcheck that occurs before\n execution.\"\"\"\n\n def __init__(self, message, check):\n super(FailedHealthCheck, self).__init__(message)\n self.health_check = check\n\n\nclass HypothesisDeprecationWarning(HypothesisException, DeprecationWarning):\n pass\n\n\nwarnings.simplefilter('once', HypothesisDeprecationWarning)\n\n\nclass Frozen(HypothesisException):\n\n \"\"\"Raised when a mutation method has been called on a ConjectureData object\n after freeze() has been called.\"\"\"\n\n\nclass MultipleFailures(HypothesisException):\n \"\"\"Indicates that Hypothesis found more than one distinct bug when testing\n your code.\"\"\"\n\n\nclass DeadlineExceeded(HypothesisException):\n \"\"\"Raised when an individual test body has taken too long to run.\"\"\"\n\n def __init__(self, runtime, deadline):\n super(DeadlineExceeded, self).__init__((\n 'Test took %.2fms, which exceeds the deadline of '\n '%.2fms') % (runtime, deadline))\n self.runtime = runtime\n self.deadline = deadline\n\n\nclass StopTest(BaseException):\n\n def __init__(self, testcounter):\n super(StopTest, self).__init__(repr(testcounter))\n self.testcounter = testcounter\n", "path": "src/hypothesis/errors.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\n\nclass HypothesisException(Exception):\n\n \"\"\"Generic parent class for exceptions thrown by Hypothesis.\"\"\"\n\n\nclass CleanupFailed(HypothesisException):\n\n \"\"\"At least one cleanup task failed and no other exception was raised.\"\"\"\n\n\nclass UnsatisfiedAssumption(HypothesisException):\n\n \"\"\"An internal error raised by assume.\n\n If you're seeing this error something has gone wrong.\n\n \"\"\"\n\n\nclass BadTemplateDraw(HypothesisException):\n\n \"\"\"An internal error raised when something unfortunate happened during\n template generation and you should restart the draw, preferably with a new\n parameter.\n\n This is not an error condition internally, but if you ever see this\n in your code it's probably a Hypothesis bug\n\n \"\"\"\n\n\nclass NoSuchExample(HypothesisException):\n\n \"\"\"The condition we have been asked to satisfy appears to be always false.\n\n This does not guarantee that no example exists, only that we were\n unable to find one.\n\n \"\"\"\n\n def __init__(self, condition_string, extra=''):\n super(NoSuchExample, self).__init__(\n 'No examples found of condition %s%s' % (\n condition_string, extra)\n )\n\n\nclass DefinitelyNoSuchExample(NoSuchExample): # pragma: no cover\n \"\"\"Hypothesis used to be able to detect exhaustive coverage of a search\n space and no longer can.\n\n This exception remains for compatibility reasons for now but can\n never actually be thrown.\n\n \"\"\"\n\n\nclass NoExamples(HypothesisException):\n\n \"\"\"Raised when example() is called on a strategy but we cannot find any\n examples after enough tries that we really should have been able to if this\n was ever going to work.\"\"\"\n\n\nclass Unsatisfiable(HypothesisException):\n\n \"\"\"We ran out of time or examples before we could find enough examples\n which satisfy the assumptions of this hypothesis.\n\n This could be because the function is too slow. If so, try upping\n the timeout. It could also be because the function is using assume\n in a way that is too hard to satisfy. If so, try writing a custom\n strategy or using a better starting point (e.g if you are requiring\n a list has unique values you could instead filter out all duplicate\n values from the list)\n\n \"\"\"\n\n\nclass Flaky(HypothesisException):\n\n \"\"\"This function appears to fail non-deterministically: We have seen it\n fail when passed this example at least once, but a subsequent invocation\n did not fail.\n\n Common causes for this problem are:\n 1. The function depends on external state. e.g. it uses an external\n random number generator. Try to make a version that passes all the\n relevant state in from Hypothesis.\n 2. The function is suffering from too much recursion and its failure\n depends sensitively on where it's been called from.\n 3. The function is timing sensitive and can fail or pass depending on\n how long it takes. 
Try breaking it up into smaller functions which\n don't do that and testing those instead.\n\n \"\"\"\n\n\nclass Timeout(Unsatisfiable):\n\n \"\"\"We were unable to find enough examples that satisfied the preconditions\n of this hypothesis in the amount of time allotted to us.\"\"\"\n\n\nclass WrongFormat(HypothesisException, ValueError):\n\n \"\"\"An exception indicating you have attempted to serialize a value that\n does not match the type described by this format.\"\"\"\n\n\nclass BadData(HypothesisException, ValueError):\n\n \"\"\"The data that we got out of the database does not seem to match the data\n we could have put into the database given this schema.\"\"\"\n\n\nclass InvalidArgument(HypothesisException, TypeError):\n\n \"\"\"Used to indicate that the arguments to a Hypothesis function were in\n some manner incorrect.\"\"\"\n\n\nclass ResolutionFailed(InvalidArgument):\n\n \"\"\"Hypothesis had to resolve a type to a strategy, but this failed.\n\n Type inference is best-effort, so this only happens when an\n annotation exists but could not be resolved for a required argument\n to the target of ``builds()``, or where the user passed ``infer``.\n\n \"\"\"\n\n\nclass InvalidState(HypothesisException):\n\n \"\"\"The system is not in a state where you were allowed to do that.\"\"\"\n\n\nclass InvalidDefinition(HypothesisException, TypeError):\n\n \"\"\"Used to indicate that a class definition was not well put together and\n has something wrong with it.\"\"\"\n\n\nclass AbnormalExit(HypothesisException):\n\n \"\"\"Raised when a test running in a child process exits without returning or\n raising an exception.\"\"\"\n\n\nclass FailedHealthCheck(HypothesisException, Warning):\n \"\"\"Raised when a test fails a preliminary healthcheck that occurs before\n execution.\"\"\"\n\n def __init__(self, message, check):\n super(FailedHealthCheck, self).__init__(message)\n self.health_check = check\n\n\nclass HypothesisDeprecationWarning(HypothesisException, FutureWarning):\n pass\n\n\nclass Frozen(HypothesisException):\n\n \"\"\"Raised when a mutation method has been called on a ConjectureData object\n after freeze() has been called.\"\"\"\n\n\nclass MultipleFailures(HypothesisException):\n \"\"\"Indicates that Hypothesis found more than one distinct bug when testing\n your code.\"\"\"\n\n\nclass DeadlineExceeded(HypothesisException):\n \"\"\"Raised when an individual test body has taken too long to run.\"\"\"\n\n def __init__(self, runtime, deadline):\n super(DeadlineExceeded, self).__init__((\n 'Test took %.2fms, which exceeds the deadline of '\n '%.2fms') % (runtime, deadline))\n self.runtime = runtime\n self.deadline = deadline\n\n\nclass StopTest(BaseException):\n\n def __init__(self, testcounter):\n super(StopTest, self).__init__(repr(testcounter))\n self.testcounter = testcounter\n", "path": "src/hypothesis/errors.py"}]}
2416
181
gh_patches_debug_25506
rasdani/github-patches
git_diff
angr__angr-2677
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- SimSegfaultException due to collision of stack and heap when tracing a binary **Describe the bug.** When tracing a CGC binary using a PoV for it, a `SimSegfaultException` is raised due to the collision of stack and heap. **Environment Information.** Platform: linux-x86_64 Python version: 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0] ######## angr ######### Python found it in /home/dnivra/angr-dev/angr/angr Pip version angr 9.0.gitrolling Git info: Current commit 762becbf9c66d4798b8c23cfa512a7f893e2bcf9 from branch master Checked out from remote origin: https://github.com/angr/angr ######## ailment ######### Python found it in /home/dnivra/angr-dev/ailment/ailment Pip version ailment 9.0.gitrolling Git info: Current commit 4e2bba6f0299d1eda6ae570ceabd91eb8a0c72be from branch master Checked out from remote origin: https://github.com/angr/ailment ######## cle ######### Python found it in /home/dnivra/angr-dev/cle/cle Pip version cle 9.0.gitrolling Git info: Current commit 80dcd50abfaa70cbd5b2e360fe41b71406acbfb4 from branch master Checked out from remote origin: https://github.com/angr/cle ######## pyvex ######### Python found it in /home/dnivra/angr-dev/pyvex/pyvex Pip version pyvex 9.0.gitrolling Git info: Current commit 969ec1f10d3e3b15407ee986052aa4b6f2e9df05 from branch master Checked out from remote origin: https://github.com/angr/pyvex ######## claripy ######### Python found it in /home/dnivra/angr-dev/claripy/claripy Pip version claripy 9.0.gitrolling Git info: Current commit 34f31c487f7453f4666cd6fd1d529f417ff6ca08 from branch master Checked out from remote origin: https://github.com/angr/claripy ######## archinfo ######### Python found it in /home/dnivra/angr-dev/archinfo/archinfo Pip version archinfo 9.0.gitrolling Git info: Current commit 437b194538ccb0bf118b4b674613b88832b0b342 from branch master Checked out from remote origin: https://github.com/angr/archinfo ######## z3 ######### Python found it in /home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/z3 Pip version z3-solver 4.8.9.0 Couldn't find git info ######## unicorn ######### Python found it in /home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/unicorn Pip version unicorn 1.0.2rc4 Couldn't find git info ######### Native Module Info ########## angr: <CDLL '/home/dnivra/angr-dev/angr/angr/lib/angr_native.so', handle 3375570 at 0x7f938b7ee220> unicorn: <CDLL '/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/unicorn/lib/libunicorn.so', handle 2aa9f70 at 0x7f93913291c0> pyvex: <cffi.api._make_ffi_library.<locals>.FFILibrary object at 0x7f9391f39130> z3: <CDLL '/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/z3/lib/libz3.so', handle 2d099d0 at 0x7f938ea3a340> **To Reproduce.** [CROMU_00004-stack-heap-collision-repro.zip](https://github.com/angr/angr/files/6481681/CROMU_00004-stack-heap-collision-repro.zip) has script, input and binary to reproduce the issue. `SimSegfaultException` is raised in the `receive` syscall after block 0x804b87b is executed for first time in the VEX engine(13th overall execution). It takes about 10 minutes for execution to reach this location . --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `angr/simos/cgc.py` Content: ``` 1 import logging 2 3 import claripy 4 from cle import BackedCGC 5 6 from ..procedures import SIM_LIBRARIES as L 7 from ..state_plugins import SimActionData 8 from .. import sim_options as o 9 from .userland import SimUserland 10 11 _l = logging.getLogger(name=__name__) 12 13 14 class SimCGC(SimUserland): 15 """ 16 Environment configuration for the CGC DECREE platform 17 """ 18 19 def __init__(self, project, **kwargs): 20 super(SimCGC, self).__init__(project, 21 syscall_library=L['cgcabi'], 22 syscall_addr_alignment=1, 23 name="CGC", 24 **kwargs) 25 26 # pylint: disable=arguments-differ 27 def state_blank(self, flag_page=None, **kwargs): 28 """ 29 :param flag_page: Flag page content, either a string or a list of BV8s 30 """ 31 # default stack as specified in the cgc abi 32 if kwargs.get('stack_end', None) is None: 33 kwargs['stack_end'] = 0xbaaab000 34 if kwargs.get('stack_size', None) is None: 35 kwargs['stack_size'] = 1024*1024*8 36 37 s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name 38 39 # pre-grow the stack by 20 pages. unsure if this is strictly required or just a hack around a compiler bug 40 if hasattr(s.memory, 'allocate_stack_pages'): 41 s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, 20 * 0x1000) 42 43 # Map the flag page 44 if o.ABSTRACT_MEMORY not in s.options: 45 s.memory.map_region(0x4347c000, 4096, 1) 46 47 # Create the CGC plugin 48 s.get_plugin('cgc') 49 50 # Set up the flag page 51 if flag_page is None: 52 flag_page = [s.solver.BVS("cgc-flag-byte-%d" % i, 8, key=('flag', i), eternal=True) for i in range(0x1000)] 53 elif type(flag_page) is bytes: 54 flag_page = [s.solver.BVV(c, 8) for c in flag_page] 55 elif type(flag_page) is list: 56 pass 57 else: 58 raise ValueError("Bad flag page: expected None, bytestring, or list, but got %s" % type(flag_page)) 59 60 s.cgc.flag_bytes = flag_page 61 if s.mode != 'static': 62 s.memory.store(0x4347c000, claripy.Concat(*s.cgc.flag_bytes), priv=True) 63 64 # set up the address for concrete transmits 65 s.unicorn.transmit_addr = self.syscall_from_number(2).addr 66 67 s.libc.max_str_len = 1000000 68 s.libc.max_strtol_len = 10 69 s.libc.max_memcpy_size = 0x100000 70 s.libc.max_buffer_size = 0x100000 71 72 return s 73 74 def state_entry(self, add_options=None, **kwargs): 75 if isinstance(self.project.loader.main_object, BackedCGC): 76 kwargs['permissions_backer'] = (True, self.project.loader.main_object.permissions_map) 77 if add_options is None: 78 add_options = set() 79 add_options.add(o.ZERO_FILL_UNCONSTRAINED_MEMORY) 80 81 state = super(SimCGC, self).state_entry(add_options=add_options, **kwargs) 82 83 if isinstance(self.project.loader.main_object, BackedCGC): 84 # Update allocation base 85 state.cgc.allocation_base = self.project.loader.main_object.current_allocation_base 86 87 # Do all the writes 88 writes_backer = self.project.loader.main_object.writes_backer 89 stdout = state.posix.get_fd(1) 90 pos = 0 91 for size in writes_backer: 92 if size == 0: 93 continue 94 str_to_write = state.solver.BVS('file_write', size*8) 95 a = SimActionData( 96 state, 97 'file_1_0', 98 'write', 99 addr=claripy.BVV(pos, state.arch.bits), 100 data=str_to_write, 101 size=size) 102 stdout.write_data(str_to_write) 103 state.history.add_action(a) 104 pos += size 105 106 else: 107 # Set CGC-specific variables 108 state.regs.eax = 0 109 state.regs.ebx = 0 110 state.regs.ecx = 0x4347c000 111 state.regs.edx = 0 112 state.regs.edi = 0 113 state.regs.esi = 0 114 
state.regs.esp = 0xbaaaaffc 115 state.regs.ebp = 0 116 state.regs.cc_dep1 = 0x202 # default eflags 117 state.regs.cc_op = 0 # OP_COPY 118 state.regs.cc_dep2 = 0 # doesn't matter 119 state.regs.cc_ndep = 0 # doesn't matter 120 121 # fpu values 122 state.regs.mm0 = 0 123 state.regs.mm1 = 0 124 state.regs.mm2 = 0 125 state.regs.mm3 = 0 126 state.regs.mm4 = 0 127 state.regs.mm5 = 0 128 state.regs.mm6 = 0 129 state.regs.mm7 = 0 130 state.regs.fpu_tags = 0 131 state.regs.fpround = 0 132 state.regs.fc3210 = 0x0300 133 state.regs.ftop = 0 134 135 # sse values 136 state.regs.sseround = 0 137 state.regs.xmm0 = 0 138 state.regs.xmm1 = 0 139 state.regs.xmm2 = 0 140 state.regs.xmm3 = 0 141 state.regs.xmm4 = 0 142 state.regs.xmm5 = 0 143 state.regs.xmm6 = 0 144 state.regs.xmm7 = 0 145 146 # segmentation registers 147 state.regs.ds = 0 148 state.regs.es = 0 149 state.regs.fs = 0 150 state.regs.gs = 0 151 state.regs.ss = 0 152 state.regs.cs = 0 153 154 return state 155 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/angr/simos/cgc.py b/angr/simos/cgc.py --- a/angr/simos/cgc.py +++ b/angr/simos/cgc.py @@ -24,9 +24,10 @@ **kwargs) # pylint: disable=arguments-differ - def state_blank(self, flag_page=None, **kwargs): + def state_blank(self, flag_page=None, allocate_stack_page_count=0x100, **kwargs): """ - :param flag_page: Flag page content, either a string or a list of BV8s + :param flag_page: Flag page content, either a string or a list of BV8s + :param allocate_stack_page_count: Number of pages to pre-allocate for stack """ # default stack as specified in the cgc abi if kwargs.get('stack_end', None) is None: @@ -36,9 +37,9 @@ s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name - # pre-grow the stack by 20 pages. unsure if this is strictly required or just a hack around a compiler bug + # pre-grow the stack. unsure if this is strictly required or just a hack around a compiler bug if hasattr(s.memory, 'allocate_stack_pages'): - s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, 20 * 0x1000) + s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, allocate_stack_page_count * 0x1000) # Map the flag page if o.ABSTRACT_MEMORY not in s.options:
{"golden_diff": "diff --git a/angr/simos/cgc.py b/angr/simos/cgc.py\n--- a/angr/simos/cgc.py\n+++ b/angr/simos/cgc.py\n@@ -24,9 +24,10 @@\n **kwargs)\n \n # pylint: disable=arguments-differ\n- def state_blank(self, flag_page=None, **kwargs):\n+ def state_blank(self, flag_page=None, allocate_stack_page_count=0x100, **kwargs):\n \"\"\"\n- :param flag_page: Flag page content, either a string or a list of BV8s\n+ :param flag_page: Flag page content, either a string or a list of BV8s\n+ :param allocate_stack_page_count: Number of pages to pre-allocate for stack\n \"\"\"\n # default stack as specified in the cgc abi\n if kwargs.get('stack_end', None) is None:\n@@ -36,9 +37,9 @@\n \n s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name\n \n- # pre-grow the stack by 20 pages. unsure if this is strictly required or just a hack around a compiler bug\n+ # pre-grow the stack. unsure if this is strictly required or just a hack around a compiler bug\n if hasattr(s.memory, 'allocate_stack_pages'):\n- s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, 20 * 0x1000)\n+ s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, allocate_stack_page_count * 0x1000)\n \n # Map the flag page\n if o.ABSTRACT_MEMORY not in s.options:\n", "issue": "SimSegfaultException due to collision of stack and heap when tracing a binary\n**Describe the bug.**\r\n\r\nWhen tracing a CGC binary using a PoV for it, a `SimSegfaultException` is raised due to the collision of stack and heap.\r\n\r\n**Environment Information.**\r\n\r\nPlatform: linux-x86_64\r\nPython version: 3.8.5 (default, Jan 27 2021, 15:41:15) \r\n[GCC 9.3.0]\r\n######## angr #########\r\nPython found it in /home/dnivra/angr-dev/angr/angr\r\nPip version angr 9.0.gitrolling\r\nGit info:\r\n Current commit 762becbf9c66d4798b8c23cfa512a7f893e2bcf9 from branch master\r\n Checked out from remote origin: https://github.com/angr/angr\r\n######## ailment #########\r\nPython found it in /home/dnivra/angr-dev/ailment/ailment\r\nPip version ailment 9.0.gitrolling\r\nGit info:\r\n Current commit 4e2bba6f0299d1eda6ae570ceabd91eb8a0c72be from branch master\r\n Checked out from remote origin: https://github.com/angr/ailment\r\n######## cle #########\r\nPython found it in /home/dnivra/angr-dev/cle/cle\r\nPip version cle 9.0.gitrolling\r\nGit info:\r\n Current commit 80dcd50abfaa70cbd5b2e360fe41b71406acbfb4 from branch master\r\n Checked out from remote origin: https://github.com/angr/cle\r\n######## pyvex #########\r\nPython found it in /home/dnivra/angr-dev/pyvex/pyvex\r\nPip version pyvex 9.0.gitrolling\r\nGit info:\r\n Current commit 969ec1f10d3e3b15407ee986052aa4b6f2e9df05 from branch master\r\n Checked out from remote origin: https://github.com/angr/pyvex\r\n######## claripy #########\r\nPython found it in /home/dnivra/angr-dev/claripy/claripy\r\nPip version claripy 9.0.gitrolling\r\nGit info:\r\n Current commit 34f31c487f7453f4666cd6fd1d529f417ff6ca08 from branch master\r\n Checked out from remote origin: https://github.com/angr/claripy\r\n######## archinfo #########\r\nPython found it in /home/dnivra/angr-dev/archinfo/archinfo\r\nPip version archinfo 9.0.gitrolling\r\nGit info:\r\n Current commit 437b194538ccb0bf118b4b674613b88832b0b342 from branch master\r\n Checked out from remote origin: https://github.com/angr/archinfo\r\n######## z3 #########\r\nPython found it in /home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/z3\r\nPip version z3-solver 4.8.9.0\r\nCouldn't find git info\r\n######## unicorn #########\r\nPython found it in 
/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/unicorn\r\nPip version unicorn 1.0.2rc4\r\nCouldn't find git info\r\n######### Native Module Info ##########\r\nangr: <CDLL '/home/dnivra/angr-dev/angr/angr/lib/angr_native.so', handle 3375570 at 0x7f938b7ee220>\r\nunicorn: <CDLL '/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/unicorn/lib/libunicorn.so', handle 2aa9f70 at 0x7f93913291c0>\r\npyvex: <cffi.api._make_ffi_library.<locals>.FFILibrary object at 0x7f9391f39130>\r\nz3: <CDLL '/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/z3/lib/libz3.so', handle 2d099d0 at 0x7f938ea3a340>\r\n\r\n**To Reproduce.**\r\n\r\n[CROMU_00004-stack-heap-collision-repro.zip](https://github.com/angr/angr/files/6481681/CROMU_00004-stack-heap-collision-repro.zip) has script, input and binary to reproduce the issue. `SimSegfaultException` is raised in the `receive` syscall after block 0x804b87b is executed for first time in the VEX engine(13th overall execution). It takes about 10 minutes for execution to reach this location\r\n\r\n.\n", "before_files": [{"content": "import logging\n\nimport claripy\nfrom cle import BackedCGC\n\nfrom ..procedures import SIM_LIBRARIES as L\nfrom ..state_plugins import SimActionData\nfrom .. import sim_options as o\nfrom .userland import SimUserland\n\n_l = logging.getLogger(name=__name__)\n\n\nclass SimCGC(SimUserland):\n \"\"\"\n Environment configuration for the CGC DECREE platform\n \"\"\"\n\n def __init__(self, project, **kwargs):\n super(SimCGC, self).__init__(project,\n syscall_library=L['cgcabi'],\n syscall_addr_alignment=1,\n name=\"CGC\",\n **kwargs)\n\n # pylint: disable=arguments-differ\n def state_blank(self, flag_page=None, **kwargs):\n \"\"\"\n :param flag_page: Flag page content, either a string or a list of BV8s\n \"\"\"\n # default stack as specified in the cgc abi\n if kwargs.get('stack_end', None) is None:\n kwargs['stack_end'] = 0xbaaab000\n if kwargs.get('stack_size', None) is None:\n kwargs['stack_size'] = 1024*1024*8\n\n s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name\n\n # pre-grow the stack by 20 pages. 
unsure if this is strictly required or just a hack around a compiler bug\n if hasattr(s.memory, 'allocate_stack_pages'):\n s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, 20 * 0x1000)\n\n # Map the flag page\n if o.ABSTRACT_MEMORY not in s.options:\n s.memory.map_region(0x4347c000, 4096, 1)\n\n # Create the CGC plugin\n s.get_plugin('cgc')\n\n # Set up the flag page\n if flag_page is None:\n flag_page = [s.solver.BVS(\"cgc-flag-byte-%d\" % i, 8, key=('flag', i), eternal=True) for i in range(0x1000)]\n elif type(flag_page) is bytes:\n flag_page = [s.solver.BVV(c, 8) for c in flag_page]\n elif type(flag_page) is list:\n pass\n else:\n raise ValueError(\"Bad flag page: expected None, bytestring, or list, but got %s\" % type(flag_page))\n\n s.cgc.flag_bytes = flag_page\n if s.mode != 'static':\n s.memory.store(0x4347c000, claripy.Concat(*s.cgc.flag_bytes), priv=True)\n\n # set up the address for concrete transmits\n s.unicorn.transmit_addr = self.syscall_from_number(2).addr\n\n s.libc.max_str_len = 1000000\n s.libc.max_strtol_len = 10\n s.libc.max_memcpy_size = 0x100000\n s.libc.max_buffer_size = 0x100000\n\n return s\n\n def state_entry(self, add_options=None, **kwargs):\n if isinstance(self.project.loader.main_object, BackedCGC):\n kwargs['permissions_backer'] = (True, self.project.loader.main_object.permissions_map)\n if add_options is None:\n add_options = set()\n add_options.add(o.ZERO_FILL_UNCONSTRAINED_MEMORY)\n\n state = super(SimCGC, self).state_entry(add_options=add_options, **kwargs)\n\n if isinstance(self.project.loader.main_object, BackedCGC):\n # Update allocation base\n state.cgc.allocation_base = self.project.loader.main_object.current_allocation_base\n\n # Do all the writes\n writes_backer = self.project.loader.main_object.writes_backer\n stdout = state.posix.get_fd(1)\n pos = 0\n for size in writes_backer:\n if size == 0:\n continue\n str_to_write = state.solver.BVS('file_write', size*8)\n a = SimActionData(\n state,\n 'file_1_0',\n 'write',\n addr=claripy.BVV(pos, state.arch.bits),\n data=str_to_write,\n size=size)\n stdout.write_data(str_to_write)\n state.history.add_action(a)\n pos += size\n\n else:\n # Set CGC-specific variables\n state.regs.eax = 0\n state.regs.ebx = 0\n state.regs.ecx = 0x4347c000\n state.regs.edx = 0\n state.regs.edi = 0\n state.regs.esi = 0\n state.regs.esp = 0xbaaaaffc\n state.regs.ebp = 0\n state.regs.cc_dep1 = 0x202 # default eflags\n state.regs.cc_op = 0 # OP_COPY\n state.regs.cc_dep2 = 0 # doesn't matter\n state.regs.cc_ndep = 0 # doesn't matter\n\n # fpu values\n state.regs.mm0 = 0\n state.regs.mm1 = 0\n state.regs.mm2 = 0\n state.regs.mm3 = 0\n state.regs.mm4 = 0\n state.regs.mm5 = 0\n state.regs.mm6 = 0\n state.regs.mm7 = 0\n state.regs.fpu_tags = 0\n state.regs.fpround = 0\n state.regs.fc3210 = 0x0300\n state.regs.ftop = 0\n\n # sse values\n state.regs.sseround = 0\n state.regs.xmm0 = 0\n state.regs.xmm1 = 0\n state.regs.xmm2 = 0\n state.regs.xmm3 = 0\n state.regs.xmm4 = 0\n state.regs.xmm5 = 0\n state.regs.xmm6 = 0\n state.regs.xmm7 = 0\n\n # segmentation registers\n state.regs.ds = 0\n state.regs.es = 0\n state.regs.fs = 0\n state.regs.gs = 0\n state.regs.ss = 0\n state.regs.cs = 0\n\n return state\n", "path": "angr/simos/cgc.py"}], "after_files": [{"content": "import logging\n\nimport claripy\nfrom cle import BackedCGC\n\nfrom ..procedures import SIM_LIBRARIES as L\nfrom ..state_plugins import SimActionData\nfrom .. 
import sim_options as o\nfrom .userland import SimUserland\n\n_l = logging.getLogger(name=__name__)\n\n\nclass SimCGC(SimUserland):\n \"\"\"\n Environment configuration for the CGC DECREE platform\n \"\"\"\n\n def __init__(self, project, **kwargs):\n super(SimCGC, self).__init__(project,\n syscall_library=L['cgcabi'],\n syscall_addr_alignment=1,\n name=\"CGC\",\n **kwargs)\n\n # pylint: disable=arguments-differ\n def state_blank(self, flag_page=None, allocate_stack_page_count=0x100, **kwargs):\n \"\"\"\n :param flag_page: Flag page content, either a string or a list of BV8s\n :param allocate_stack_page_count: Number of pages to pre-allocate for stack\n \"\"\"\n # default stack as specified in the cgc abi\n if kwargs.get('stack_end', None) is None:\n kwargs['stack_end'] = 0xbaaab000\n if kwargs.get('stack_size', None) is None:\n kwargs['stack_size'] = 1024*1024*8\n\n s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name\n\n # pre-grow the stack. unsure if this is strictly required or just a hack around a compiler bug\n if hasattr(s.memory, 'allocate_stack_pages'):\n s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, allocate_stack_page_count * 0x1000)\n\n # Map the flag page\n if o.ABSTRACT_MEMORY not in s.options:\n s.memory.map_region(0x4347c000, 4096, 1)\n\n # Create the CGC plugin\n s.get_plugin('cgc')\n\n # Set up the flag page\n if flag_page is None:\n flag_page = [s.solver.BVS(\"cgc-flag-byte-%d\" % i, 8, key=('flag', i), eternal=True) for i in range(0x1000)]\n elif type(flag_page) is bytes:\n flag_page = [s.solver.BVV(c, 8) for c in flag_page]\n elif type(flag_page) is list:\n pass\n else:\n raise ValueError(\"Bad flag page: expected None, bytestring, or list, but got %s\" % type(flag_page))\n\n s.cgc.flag_bytes = flag_page\n if s.mode != 'static':\n s.memory.store(0x4347c000, claripy.Concat(*s.cgc.flag_bytes), priv=True)\n\n # set up the address for concrete transmits\n s.unicorn.transmit_addr = self.syscall_from_number(2).addr\n\n s.libc.max_str_len = 1000000\n s.libc.max_strtol_len = 10\n s.libc.max_memcpy_size = 0x100000\n s.libc.max_buffer_size = 0x100000\n\n return s\n\n def state_entry(self, add_options=None, **kwargs):\n if isinstance(self.project.loader.main_object, BackedCGC):\n kwargs['permissions_backer'] = (True, self.project.loader.main_object.permissions_map)\n if add_options is None:\n add_options = set()\n add_options.add(o.ZERO_FILL_UNCONSTRAINED_MEMORY)\n\n state = super(SimCGC, self).state_entry(add_options=add_options, **kwargs)\n\n if isinstance(self.project.loader.main_object, BackedCGC):\n # Update allocation base\n state.cgc.allocation_base = self.project.loader.main_object.current_allocation_base\n\n # Do all the writes\n writes_backer = self.project.loader.main_object.writes_backer\n stdout = state.posix.get_fd(1)\n pos = 0\n for size in writes_backer:\n if size == 0:\n continue\n str_to_write = state.solver.BVS('file_write', size*8)\n a = SimActionData(\n state,\n 'file_1_0',\n 'write',\n addr=claripy.BVV(pos, state.arch.bits),\n data=str_to_write,\n size=size)\n stdout.write_data(str_to_write)\n state.history.add_action(a)\n pos += size\n\n else:\n # Set CGC-specific variables\n state.regs.eax = 0\n state.regs.ebx = 0\n state.regs.ecx = 0x4347c000\n state.regs.edx = 0\n state.regs.edi = 0\n state.regs.esi = 0\n state.regs.esp = 0xbaaaaffc\n state.regs.ebp = 0\n state.regs.cc_dep1 = 0x202 # default eflags\n state.regs.cc_op = 0 # OP_COPY\n state.regs.cc_dep2 = 0 # doesn't matter\n state.regs.cc_ndep = 0 # doesn't matter\n\n # fpu 
values\n state.regs.mm0 = 0\n state.regs.mm1 = 0\n state.regs.mm2 = 0\n state.regs.mm3 = 0\n state.regs.mm4 = 0\n state.regs.mm5 = 0\n state.regs.mm6 = 0\n state.regs.mm7 = 0\n state.regs.fpu_tags = 0\n state.regs.fpround = 0\n state.regs.fc3210 = 0x0300\n state.regs.ftop = 0\n\n # sse values\n state.regs.sseround = 0\n state.regs.xmm0 = 0\n state.regs.xmm1 = 0\n state.regs.xmm2 = 0\n state.regs.xmm3 = 0\n state.regs.xmm4 = 0\n state.regs.xmm5 = 0\n state.regs.xmm6 = 0\n state.regs.xmm7 = 0\n\n # segmentation registers\n state.regs.ds = 0\n state.regs.es = 0\n state.regs.fs = 0\n state.regs.gs = 0\n state.regs.ss = 0\n state.regs.cs = 0\n\n return state\n", "path": "angr/simos/cgc.py"}]}
3,241
382
gh_patches_debug_34087
rasdani/github-patches
git_diff
voxel51__fiftyone-509
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Show currently selected samples in the App, and allow selection/exclusion We should replace `Dataset: coco-2017-validation` in the App screenshot below with `Selected # samples`, to indicate how many samples are currently selected: Also, we should add an icon next to `Selected # samples` that the user can click on. This icon would open a small window that lets the user pick between two actions to take: - show only these samples: this would add a `Select([list, of, selected, samples])` stage to the current view - exclude these samples: this would add an `Exclude([list, of, selected, samples])` stage to the current view <img width="1323" alt="Screen Shot 2020-08-13 at 3 25 08 PM" src="https://user-images.githubusercontent.com/25985824/90178699-45803480-dd7a-11ea-9b21-3e1adc413e23.png"> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `fiftyone/server/main.py` Content: ``` 1 """ 2 FiftyOne Flask server. 3 4 | Copyright 2017-2020, Voxel51, Inc. 5 | `voxel51.com <https://voxel51.com/>`_ 6 | 7 """ 8 import argparse 9 import json 10 import logging 11 import os 12 import uuid 13 14 from bson import json_util 15 from flask import Flask, jsonify, request, send_file 16 from flask_cors import CORS 17 from flask_socketio import emit, Namespace, SocketIO 18 19 import eta.core.utils as etau 20 21 os.environ["FIFTYONE_SERVER"] = "1" 22 import fiftyone.constants as foc 23 import fiftyone.core.fields as fof 24 import fiftyone.core.odm as foo 25 from fiftyone.core.service import DatabaseService 26 from fiftyone.core.stages import _STAGES 27 import fiftyone.core.state as fos 28 29 from util import get_image_size 30 from pipelines import DISTRIBUTION_PIPELINES, LABELS, SCALARS 31 32 33 logger = logging.getLogger(__name__) 34 35 # connect to the existing DB service to initialize global port information 36 db = DatabaseService() 37 db.start() 38 39 app = Flask(__name__) 40 CORS(app) 41 42 app.config["SECRET_KEY"] = "fiftyone" 43 44 socketio = SocketIO(app, async_mode="eventlet", cors_allowed_origins="*") 45 46 47 def get_user_id(): 48 uid_path = os.path.join(foc.FIFTYONE_CONFIG_DIR, "var", "uid") 49 50 def read(): 51 try: 52 with open(uid_path) as f: 53 return next(f).strip() 54 except (IOError, StopIteration): 55 return None 56 57 if not read(): 58 os.makedirs(os.path.dirname(uid_path), exist_ok=True) 59 with open(uid_path, "w") as f: 60 f.write(str(uuid.uuid4())) 61 return read() 62 63 64 @app.route("/") 65 def get_sample_media(): 66 """Gets the sample media. 67 68 Returns: 69 bytes 70 """ 71 path = request.args.get("path") 72 return send_file(path) 73 74 75 @app.route("/fiftyone") 76 def get_fiftyone_info(): 77 return jsonify({"version": foc.VERSION}) 78 79 80 @app.route("/stages") 81 def get_stages(): 82 """Gets ViewStage descriptions""" 83 return { 84 "stages": [ 85 {"name": stage.__name__, "params": stage._params()} 86 for stage in _STAGES 87 ] 88 } 89 90 91 def _load_state(func): 92 def wrapper(self, *args, **kwargs): 93 state = fos.StateDescriptionWithDerivables.from_dict(self.state) 94 state = func(self, state, *args, **kwargs) 95 self.state = state.serialize() 96 emit("update", self.state, broadcast=True, include_self=False) 97 return self.state 98 99 return wrapper 100 101 102 class StateController(Namespace): 103 """State controller. 
104 105 Attributes: 106 state: a :class:`fiftyone.core.state.StateDescriptionWithDerivables` 107 instance 108 109 Args: 110 **args: positional arguments for ``flask_socketio.Namespace`` 111 **kwargs: keyword arguments for ``flask_socketio.Namespace`` 112 """ 113 114 def __init__(self, *args, **kwargs): 115 self.state = fos.StateDescriptionWithDerivables().serialize() 116 super().__init__(*args, **kwargs) 117 118 def on_connect(self): 119 """Handles connection to the server.""" 120 pass 121 122 def on_disconnect(self): 123 """Handles disconnection from the server.""" 124 pass 125 126 def on_update(self, data): 127 """Updates the state. 128 129 Args: 130 state_dict: a serialized 131 :class:`fiftyone.core.state.StateDescription` 132 """ 133 self.state = fos.StateDescriptionWithDerivables.from_dict( 134 data["data"] 135 ).serialize() 136 emit( 137 "update", 138 self.state, 139 broadcast=True, 140 include_self=data["include_self"], 141 ) 142 143 def on_get_fiftyone_info(self): 144 """Retrieves information about the FiftyOne installation.""" 145 return { 146 "version": foc.VERSION, 147 "user_id": get_user_id(), 148 } 149 150 def on_get_current_state(self, _): 151 """Gets the current state. 152 153 Returns: 154 a :class:`fiftyone.core.state.StateDescriptionWithDerivables` 155 """ 156 return self.state 157 158 @_load_state 159 def on_add_selection(self, state, _id): 160 """Adds a sample to the selected samples list. 161 162 Args: 163 state: the current 164 :class:`fiftyone.core.state.StateDescriptionWithDerivables` 165 _id: the sample ID 166 167 Returns: 168 the updated 169 :class:`fiftyone.core.state.StateDescriptionWithDerivables` 170 """ 171 selected = set(state.selected) 172 selected.add(_id) 173 state.selected = list(selected) 174 return state 175 176 @_load_state 177 def on_remove_selection(self, state, _id): 178 """Remove a sample from the selected samples list 179 180 Args: 181 state: the current 182 :class:`fiftyone.core.state.StateDescriptionWithDerivables` 183 _id: the sample ID 184 185 Returns: 186 the updated 187 :class:`fiftyone.core.state.StateDescriptionWithDerivables` 188 """ 189 selected = set(state.selected) 190 selected.remove(_id) 191 state.selected = list(selected) 192 return state 193 194 def on_page(self, page, page_length=20): 195 """Gets the requested page of samples. 196 197 Args: 198 page: the page number 199 page_length: the page length 200 201 Returns: 202 the list of sample dicts for the page 203 """ 204 state = fos.StateDescriptionWithDerivables.from_dict(self.state) 205 if state.view is not None: 206 view = state.view 207 elif state.dataset is not None: 208 view = state.dataset.view() 209 else: 210 return [] 211 212 view = view.skip((page - 1) * page_length).limit(page_length + 1) 213 samples = [ 214 json.loads( 215 json_util.dumps(s.to_mongo_dict()), parse_constant=lambda c: c 216 ) 217 for s in view 218 ] 219 more = False 220 if len(samples) > page_length: 221 samples = samples[:page_length] 222 more = page + 1 223 224 results = [{"sample": s} for s in samples] 225 for r in results: 226 w, h = get_image_size(r["sample"]["filepath"]) 227 r["width"] = w 228 r["height"] = h 229 230 return {"results": results, "more": more} 231 232 def on_get_distributions(self, group): 233 """Gets the distributions for the current state with respect to a 234 group. 
235 236 Args: 237 group: one of "labels", "tags", or "scalars" 238 239 Returns: 240 a list of distributions 241 """ 242 state = fos.StateDescriptionWithDerivables.from_dict(self.state) 243 if state.view is not None: 244 view = state.view 245 elif state.dataset is not None: 246 view = state.dataset.view() 247 else: 248 return [] 249 250 return _get_distributions(view, group) 251 252 253 def _get_distributions(view, group): 254 pipeline = DISTRIBUTION_PIPELINES[group] 255 256 # we add a sub-pipeline for each numeric as it looks like multiple 257 # buckets in a single pipeline is not supported 258 if group == SCALARS: 259 _numeric_distribution_pipelines(view, pipeline) 260 261 result = list(view.aggregate(pipeline)) 262 263 if group in {LABELS, SCALARS}: 264 new_result = [] 265 for f in result[0].values(): 266 new_result += f 267 result = new_result 268 269 if group != SCALARS: 270 for idx, dist in enumerate(result): 271 result[idx]["data"] = sorted( 272 result[idx]["data"], key=lambda c: c["count"], reverse=True 273 ) 274 275 return sorted(result, key=lambda d: d["name"]) 276 277 278 def _numeric_bounds(view, numerics): 279 bounds_pipeline = [{"$facet": {}}] 280 for idx, (k, v) in enumerate(numerics.items()): 281 bounds_pipeline[0]["$facet"]["numeric-%d" % idx] = [ 282 { 283 "$group": { 284 "_id": k, 285 "min": {"$min": "$%s" % k}, 286 "max": {"$max": "$%s" % k}, 287 }, 288 } 289 ] 290 291 return list(view.aggregate(bounds_pipeline))[0] if len(numerics) else {} 292 293 294 def _numeric_distribution_pipelines(view, pipeline, buckets=50): 295 numerics = view._dataset.get_field_schema(ftype=fof.IntField) 296 numerics.update(view._dataset.get_field_schema(ftype=fof.FloatField)) 297 298 # here we query the min and max for each numeric field 299 # unfortunately, it looks like this has to be a separate query 300 bounds = _numeric_bounds(view, numerics) 301 302 # for each numeric field, build the boundaries array with the 303 # min/max results when adding the field's sub-pipeline 304 for idx, (k, v) in enumerate(numerics.items()): 305 sub_pipeline = "numeric-%d" % idx 306 field_bounds = bounds[sub_pipeline][0] 307 mn = field_bounds["min"] 308 mx = field_bounds["max"] 309 310 # if min and max are equal, we artifically create a boundary 311 # @todo alternative approach to scalar fields with only one value 312 if mn == mx: 313 if mx > 0: 314 mn = 0 315 else: 316 mx = 0 317 318 step = (mx - mn) / buckets 319 boundaries = [mn + step * s for s in range(0, buckets)] 320 321 pipeline[0]["$facet"][sub_pipeline] = [ 322 { 323 "$bucket": { 324 "groupBy": "$%s" % k, 325 "boundaries": boundaries, 326 "default": "null", 327 "output": {"count": {"$sum": 1}}, 328 } 329 }, 330 { 331 "$group": { 332 "_id": k, 333 "data": { 334 "$push": { 335 "key": { 336 "$cond": [ 337 {"$ne": ["$_id", "null"]}, 338 {"$add": ["$_id", step / 2]}, 339 "null", 340 ] 341 }, 342 "count": "$count", 343 } 344 }, 345 } 346 }, 347 { 348 "$project": { 349 "name": k, 350 "type": v.__class__.__name__[ 351 : -len("Field") # grab field type from the class 352 ].lower(), 353 "data": "$data", 354 } 355 }, 356 ] 357 358 359 socketio.on_namespace(StateController("/state")) 360 361 362 if __name__ == "__main__": 363 log_path = os.path.join( 364 foc.FIFTYONE_CONFIG_DIR, "var", "log", "server.log" 365 ) 366 etau.ensure_basedir(log_path) 367 # pylint: disable=no-member 368 app.logger.addHandler(logging.FileHandler(log_path, mode="w")) 369 370 parser = argparse.ArgumentParser() 371 parser.add_argument("--port", type=int, default=5151) 372 args = 
parser.parse_args() 373 374 socketio.run(app, port=args.port, debug=foc.DEV_INSTALL) 375 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/fiftyone/server/main.py b/fiftyone/server/main.py --- a/fiftyone/server/main.py +++ b/fiftyone/server/main.py @@ -88,15 +88,23 @@ } -def _load_state(func): - def wrapper(self, *args, **kwargs): - state = fos.StateDescriptionWithDerivables.from_dict(self.state) - state = func(self, state, *args, **kwargs) - self.state = state.serialize() - emit("update", self.state, broadcast=True, include_self=False) - return self.state +def _load_state(trigger_update=False): + def decorator(func): + def wrapper(self, *args, **kwargs): + state = fos.StateDescriptionWithDerivables.from_dict(self.state) + state = func(self, state, *args, **kwargs) + self.state = state.serialize() + emit( + "update", + self.state, + broadcast=True, + include_self=trigger_update, + ) + return self.state + + return wrapper - return wrapper + return decorator class StateController(Namespace): @@ -155,7 +163,7 @@ """ return self.state - @_load_state + @_load_state() def on_add_selection(self, state, _id): """Adds a sample to the selected samples list. @@ -173,7 +181,7 @@ state.selected = list(selected) return state - @_load_state + @_load_state() def on_remove_selection(self, state, _id): """Remove a sample from the selected samples list @@ -191,6 +199,21 @@ state.selected = list(selected) return state + @_load_state() + def on_clear_selection(self, state): + """Remove all samples from the selected samples list + + Args: + state: the current + :class:`fiftyone.core.state.StateDescriptionWithDerivables` + + Returns: + the updated + :class:`fiftyone.core.state.StateDescriptionWithDerivables` + """ + state.selected = [] + return state + def on_page(self, page, page_length=20): """Gets the requested page of samples.
{"golden_diff": "diff --git a/fiftyone/server/main.py b/fiftyone/server/main.py\n--- a/fiftyone/server/main.py\n+++ b/fiftyone/server/main.py\n@@ -88,15 +88,23 @@\n }\n \n \n-def _load_state(func):\n- def wrapper(self, *args, **kwargs):\n- state = fos.StateDescriptionWithDerivables.from_dict(self.state)\n- state = func(self, state, *args, **kwargs)\n- self.state = state.serialize()\n- emit(\"update\", self.state, broadcast=True, include_self=False)\n- return self.state\n+def _load_state(trigger_update=False):\n+ def decorator(func):\n+ def wrapper(self, *args, **kwargs):\n+ state = fos.StateDescriptionWithDerivables.from_dict(self.state)\n+ state = func(self, state, *args, **kwargs)\n+ self.state = state.serialize()\n+ emit(\n+ \"update\",\n+ self.state,\n+ broadcast=True,\n+ include_self=trigger_update,\n+ )\n+ return self.state\n+\n+ return wrapper\n \n- return wrapper\n+ return decorator\n \n \n class StateController(Namespace):\n@@ -155,7 +163,7 @@\n \"\"\"\n return self.state\n \n- @_load_state\n+ @_load_state()\n def on_add_selection(self, state, _id):\n \"\"\"Adds a sample to the selected samples list.\n \n@@ -173,7 +181,7 @@\n state.selected = list(selected)\n return state\n \n- @_load_state\n+ @_load_state()\n def on_remove_selection(self, state, _id):\n \"\"\"Remove a sample from the selected samples list\n \n@@ -191,6 +199,21 @@\n state.selected = list(selected)\n return state\n \n+ @_load_state()\n+ def on_clear_selection(self, state):\n+ \"\"\"Remove all samples from the selected samples list\n+\n+ Args:\n+ state: the current\n+ :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n+\n+ Returns:\n+ the updated\n+ :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n+ \"\"\"\n+ state.selected = []\n+ return state\n+\n def on_page(self, page, page_length=20):\n \"\"\"Gets the requested page of samples.\n", "issue": "Show currently selected samples in the App, and allow selection/exclusion\nWe should replace `Dataset: coco-2017-validation` in the App screenshot below with `Selected # samples`, to indicate how many samples are currently selected:\r\n\r\nAlso, we should add an icon next to `Selected # samples` that the user can click on. 
This icon would open a small window that lets the user pick between two actions to take:\r\n- show only these samples: this would add a `Select([list, of, selected, samples])` stage to the current view\r\n- exclude these samples: this would add an `Exclude([list, of, selected, samples])` stage to the current view\r\n\r\n<img width=\"1323\" alt=\"Screen Shot 2020-08-13 at 3 25 08 PM\" src=\"https://user-images.githubusercontent.com/25985824/90178699-45803480-dd7a-11ea-9b21-3e1adc413e23.png\">\r\n\n", "before_files": [{"content": "\"\"\"\nFiftyOne Flask server.\n\n| Copyright 2017-2020, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nimport argparse\nimport json\nimport logging\nimport os\nimport uuid\n\nfrom bson import json_util\nfrom flask import Flask, jsonify, request, send_file\nfrom flask_cors import CORS\nfrom flask_socketio import emit, Namespace, SocketIO\n\nimport eta.core.utils as etau\n\nos.environ[\"FIFTYONE_SERVER\"] = \"1\"\nimport fiftyone.constants as foc\nimport fiftyone.core.fields as fof\nimport fiftyone.core.odm as foo\nfrom fiftyone.core.service import DatabaseService\nfrom fiftyone.core.stages import _STAGES\nimport fiftyone.core.state as fos\n\nfrom util import get_image_size\nfrom pipelines import DISTRIBUTION_PIPELINES, LABELS, SCALARS\n\n\nlogger = logging.getLogger(__name__)\n\n# connect to the existing DB service to initialize global port information\ndb = DatabaseService()\ndb.start()\n\napp = Flask(__name__)\nCORS(app)\n\napp.config[\"SECRET_KEY\"] = \"fiftyone\"\n\nsocketio = SocketIO(app, async_mode=\"eventlet\", cors_allowed_origins=\"*\")\n\n\ndef get_user_id():\n uid_path = os.path.join(foc.FIFTYONE_CONFIG_DIR, \"var\", \"uid\")\n\n def read():\n try:\n with open(uid_path) as f:\n return next(f).strip()\n except (IOError, StopIteration):\n return None\n\n if not read():\n os.makedirs(os.path.dirname(uid_path), exist_ok=True)\n with open(uid_path, \"w\") as f:\n f.write(str(uuid.uuid4()))\n return read()\n\n\[email protected](\"/\")\ndef get_sample_media():\n \"\"\"Gets the sample media.\n\n Returns:\n bytes\n \"\"\"\n path = request.args.get(\"path\")\n return send_file(path)\n\n\[email protected](\"/fiftyone\")\ndef get_fiftyone_info():\n return jsonify({\"version\": foc.VERSION})\n\n\[email protected](\"/stages\")\ndef get_stages():\n \"\"\"Gets ViewStage descriptions\"\"\"\n return {\n \"stages\": [\n {\"name\": stage.__name__, \"params\": stage._params()}\n for stage in _STAGES\n ]\n }\n\n\ndef _load_state(func):\n def wrapper(self, *args, **kwargs):\n state = fos.StateDescriptionWithDerivables.from_dict(self.state)\n state = func(self, state, *args, **kwargs)\n self.state = state.serialize()\n emit(\"update\", self.state, broadcast=True, include_self=False)\n return self.state\n\n return wrapper\n\n\nclass StateController(Namespace):\n \"\"\"State controller.\n\n Attributes:\n state: a :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n instance\n\n Args:\n **args: positional arguments for ``flask_socketio.Namespace``\n **kwargs: keyword arguments for ``flask_socketio.Namespace``\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n self.state = fos.StateDescriptionWithDerivables().serialize()\n super().__init__(*args, **kwargs)\n\n def on_connect(self):\n \"\"\"Handles connection to the server.\"\"\"\n pass\n\n def on_disconnect(self):\n \"\"\"Handles disconnection from the server.\"\"\"\n pass\n\n def on_update(self, data):\n \"\"\"Updates the state.\n\n Args:\n state_dict: a serialized\n 
:class:`fiftyone.core.state.StateDescription`\n \"\"\"\n self.state = fos.StateDescriptionWithDerivables.from_dict(\n data[\"data\"]\n ).serialize()\n emit(\n \"update\",\n self.state,\n broadcast=True,\n include_self=data[\"include_self\"],\n )\n\n def on_get_fiftyone_info(self):\n \"\"\"Retrieves information about the FiftyOne installation.\"\"\"\n return {\n \"version\": foc.VERSION,\n \"user_id\": get_user_id(),\n }\n\n def on_get_current_state(self, _):\n \"\"\"Gets the current state.\n\n Returns:\n a :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n \"\"\"\n return self.state\n\n @_load_state\n def on_add_selection(self, state, _id):\n \"\"\"Adds a sample to the selected samples list.\n\n Args:\n state: the current\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n _id: the sample ID\n\n Returns:\n the updated\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n \"\"\"\n selected = set(state.selected)\n selected.add(_id)\n state.selected = list(selected)\n return state\n\n @_load_state\n def on_remove_selection(self, state, _id):\n \"\"\"Remove a sample from the selected samples list\n\n Args:\n state: the current\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n _id: the sample ID\n\n Returns:\n the updated\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n \"\"\"\n selected = set(state.selected)\n selected.remove(_id)\n state.selected = list(selected)\n return state\n\n def on_page(self, page, page_length=20):\n \"\"\"Gets the requested page of samples.\n\n Args:\n page: the page number\n page_length: the page length\n\n Returns:\n the list of sample dicts for the page\n \"\"\"\n state = fos.StateDescriptionWithDerivables.from_dict(self.state)\n if state.view is not None:\n view = state.view\n elif state.dataset is not None:\n view = state.dataset.view()\n else:\n return []\n\n view = view.skip((page - 1) * page_length).limit(page_length + 1)\n samples = [\n json.loads(\n json_util.dumps(s.to_mongo_dict()), parse_constant=lambda c: c\n )\n for s in view\n ]\n more = False\n if len(samples) > page_length:\n samples = samples[:page_length]\n more = page + 1\n\n results = [{\"sample\": s} for s in samples]\n for r in results:\n w, h = get_image_size(r[\"sample\"][\"filepath\"])\n r[\"width\"] = w\n r[\"height\"] = h\n\n return {\"results\": results, \"more\": more}\n\n def on_get_distributions(self, group):\n \"\"\"Gets the distributions for the current state with respect to a\n group.\n\n Args:\n group: one of \"labels\", \"tags\", or \"scalars\"\n\n Returns:\n a list of distributions\n \"\"\"\n state = fos.StateDescriptionWithDerivables.from_dict(self.state)\n if state.view is not None:\n view = state.view\n elif state.dataset is not None:\n view = state.dataset.view()\n else:\n return []\n\n return _get_distributions(view, group)\n\n\ndef _get_distributions(view, group):\n pipeline = DISTRIBUTION_PIPELINES[group]\n\n # we add a sub-pipeline for each numeric as it looks like multiple\n # buckets in a single pipeline is not supported\n if group == SCALARS:\n _numeric_distribution_pipelines(view, pipeline)\n\n result = list(view.aggregate(pipeline))\n\n if group in {LABELS, SCALARS}:\n new_result = []\n for f in result[0].values():\n new_result += f\n result = new_result\n\n if group != SCALARS:\n for idx, dist in enumerate(result):\n result[idx][\"data\"] = sorted(\n result[idx][\"data\"], key=lambda c: c[\"count\"], reverse=True\n )\n\n return sorted(result, key=lambda d: d[\"name\"])\n\n\ndef _numeric_bounds(view, 
numerics):\n bounds_pipeline = [{\"$facet\": {}}]\n for idx, (k, v) in enumerate(numerics.items()):\n bounds_pipeline[0][\"$facet\"][\"numeric-%d\" % idx] = [\n {\n \"$group\": {\n \"_id\": k,\n \"min\": {\"$min\": \"$%s\" % k},\n \"max\": {\"$max\": \"$%s\" % k},\n },\n }\n ]\n\n return list(view.aggregate(bounds_pipeline))[0] if len(numerics) else {}\n\n\ndef _numeric_distribution_pipelines(view, pipeline, buckets=50):\n numerics = view._dataset.get_field_schema(ftype=fof.IntField)\n numerics.update(view._dataset.get_field_schema(ftype=fof.FloatField))\n\n # here we query the min and max for each numeric field\n # unfortunately, it looks like this has to be a separate query\n bounds = _numeric_bounds(view, numerics)\n\n # for each numeric field, build the boundaries array with the\n # min/max results when adding the field's sub-pipeline\n for idx, (k, v) in enumerate(numerics.items()):\n sub_pipeline = \"numeric-%d\" % idx\n field_bounds = bounds[sub_pipeline][0]\n mn = field_bounds[\"min\"]\n mx = field_bounds[\"max\"]\n\n # if min and max are equal, we artifically create a boundary\n # @todo alternative approach to scalar fields with only one value\n if mn == mx:\n if mx > 0:\n mn = 0\n else:\n mx = 0\n\n step = (mx - mn) / buckets\n boundaries = [mn + step * s for s in range(0, buckets)]\n\n pipeline[0][\"$facet\"][sub_pipeline] = [\n {\n \"$bucket\": {\n \"groupBy\": \"$%s\" % k,\n \"boundaries\": boundaries,\n \"default\": \"null\",\n \"output\": {\"count\": {\"$sum\": 1}},\n }\n },\n {\n \"$group\": {\n \"_id\": k,\n \"data\": {\n \"$push\": {\n \"key\": {\n \"$cond\": [\n {\"$ne\": [\"$_id\", \"null\"]},\n {\"$add\": [\"$_id\", step / 2]},\n \"null\",\n ]\n },\n \"count\": \"$count\",\n }\n },\n }\n },\n {\n \"$project\": {\n \"name\": k,\n \"type\": v.__class__.__name__[\n : -len(\"Field\") # grab field type from the class\n ].lower(),\n \"data\": \"$data\",\n }\n },\n ]\n\n\nsocketio.on_namespace(StateController(\"/state\"))\n\n\nif __name__ == \"__main__\":\n log_path = os.path.join(\n foc.FIFTYONE_CONFIG_DIR, \"var\", \"log\", \"server.log\"\n )\n etau.ensure_basedir(log_path)\n # pylint: disable=no-member\n app.logger.addHandler(logging.FileHandler(log_path, mode=\"w\"))\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--port\", type=int, default=5151)\n args = parser.parse_args()\n\n socketio.run(app, port=args.port, debug=foc.DEV_INSTALL)\n", "path": "fiftyone/server/main.py"}], "after_files": [{"content": "\"\"\"\nFiftyOne Flask server.\n\n| Copyright 2017-2020, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nimport argparse\nimport json\nimport logging\nimport os\nimport uuid\n\nfrom bson import json_util\nfrom flask import Flask, jsonify, request, send_file\nfrom flask_cors import CORS\nfrom flask_socketio import emit, Namespace, SocketIO\n\nimport eta.core.utils as etau\n\nos.environ[\"FIFTYONE_SERVER\"] = \"1\"\nimport fiftyone.constants as foc\nimport fiftyone.core.fields as fof\nimport fiftyone.core.odm as foo\nfrom fiftyone.core.service import DatabaseService\nfrom fiftyone.core.stages import _STAGES\nimport fiftyone.core.state as fos\n\nfrom util import get_image_size\nfrom pipelines import DISTRIBUTION_PIPELINES, LABELS, SCALARS\n\n\nlogger = logging.getLogger(__name__)\n\n# connect to the existing DB service to initialize global port information\ndb = DatabaseService()\ndb.start()\n\napp = Flask(__name__)\nCORS(app)\n\napp.config[\"SECRET_KEY\"] = \"fiftyone\"\n\nsocketio = SocketIO(app, async_mode=\"eventlet\", 
cors_allowed_origins=\"*\")\n\n\ndef get_user_id():\n uid_path = os.path.join(foc.FIFTYONE_CONFIG_DIR, \"var\", \"uid\")\n\n def read():\n try:\n with open(uid_path) as f:\n return next(f).strip()\n except (IOError, StopIteration):\n return None\n\n if not read():\n os.makedirs(os.path.dirname(uid_path), exist_ok=True)\n with open(uid_path, \"w\") as f:\n f.write(str(uuid.uuid4()))\n return read()\n\n\[email protected](\"/\")\ndef get_sample_media():\n \"\"\"Gets the sample media.\n\n Returns:\n bytes\n \"\"\"\n path = request.args.get(\"path\")\n return send_file(path)\n\n\[email protected](\"/fiftyone\")\ndef get_fiftyone_info():\n return jsonify({\"version\": foc.VERSION})\n\n\[email protected](\"/stages\")\ndef get_stages():\n \"\"\"Gets ViewStage descriptions\"\"\"\n return {\n \"stages\": [\n {\"name\": stage.__name__, \"params\": stage._params()}\n for stage in _STAGES\n ]\n }\n\n\ndef _load_state(trigger_update=False):\n def decorator(func):\n def wrapper(self, *args, **kwargs):\n state = fos.StateDescriptionWithDerivables.from_dict(self.state)\n state = func(self, state, *args, **kwargs)\n self.state = state.serialize()\n emit(\n \"update\",\n self.state,\n broadcast=True,\n include_self=trigger_update,\n )\n return self.state\n\n return wrapper\n\n return decorator\n\n\nclass StateController(Namespace):\n \"\"\"State controller.\n\n Attributes:\n state: a :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n instance\n\n Args:\n **args: positional arguments for ``flask_socketio.Namespace``\n **kwargs: keyword arguments for ``flask_socketio.Namespace``\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n self.state = fos.StateDescriptionWithDerivables().serialize()\n super().__init__(*args, **kwargs)\n\n def on_connect(self):\n \"\"\"Handles connection to the server.\"\"\"\n pass\n\n def on_disconnect(self):\n \"\"\"Handles disconnection from the server.\"\"\"\n pass\n\n def on_update(self, data):\n \"\"\"Updates the state.\n\n Args:\n state_dict: a serialized\n :class:`fiftyone.core.state.StateDescription`\n \"\"\"\n self.state = fos.StateDescriptionWithDerivables.from_dict(\n data[\"data\"]\n ).serialize()\n emit(\n \"update\",\n self.state,\n broadcast=True,\n include_self=data[\"include_self\"],\n )\n\n def on_get_fiftyone_info(self):\n \"\"\"Retrieves information about the FiftyOne installation.\"\"\"\n return {\n \"version\": foc.VERSION,\n \"user_id\": get_user_id(),\n }\n\n def on_get_current_state(self, _):\n \"\"\"Gets the current state.\n\n Returns:\n a :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n \"\"\"\n return self.state\n\n @_load_state()\n def on_add_selection(self, state, _id):\n \"\"\"Adds a sample to the selected samples list.\n\n Args:\n state: the current\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n _id: the sample ID\n\n Returns:\n the updated\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n \"\"\"\n selected = set(state.selected)\n selected.add(_id)\n state.selected = list(selected)\n return state\n\n @_load_state()\n def on_remove_selection(self, state, _id):\n \"\"\"Remove a sample from the selected samples list\n\n Args:\n state: the current\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n _id: the sample ID\n\n Returns:\n the updated\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n \"\"\"\n selected = set(state.selected)\n selected.remove(_id)\n state.selected = list(selected)\n return state\n\n @_load_state()\n def on_clear_selection(self, state):\n \"\"\"Remove all 
samples from the selected samples list\n\n Args:\n state: the current\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n\n Returns:\n the updated\n :class:`fiftyone.core.state.StateDescriptionWithDerivables`\n \"\"\"\n state.selected = []\n return state\n\n def on_page(self, page, page_length=20):\n \"\"\"Gets the requested page of samples.\n\n Args:\n page: the page number\n page_length: the page length\n\n Returns:\n the list of sample dicts for the page\n \"\"\"\n state = fos.StateDescriptionWithDerivables.from_dict(self.state)\n if state.view is not None:\n view = state.view\n elif state.dataset is not None:\n view = state.dataset.view()\n else:\n return []\n\n view = view.skip((page - 1) * page_length).limit(page_length + 1)\n samples = [\n json.loads(\n json_util.dumps(s.to_mongo_dict()), parse_constant=lambda c: c\n )\n for s in view\n ]\n more = False\n if len(samples) > page_length:\n samples = samples[:page_length]\n more = page + 1\n\n results = [{\"sample\": s} for s in samples]\n for r in results:\n w, h = get_image_size(r[\"sample\"][\"filepath\"])\n r[\"width\"] = w\n r[\"height\"] = h\n\n return {\"results\": results, \"more\": more}\n\n def on_get_distributions(self, group):\n \"\"\"Gets the distributions for the current state with respect to a\n group.\n\n Args:\n group: one of \"labels\", \"tags\", or \"scalars\"\n\n Returns:\n a list of distributions\n \"\"\"\n state = fos.StateDescriptionWithDerivables.from_dict(self.state)\n if state.view is not None:\n view = state.view\n elif state.dataset is not None:\n view = state.dataset.view()\n else:\n return []\n\n return _get_distributions(view, group)\n\n\ndef _get_distributions(view, group):\n pipeline = DISTRIBUTION_PIPELINES[group]\n\n # we add a sub-pipeline for each numeric as it looks like multiple\n # buckets in a single pipeline is not supported\n if group == SCALARS:\n _numeric_distribution_pipelines(view, pipeline)\n\n result = list(view.aggregate(pipeline))\n\n if group in {LABELS, SCALARS}:\n new_result = []\n for f in result[0].values():\n new_result += f\n result = new_result\n\n if group != SCALARS:\n for idx, dist in enumerate(result):\n result[idx][\"data\"] = sorted(\n result[idx][\"data\"], key=lambda c: c[\"count\"], reverse=True\n )\n\n return sorted(result, key=lambda d: d[\"name\"])\n\n\ndef _numeric_bounds(view, numerics):\n bounds_pipeline = [{\"$facet\": {}}]\n for idx, (k, v) in enumerate(numerics.items()):\n bounds_pipeline[0][\"$facet\"][\"numeric-%d\" % idx] = [\n {\n \"$group\": {\n \"_id\": k,\n \"min\": {\"$min\": \"$%s\" % k},\n \"max\": {\"$max\": \"$%s\" % k},\n },\n }\n ]\n\n return list(view.aggregate(bounds_pipeline))[0] if len(numerics) else {}\n\n\ndef _numeric_distribution_pipelines(view, pipeline, buckets=50):\n numerics = view._dataset.get_field_schema(ftype=fof.IntField)\n numerics.update(view._dataset.get_field_schema(ftype=fof.FloatField))\n\n # here we query the min and max for each numeric field\n # unfortunately, it looks like this has to be a separate query\n bounds = _numeric_bounds(view, numerics)\n\n # for each numeric field, build the boundaries array with the\n # min/max results when adding the field's sub-pipeline\n for idx, (k, v) in enumerate(numerics.items()):\n sub_pipeline = \"numeric-%d\" % idx\n field_bounds = bounds[sub_pipeline][0]\n mn = field_bounds[\"min\"]\n mx = field_bounds[\"max\"]\n\n # if min and max are equal, we artifically create a boundary\n # @todo alternative approach to scalar fields with only one value\n if mn == mx:\n if mx > 
0:\n mn = 0\n else:\n mx = 0\n\n step = (mx - mn) / buckets\n boundaries = [mn + step * s for s in range(0, buckets)]\n\n pipeline[0][\"$facet\"][sub_pipeline] = [\n {\n \"$bucket\": {\n \"groupBy\": \"$%s\" % k,\n \"boundaries\": boundaries,\n \"default\": \"null\",\n \"output\": {\"count\": {\"$sum\": 1}},\n }\n },\n {\n \"$group\": {\n \"_id\": k,\n \"data\": {\n \"$push\": {\n \"key\": {\n \"$cond\": [\n {\"$ne\": [\"$_id\", \"null\"]},\n {\"$add\": [\"$_id\", step / 2]},\n \"null\",\n ]\n },\n \"count\": \"$count\",\n }\n },\n }\n },\n {\n \"$project\": {\n \"name\": k,\n \"type\": v.__class__.__name__[\n : -len(\"Field\") # grab field type from the class\n ].lower(),\n \"data\": \"$data\",\n }\n },\n ]\n\n\nsocketio.on_namespace(StateController(\"/state\"))\n\n\nif __name__ == \"__main__\":\n log_path = os.path.join(\n foc.FIFTYONE_CONFIG_DIR, \"var\", \"log\", \"server.log\"\n )\n etau.ensure_basedir(log_path)\n # pylint: disable=no-member\n app.logger.addHandler(logging.FileHandler(log_path, mode=\"w\"))\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--port\", type=int, default=5151)\n args = parser.parse_args()\n\n socketio.run(app, port=args.port, debug=foc.DEV_INSTALL)\n", "path": "fiftyone/server/main.py"}]}
3,940
522
gh_patches_debug_41739
rasdani/github-patches
git_diff
CiviWiki__OpenCiviWiki-1044
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- When uploading a profile image failed then user needs to go back and click on Edit Profile again to upload new image When users uploading a profile picture and this failed with the error message "Please use an image that 1280 x 960 pixels or smaller" then users need to go back to the profile and click on Edit Profile again in order to upload a new picture. Just clicking on Choose Picture doesn't do it. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `project/accounts/models.py` Content: ``` 1 from django.contrib.auth.models import AbstractUser 2 import os 3 import io 4 from django.core.files.storage import default_storage 5 from django.conf import settings 6 from django.db import models 7 from PIL import Image, ImageOps 8 from django.core.files.uploadedfile import InMemoryUploadedFile 9 10 from taggit.managers import TaggableManager 11 12 from api.models.category import Category 13 from common.utils import PathAndRename 14 15 16 class User(AbstractUser): 17 """ 18 A new custom User model for any functionality needed in the future. Extending AbstractUser 19 allows for adding new fields to the user model as needed. 20 """ 21 22 class Meta: 23 db_table = "users" 24 25 26 # Image manipulation constants 27 PROFILE_IMG_SIZE = (171, 171) 28 PROFILE_IMG_THUMB_SIZE = (40, 40) 29 WHITE_BG = (255, 255, 255) 30 31 32 class ProfileManager(models.Manager): 33 def summarize(self, profile): 34 from api.models.civi import Civi 35 36 data = { 37 "username": profile.user.username, 38 "first_name": profile.first_name, 39 "last_name": profile.last_name, 40 "about_me": profile.about_me, 41 "history": [ 42 Civi.objects.serialize(c) 43 for c in Civi.objects.filter(author_id=profile.id).order_by("-created") 44 ], 45 "profile_image": profile.profile_image_url, 46 "followers": self.followers(profile), 47 "following": self.following(profile), 48 } 49 return data 50 51 def chip_summarize(self, profile): 52 data = { 53 "username": profile.user.username, 54 "first_name": profile.first_name, 55 "last_name": profile.last_name, 56 "profile_image": profile.profile_image_url, 57 } 58 return data 59 60 def card_summarize(self, profile, request_profile): 61 # Length at which to truncate 'about me' text 62 about_me_truncate_length = 150 63 64 # If 'about me' text is longer than 150 characters... add elipsis (truncate) 65 ellipsis_if_too_long = ( 66 "" if len(profile.about_me) <= about_me_truncate_length else "..." 
67 ) 68 69 data = { 70 "id": profile.user.id, 71 "username": profile.user.username, 72 "first_name": profile.first_name, 73 "last_name": profile.last_name, 74 "about_me": profile.about_me[:about_me_truncate_length] + ellipsis_if_too_long, 75 "profile_image": profile.profile_image_url, 76 "follow_state": True 77 if profile in request_profile.following.all() 78 else False, 79 "request_profile": request_profile.first_name, 80 } 81 return data 82 83 def followers(self, profile): 84 return [self.chip_summarize(follower) for follower in profile.followers.all()] 85 86 def following(self, profile): 87 return [self.chip_summarize(following) for following in profile.following.all()] 88 89 90 profile_upload_path = PathAndRename("") 91 92 93 class Profile(models.Model): 94 user = models.ForeignKey(User, on_delete=models.CASCADE) 95 first_name = models.CharField(max_length=63, blank=False) 96 last_name = models.CharField(max_length=63, blank=False) 97 about_me = models.CharField(max_length=511, blank=True) 98 99 categories = models.ManyToManyField( 100 Category, related_name="user_categories", symmetrical=False 101 ) 102 tags = TaggableManager() 103 104 followers = models.ManyToManyField( 105 "self", related_name="follower", symmetrical=False 106 ) 107 following = models.ManyToManyField( 108 "self", related_name="followings", symmetrical=False 109 ) 110 111 is_verified = models.BooleanField(default=False) 112 full_profile = models.BooleanField(default=False) 113 114 objects = ProfileManager() 115 profile_image = models.ImageField( 116 upload_to=profile_upload_path, blank=True, null=True 117 ) 118 profile_image_thumb = models.ImageField( 119 upload_to=profile_upload_path, blank=True, null=True 120 ) 121 122 @property 123 def full_name(self): 124 """Returns the person's full name.""" 125 126 return f"{self.first_name} {self.last_name}" 127 128 @property 129 def profile_image_url(self): 130 """Return placeholder profile image if user didn't upload one""" 131 132 if self.profile_image: 133 file_exists = default_storage.exists( 134 os.path.join(settings.MEDIA_ROOT, self.profile_image.name) 135 ) 136 if file_exists: 137 return self.profile_image.url 138 139 return "/static/img/no_image_md.png" 140 141 @property 142 def profile_image_thumb_url(self): 143 """Return placeholder profile image if user didn't upload one""" 144 145 if self.profile_image_thumb: 146 file_exists = default_storage.exists( 147 os.path.join(settings.MEDIA_ROOT, self.profile_image_thumb.name) 148 ) 149 if file_exists: 150 return self.profile_image_thumb.url 151 152 return "/static/img/no_image_md.png" 153 154 def __init__(self, *args, **kwargs): 155 super(Profile, self).__init__(*args, **kwargs) 156 157 def save(self, *args, **kwargs): 158 """ Image crop/resize and thumbnail creation """ 159 160 # New Profile image -- 161 if self.profile_image: 162 self.resize_profile_image() 163 164 self.full_profile = self.is_full_profile() 165 166 super(Profile, self).save(*args, **kwargs) 167 168 def resize_profile_image(self): 169 """ 170 Resizes and crops the user uploaded image and creates a thumbnail version of it 171 """ 172 profile_image_field = self.profile_image 173 image_file = io.StringIO(profile_image_field.read()) 174 profile_image = Image.open(image_file) 175 profile_image.load() 176 177 # Resize image 178 profile_image = ImageOps.fit( 179 profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5) 180 ) 181 182 # Convert to JPG image format with white background 183 if profile_image.mode not in ("L", "RGB"): 184 white_bg_img = 
Image.new("RGB", PROFILE_IMG_SIZE, WHITE_BG) 185 white_bg_img.paste(profile_image, mask=profile_image.split()[3]) 186 profile_image = white_bg_img 187 188 # Save new cropped image 189 tmp_image_file = io.StringIO() 190 profile_image.save(tmp_image_file, "JPEG", quality=90) 191 tmp_image_file.seek(0) 192 self.profile_image = InMemoryUploadedFile( 193 tmp_image_file, 194 "ImageField", 195 self.profile_image.name, 196 "image/jpeg", 197 tmp_image_file.len, 198 None, 199 ) 200 # Make a Thumbnail Image for the new resized image 201 thumb_image = profile_image.copy() 202 thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS) 203 tmp_image_file = io.StringIO() 204 thumb_image.save(tmp_image_file, "JPEG", quality=90) 205 tmp_image_file.seek(0) 206 self.profile_image_thumb = InMemoryUploadedFile( 207 tmp_image_file, 208 "ImageField", 209 self.profile_image.name, 210 "image/jpeg", 211 tmp_image_file.len, 212 None, 213 ) 214 215 def is_full_profile(self): 216 if self.first_name and self.last_name: 217 return True 218 else: 219 return False 220 ``` Path: `project/accounts/admin.py` Content: ``` 1 from django.contrib import admin 2 3 # Register your models here. 4 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/project/accounts/admin.py b/project/accounts/admin.py --- a/project/accounts/admin.py +++ b/project/accounts/admin.py @@ -1,3 +1,6 @@ from django.contrib import admin +from .models import User + # Register your models here. +admin.site.register(User) diff --git a/project/accounts/models.py b/project/accounts/models.py --- a/project/accounts/models.py +++ b/project/accounts/models.py @@ -71,7 +71,8 @@ "username": profile.user.username, "first_name": profile.first_name, "last_name": profile.last_name, - "about_me": profile.about_me[:about_me_truncate_length] + ellipsis_if_too_long, + "about_me": profile.about_me[:about_me_truncate_length] + + ellipsis_if_too_long, "profile_image": profile.profile_image_url, "follow_state": True if profile in request_profile.following.all() @@ -155,7 +156,7 @@ super(Profile, self).__init__(*args, **kwargs) def save(self, *args, **kwargs): - """ Image crop/resize and thumbnail creation """ + """Image crop/resize and thumbnail creation""" # New Profile image -- if self.profile_image: @@ -169,11 +170,7 @@ """ Resizes and crops the user uploaded image and creates a thumbnail version of it """ - profile_image_field = self.profile_image - image_file = io.StringIO(profile_image_field.read()) - profile_image = Image.open(image_file) - profile_image.load() - + profile_image = Image.open(self.profile_image) # Resize image profile_image = ImageOps.fit( profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5) @@ -186,7 +183,7 @@ profile_image = white_bg_img # Save new cropped image - tmp_image_file = io.StringIO() + tmp_image_file = io.BytesIO() profile_image.save(tmp_image_file, "JPEG", quality=90) tmp_image_file.seek(0) self.profile_image = InMemoryUploadedFile( @@ -194,21 +191,23 @@ "ImageField", self.profile_image.name, "image/jpeg", - tmp_image_file.len, + profile_image.tell(), None, ) # Make a Thumbnail Image for the new resized image thumb_image = profile_image.copy() + thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS) - tmp_image_file = io.StringIO() - thumb_image.save(tmp_image_file, "JPEG", quality=90) - tmp_image_file.seek(0) + tmp_thumb_file = io.BytesIO() + thumb_image.save(tmp_thumb_file, "JPEG", quality=90) + tmp_thumb_file.seek(0) + self.profile_image_thumb = InMemoryUploadedFile( - tmp_image_file, + tmp_thumb_file, "ImageField", self.profile_image.name, "image/jpeg", - tmp_image_file.len, + thumb_image.tell(), None, )
{"golden_diff": "diff --git a/project/accounts/admin.py b/project/accounts/admin.py\n--- a/project/accounts/admin.py\n+++ b/project/accounts/admin.py\n@@ -1,3 +1,6 @@\n from django.contrib import admin\n+from .models import User\n+\n \n # Register your models here.\n+admin.site.register(User)\ndiff --git a/project/accounts/models.py b/project/accounts/models.py\n--- a/project/accounts/models.py\n+++ b/project/accounts/models.py\n@@ -71,7 +71,8 @@\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n- \"about_me\": profile.about_me[:about_me_truncate_length] + ellipsis_if_too_long,\n+ \"about_me\": profile.about_me[:about_me_truncate_length]\n+ + ellipsis_if_too_long,\n \"profile_image\": profile.profile_image_url,\n \"follow_state\": True\n if profile in request_profile.following.all()\n@@ -155,7 +156,7 @@\n super(Profile, self).__init__(*args, **kwargs)\n \n def save(self, *args, **kwargs):\n- \"\"\" Image crop/resize and thumbnail creation \"\"\"\n+ \"\"\"Image crop/resize and thumbnail creation\"\"\"\n \n # New Profile image --\n if self.profile_image:\n@@ -169,11 +170,7 @@\n \"\"\"\n Resizes and crops the user uploaded image and creates a thumbnail version of it\n \"\"\"\n- profile_image_field = self.profile_image\n- image_file = io.StringIO(profile_image_field.read())\n- profile_image = Image.open(image_file)\n- profile_image.load()\n-\n+ profile_image = Image.open(self.profile_image)\n # Resize image\n profile_image = ImageOps.fit(\n profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5)\n@@ -186,7 +183,7 @@\n profile_image = white_bg_img\n \n # Save new cropped image\n- tmp_image_file = io.StringIO()\n+ tmp_image_file = io.BytesIO()\n profile_image.save(tmp_image_file, \"JPEG\", quality=90)\n tmp_image_file.seek(0)\n self.profile_image = InMemoryUploadedFile(\n@@ -194,21 +191,23 @@\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n- tmp_image_file.len,\n+ profile_image.tell(),\n None,\n )\n # Make a Thumbnail Image for the new resized image\n thumb_image = profile_image.copy()\n+\n thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS)\n- tmp_image_file = io.StringIO()\n- thumb_image.save(tmp_image_file, \"JPEG\", quality=90)\n- tmp_image_file.seek(0)\n+ tmp_thumb_file = io.BytesIO()\n+ thumb_image.save(tmp_thumb_file, \"JPEG\", quality=90)\n+ tmp_thumb_file.seek(0)\n+\n self.profile_image_thumb = InMemoryUploadedFile(\n- tmp_image_file,\n+ tmp_thumb_file,\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n- tmp_image_file.len,\n+ thumb_image.tell(),\n None,\n )\n", "issue": "When uploading a profile image failed then user needs to go back and click on Edit Profile again to upload new image\nWhen users uploading a profile picture and this failed with the error message \"Please use an image that 1280 x 960 pixels or smaller\" then users need to go back to the profile and click on Edit Profile again in order to upload a new picture. 
Just clicking on Choose Picture doesn't do it.\n", "before_files": [{"content": "from django.contrib.auth.models import AbstractUser\nimport os\nimport io\nfrom django.core.files.storage import default_storage\nfrom django.conf import settings\nfrom django.db import models\nfrom PIL import Image, ImageOps\nfrom django.core.files.uploadedfile import InMemoryUploadedFile\n\nfrom taggit.managers import TaggableManager\n\nfrom api.models.category import Category\nfrom common.utils import PathAndRename\n\n\nclass User(AbstractUser):\n \"\"\"\n A new custom User model for any functionality needed in the future. Extending AbstractUser\n allows for adding new fields to the user model as needed.\n \"\"\"\n\n class Meta:\n db_table = \"users\"\n\n\n# Image manipulation constants\nPROFILE_IMG_SIZE = (171, 171)\nPROFILE_IMG_THUMB_SIZE = (40, 40)\nWHITE_BG = (255, 255, 255)\n\n\nclass ProfileManager(models.Manager):\n def summarize(self, profile):\n from api.models.civi import Civi\n\n data = {\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"about_me\": profile.about_me,\n \"history\": [\n Civi.objects.serialize(c)\n for c in Civi.objects.filter(author_id=profile.id).order_by(\"-created\")\n ],\n \"profile_image\": profile.profile_image_url,\n \"followers\": self.followers(profile),\n \"following\": self.following(profile),\n }\n return data\n\n def chip_summarize(self, profile):\n data = {\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"profile_image\": profile.profile_image_url,\n }\n return data\n\n def card_summarize(self, profile, request_profile):\n # Length at which to truncate 'about me' text\n about_me_truncate_length = 150\n\n # If 'about me' text is longer than 150 characters... 
add elipsis (truncate)\n ellipsis_if_too_long = (\n \"\" if len(profile.about_me) <= about_me_truncate_length else \"...\"\n )\n\n data = {\n \"id\": profile.user.id,\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"about_me\": profile.about_me[:about_me_truncate_length] + ellipsis_if_too_long,\n \"profile_image\": profile.profile_image_url,\n \"follow_state\": True\n if profile in request_profile.following.all()\n else False,\n \"request_profile\": request_profile.first_name,\n }\n return data\n\n def followers(self, profile):\n return [self.chip_summarize(follower) for follower in profile.followers.all()]\n\n def following(self, profile):\n return [self.chip_summarize(following) for following in profile.following.all()]\n\n\nprofile_upload_path = PathAndRename(\"\")\n\n\nclass Profile(models.Model):\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n first_name = models.CharField(max_length=63, blank=False)\n last_name = models.CharField(max_length=63, blank=False)\n about_me = models.CharField(max_length=511, blank=True)\n\n categories = models.ManyToManyField(\n Category, related_name=\"user_categories\", symmetrical=False\n )\n tags = TaggableManager()\n\n followers = models.ManyToManyField(\n \"self\", related_name=\"follower\", symmetrical=False\n )\n following = models.ManyToManyField(\n \"self\", related_name=\"followings\", symmetrical=False\n )\n\n is_verified = models.BooleanField(default=False)\n full_profile = models.BooleanField(default=False)\n\n objects = ProfileManager()\n profile_image = models.ImageField(\n upload_to=profile_upload_path, blank=True, null=True\n )\n profile_image_thumb = models.ImageField(\n upload_to=profile_upload_path, blank=True, null=True\n )\n\n @property\n def full_name(self):\n \"\"\"Returns the person's full name.\"\"\"\n\n return f\"{self.first_name} {self.last_name}\"\n\n @property\n def profile_image_url(self):\n \"\"\"Return placeholder profile image if user didn't upload one\"\"\"\n\n if self.profile_image:\n file_exists = default_storage.exists(\n os.path.join(settings.MEDIA_ROOT, self.profile_image.name)\n )\n if file_exists:\n return self.profile_image.url\n\n return \"/static/img/no_image_md.png\"\n\n @property\n def profile_image_thumb_url(self):\n \"\"\"Return placeholder profile image if user didn't upload one\"\"\"\n\n if self.profile_image_thumb:\n file_exists = default_storage.exists(\n os.path.join(settings.MEDIA_ROOT, self.profile_image_thumb.name)\n )\n if file_exists:\n return self.profile_image_thumb.url\n\n return \"/static/img/no_image_md.png\"\n\n def __init__(self, *args, **kwargs):\n super(Profile, self).__init__(*args, **kwargs)\n\n def save(self, *args, **kwargs):\n \"\"\" Image crop/resize and thumbnail creation \"\"\"\n\n # New Profile image --\n if self.profile_image:\n self.resize_profile_image()\n\n self.full_profile = self.is_full_profile()\n\n super(Profile, self).save(*args, **kwargs)\n\n def resize_profile_image(self):\n \"\"\"\n Resizes and crops the user uploaded image and creates a thumbnail version of it\n \"\"\"\n profile_image_field = self.profile_image\n image_file = io.StringIO(profile_image_field.read())\n profile_image = Image.open(image_file)\n profile_image.load()\n\n # Resize image\n profile_image = ImageOps.fit(\n profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5)\n )\n\n # Convert to JPG image format with white background\n if profile_image.mode not in (\"L\", \"RGB\"):\n white_bg_img = 
Image.new(\"RGB\", PROFILE_IMG_SIZE, WHITE_BG)\n white_bg_img.paste(profile_image, mask=profile_image.split()[3])\n profile_image = white_bg_img\n\n # Save new cropped image\n tmp_image_file = io.StringIO()\n profile_image.save(tmp_image_file, \"JPEG\", quality=90)\n tmp_image_file.seek(0)\n self.profile_image = InMemoryUploadedFile(\n tmp_image_file,\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n tmp_image_file.len,\n None,\n )\n # Make a Thumbnail Image for the new resized image\n thumb_image = profile_image.copy()\n thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS)\n tmp_image_file = io.StringIO()\n thumb_image.save(tmp_image_file, \"JPEG\", quality=90)\n tmp_image_file.seek(0)\n self.profile_image_thumb = InMemoryUploadedFile(\n tmp_image_file,\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n tmp_image_file.len,\n None,\n )\n\n def is_full_profile(self):\n if self.first_name and self.last_name:\n return True\n else:\n return False\n", "path": "project/accounts/models.py"}, {"content": "from django.contrib import admin\n\n# Register your models here.\n", "path": "project/accounts/admin.py"}], "after_files": [{"content": "from django.contrib.auth.models import AbstractUser\nimport os\nimport io\nfrom django.core.files.storage import default_storage\nfrom django.conf import settings\nfrom django.db import models\nfrom PIL import Image, ImageOps\nfrom django.core.files.uploadedfile import InMemoryUploadedFile\n\nfrom taggit.managers import TaggableManager\n\nfrom api.models.category import Category\nfrom common.utils import PathAndRename\n\n\nclass User(AbstractUser):\n \"\"\"\n A new custom User model for any functionality needed in the future. Extending AbstractUser\n allows for adding new fields to the user model as needed.\n \"\"\"\n\n class Meta:\n db_table = \"users\"\n\n\n# Image manipulation constants\nPROFILE_IMG_SIZE = (171, 171)\nPROFILE_IMG_THUMB_SIZE = (40, 40)\nWHITE_BG = (255, 255, 255)\n\n\nclass ProfileManager(models.Manager):\n def summarize(self, profile):\n from api.models.civi import Civi\n\n data = {\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"about_me\": profile.about_me,\n \"history\": [\n Civi.objects.serialize(c)\n for c in Civi.objects.filter(author_id=profile.id).order_by(\"-created\")\n ],\n \"profile_image\": profile.profile_image_url,\n \"followers\": self.followers(profile),\n \"following\": self.following(profile),\n }\n return data\n\n def chip_summarize(self, profile):\n data = {\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"profile_image\": profile.profile_image_url,\n }\n return data\n\n def card_summarize(self, profile, request_profile):\n # Length at which to truncate 'about me' text\n about_me_truncate_length = 150\n\n # If 'about me' text is longer than 150 characters... 
add elipsis (truncate)\n ellipsis_if_too_long = (\n \"\" if len(profile.about_me) <= about_me_truncate_length else \"...\"\n )\n\n data = {\n \"id\": profile.user.id,\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"about_me\": profile.about_me[:about_me_truncate_length]\n + ellipsis_if_too_long,\n \"profile_image\": profile.profile_image_url,\n \"follow_state\": True\n if profile in request_profile.following.all()\n else False,\n \"request_profile\": request_profile.first_name,\n }\n return data\n\n def followers(self, profile):\n return [self.chip_summarize(follower) for follower in profile.followers.all()]\n\n def following(self, profile):\n return [self.chip_summarize(following) for following in profile.following.all()]\n\n\nprofile_upload_path = PathAndRename(\"\")\n\n\nclass Profile(models.Model):\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n first_name = models.CharField(max_length=63, blank=False)\n last_name = models.CharField(max_length=63, blank=False)\n about_me = models.CharField(max_length=511, blank=True)\n\n categories = models.ManyToManyField(\n Category, related_name=\"user_categories\", symmetrical=False\n )\n tags = TaggableManager()\n\n followers = models.ManyToManyField(\n \"self\", related_name=\"follower\", symmetrical=False\n )\n following = models.ManyToManyField(\n \"self\", related_name=\"followings\", symmetrical=False\n )\n\n is_verified = models.BooleanField(default=False)\n full_profile = models.BooleanField(default=False)\n\n objects = ProfileManager()\n profile_image = models.ImageField(\n upload_to=profile_upload_path, blank=True, null=True\n )\n profile_image_thumb = models.ImageField(\n upload_to=profile_upload_path, blank=True, null=True\n )\n\n @property\n def full_name(self):\n \"\"\"Returns the person's full name.\"\"\"\n\n return f\"{self.first_name} {self.last_name}\"\n\n @property\n def profile_image_url(self):\n \"\"\"Return placeholder profile image if user didn't upload one\"\"\"\n\n if self.profile_image:\n file_exists = default_storage.exists(\n os.path.join(settings.MEDIA_ROOT, self.profile_image.name)\n )\n if file_exists:\n return self.profile_image.url\n\n return \"/static/img/no_image_md.png\"\n\n @property\n def profile_image_thumb_url(self):\n \"\"\"Return placeholder profile image if user didn't upload one\"\"\"\n\n if self.profile_image_thumb:\n file_exists = default_storage.exists(\n os.path.join(settings.MEDIA_ROOT, self.profile_image_thumb.name)\n )\n if file_exists:\n return self.profile_image_thumb.url\n\n return \"/static/img/no_image_md.png\"\n\n def __init__(self, *args, **kwargs):\n super(Profile, self).__init__(*args, **kwargs)\n\n def save(self, *args, **kwargs):\n \"\"\"Image crop/resize and thumbnail creation\"\"\"\n\n # New Profile image --\n if self.profile_image:\n self.resize_profile_image()\n\n self.full_profile = self.is_full_profile()\n\n super(Profile, self).save(*args, **kwargs)\n\n def resize_profile_image(self):\n \"\"\"\n Resizes and crops the user uploaded image and creates a thumbnail version of it\n \"\"\"\n profile_image = Image.open(self.profile_image)\n # Resize image\n profile_image = ImageOps.fit(\n profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5)\n )\n\n # Convert to JPG image format with white background\n if profile_image.mode not in (\"L\", \"RGB\"):\n white_bg_img = Image.new(\"RGB\", PROFILE_IMG_SIZE, WHITE_BG)\n white_bg_img.paste(profile_image, mask=profile_image.split()[3])\n 
profile_image = white_bg_img\n\n # Save new cropped image\n tmp_image_file = io.BytesIO()\n profile_image.save(tmp_image_file, \"JPEG\", quality=90)\n tmp_image_file.seek(0)\n self.profile_image = InMemoryUploadedFile(\n tmp_image_file,\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n profile_image.tell(),\n None,\n )\n # Make a Thumbnail Image for the new resized image\n thumb_image = profile_image.copy()\n\n thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS)\n tmp_thumb_file = io.BytesIO()\n thumb_image.save(tmp_thumb_file, \"JPEG\", quality=90)\n tmp_thumb_file.seek(0)\n\n self.profile_image_thumb = InMemoryUploadedFile(\n tmp_thumb_file,\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n thumb_image.tell(),\n None,\n )\n\n def is_full_profile(self):\n if self.first_name and self.last_name:\n return True\n else:\n return False\n", "path": "project/accounts/models.py"}, {"content": "from django.contrib import admin\nfrom .models import User\n\n\n# Register your models here.\nadmin.site.register(User)\n", "path": "project/accounts/admin.py"}]}
2478
712
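For reference, the core of the accepted patch in the record above is re-encoding the PIL image into a binary buffer (`io.BytesIO`) instead of a text buffer (`io.StringIO`). A minimal standalone sketch of that pattern follows; the function name, size constant, and return shape are illustrative and not taken from the CiviWiki repository:

```python
import io

from PIL import Image, ImageOps


def resize_to_jpeg_bytes(uploaded_file, size=(171, 171)):
    """Fit an image to `size`, flatten any alpha onto white, return JPEG bytes."""
    img = Image.open(uploaded_file)              # PIL reads directly from a path or file-like field
    img = ImageOps.fit(img, size, Image.LANCZOS, centering=(0.5, 0.5))
    if img.mode not in ("L", "RGB"):
        img = img.convert("RGBA")                # normalise so the alpha band is well defined
        white_bg = Image.new("RGB", size, (255, 255, 255))
        white_bg.paste(img, mask=img.split()[-1])  # alpha band as the paste mask
        img = white_bg
    buf = io.BytesIO()                           # binary buffer; StringIO cannot hold JPEG bytes
    img.save(buf, "JPEG", quality=90)
    buf.seek(0)
    return buf, buf.getbuffer().nbytes
```

In a Django model the returned buffer and byte count could then be wrapped roughly as the patched `save()` does, e.g. `InMemoryUploadedFile(buf, "ImageField", name, "image/jpeg", nbytes, None)`.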
gh_patches_debug_35944
rasdani/github-patches
git_diff
pytorch__text-146
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Escape csv header lines I haven't been able to see how to skip first csv line in case of loading from a file with header. I could of course preprocess the file, but it'd be nice if there was an option to TabularDataset to tell it to skip the first line. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `torchtext/data/dataset.py` Content: ``` 1 import io 2 import os 3 import zipfile 4 import tarfile 5 6 import torch.utils.data 7 from six.moves import urllib 8 9 from .example import Example 10 11 12 class Dataset(torch.utils.data.Dataset): 13 """Defines a dataset composed of Examples along with its Fields. 14 15 Attributes: 16 sort_key (callable): A key to use for sorting dataset examples for batching 17 together examples with similar lengths to minimize padding. 18 examples (list(Example)): The examples in this dataset. 19 fields: A dictionary containing the name of each column together with 20 its corresponding Field object. Two columns with the same Field 21 object will share a vocabulary. 22 fields (dict[str, Field]): Contains the name of each column or field, together 23 with the corresponding Field object. Two fields with the same Field object 24 will have a shared vocabulary. 25 """ 26 sort_key = None 27 28 def __init__(self, examples, fields, filter_pred=None): 29 """Create a dataset from a list of Examples and Fields. 30 31 Arguments: 32 examples: List of Examples. 33 fields (List(tuple(str, Field))): The Fields to use in this tuple. The 34 string is a field name, and the Field is the associated field. 35 filter_pred (callable or None): Use only examples for which 36 filter_pred(example) is True, or use all examples if None. 37 Default is None. 38 """ 39 if filter_pred is not None: 40 examples = list(filter(filter_pred, examples)) 41 self.examples = examples 42 self.fields = dict(fields) 43 44 @classmethod 45 def splits(cls, path, train=None, validation=None, test=None, **kwargs): 46 """Create Dataset objects for multiple splits of a dataset. 47 48 Arguments: 49 path (str): Common prefix of the splits' file paths. 50 train (str): Suffix to add to path for the train set, or None for no 51 train set. Default is None. 52 validation (str): Suffix to add to path for the validation set, or None 53 for no validation set. Default is None. 54 test (str): Suffix to add to path for the test set, or None for no test 55 set. Default is None. 56 Remaining keyword arguments: Passed to the constructor of the 57 Dataset (sub)class being used. 58 59 Returns: 60 split_datasets (tuple(Dataset)): Datasets for train, validation, and 61 test splits in that order, if provided. 
62 """ 63 train_data = None if train is None else cls(path + train, **kwargs) 64 val_data = None if validation is None else cls(path + validation, 65 **kwargs) 66 test_data = None if test is None else cls(path + test, **kwargs) 67 return tuple(d for d in (train_data, val_data, test_data) 68 if d is not None) 69 70 def __getitem__(self, i): 71 return self.examples[i] 72 73 def __len__(self): 74 try: 75 return len(self.examples) 76 except TypeError: 77 return 2**32 78 79 def __iter__(self): 80 for x in self.examples: 81 yield x 82 83 def __getattr__(self, attr): 84 if attr in self.fields: 85 for x in self.examples: 86 yield getattr(x, attr) 87 88 @classmethod 89 def download(cls, root, check=None): 90 """Download and unzip an online archive (.zip, .gz, or .tgz). 91 92 Arguments: 93 root (str): Folder to download data to. 94 check (str or None): Folder whose existence indicates 95 that the dataset has already been downloaded, or 96 None to check the existence of root. 97 98 Returns: 99 dataset_path (str): Path to extracted dataset. 100 """ 101 path = os.path.join(root, cls.name) 102 check = path if check is None else check 103 if not os.path.isdir(check): 104 for url in cls.urls: 105 filename = os.path.basename(url) 106 zpath = os.path.join(path, filename) 107 if not os.path.isfile(zpath): 108 if not os.path.exists(os.path.dirname(zpath)): 109 os.makedirs(os.path.dirname(zpath)) 110 print('downloading {}'.format(filename)) 111 urllib.request.urlretrieve(url, zpath) 112 ext = os.path.splitext(filename)[-1] 113 if ext == '.zip': 114 with zipfile.ZipFile(zpath, 'r') as zfile: 115 print('extracting') 116 zfile.extractall(path) 117 elif ext in ['.gz', '.tgz']: 118 with tarfile.open(zpath, 'r:gz') as tar: 119 dirs = [member for member in tar.getmembers()] 120 tar.extractall(path=path, members=dirs) 121 return os.path.join(path, cls.dirname) 122 123 124 class TabularDataset(Dataset): 125 """Defines a Dataset of columns stored in CSV, TSV, or JSON format.""" 126 127 def __init__(self, path, format, fields, **kwargs): 128 """Create a TabularDataset given a path, file format, and field list. 129 130 Arguments: 131 path (str): Path to the data file. 132 format (str): The format of the data file. One of "CSV", "TSV", or 133 "JSON" (case-insensitive). 134 fields (list(tuple(str, Field)) or dict[str, (name, Field)]: For CSV and 135 TSV formats, list of tuples of (name, field). The list should be in 136 the same order as the columns in the CSV or TSV file, while tuples of 137 (name, None) represent columns that will be ignored. For JSON format, 138 dictionary whose keys are the JSON keys and whose values are tuples of 139 (name, field). This allows the user to rename columns from their JSON key 140 names and also enables selecting a subset of columns to load 141 (since JSON keys not present in the input dictionary are ignored). 
142 """ 143 make_example = { 144 'json': Example.fromJSON, 'dict': Example.fromdict, 145 'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()] 146 147 with io.open(os.path.expanduser(path), encoding="utf8") as f: 148 examples = [make_example(line, fields) for line in f] 149 150 if make_example in (Example.fromdict, Example.fromJSON): 151 fields, field_dict = [], fields 152 for field in field_dict.values(): 153 if isinstance(field, list): 154 fields.extend(field) 155 else: 156 fields.append(field) 157 158 super(TabularDataset, self).__init__(examples, fields, **kwargs) 159 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/torchtext/data/dataset.py b/torchtext/data/dataset.py --- a/torchtext/data/dataset.py +++ b/torchtext/data/dataset.py @@ -124,14 +124,14 @@ class TabularDataset(Dataset): """Defines a Dataset of columns stored in CSV, TSV, or JSON format.""" - def __init__(self, path, format, fields, **kwargs): + def __init__(self, path, format, fields, skip_header=False, **kwargs): """Create a TabularDataset given a path, file format, and field list. Arguments: path (str): Path to the data file. format (str): The format of the data file. One of "CSV", "TSV", or "JSON" (case-insensitive). - fields (list(tuple(str, Field)) or dict[str, (name, Field)]: For CSV and + fields (list(tuple(str, Field)) or dict[str: tuple(str, Field)]: For CSV and TSV formats, list of tuples of (name, field). The list should be in the same order as the columns in the CSV or TSV file, while tuples of (name, None) represent columns that will be ignored. For JSON format, @@ -139,12 +139,15 @@ (name, field). This allows the user to rename columns from their JSON key names and also enables selecting a subset of columns to load (since JSON keys not present in the input dictionary are ignored). + skip_header (bool): Whether to skip the first line of the input file. """ make_example = { 'json': Example.fromJSON, 'dict': Example.fromdict, 'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()] with io.open(os.path.expanduser(path), encoding="utf8") as f: + if skip_header: + next(f) examples = [make_example(line, fields) for line in f] if make_example in (Example.fromdict, Example.fromJSON):
{"golden_diff": "diff --git a/torchtext/data/dataset.py b/torchtext/data/dataset.py\n--- a/torchtext/data/dataset.py\n+++ b/torchtext/data/dataset.py\n@@ -124,14 +124,14 @@\n class TabularDataset(Dataset):\n \"\"\"Defines a Dataset of columns stored in CSV, TSV, or JSON format.\"\"\"\n \n- def __init__(self, path, format, fields, **kwargs):\n+ def __init__(self, path, format, fields, skip_header=False, **kwargs):\n \"\"\"Create a TabularDataset given a path, file format, and field list.\n \n Arguments:\n path (str): Path to the data file.\n format (str): The format of the data file. One of \"CSV\", \"TSV\", or\n \"JSON\" (case-insensitive).\n- fields (list(tuple(str, Field)) or dict[str, (name, Field)]: For CSV and\n+ fields (list(tuple(str, Field)) or dict[str: tuple(str, Field)]: For CSV and\n TSV formats, list of tuples of (name, field). The list should be in\n the same order as the columns in the CSV or TSV file, while tuples of\n (name, None) represent columns that will be ignored. For JSON format,\n@@ -139,12 +139,15 @@\n (name, field). This allows the user to rename columns from their JSON key\n names and also enables selecting a subset of columns to load\n (since JSON keys not present in the input dictionary are ignored).\n+ skip_header (bool): Whether to skip the first line of the input file.\n \"\"\"\n make_example = {\n 'json': Example.fromJSON, 'dict': Example.fromdict,\n 'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()]\n \n with io.open(os.path.expanduser(path), encoding=\"utf8\") as f:\n+ if skip_header:\n+ next(f)\n examples = [make_example(line, fields) for line in f]\n \n if make_example in (Example.fromdict, Example.fromJSON):\n", "issue": "Escape csv header lines\nI haven't been able to see how to skip first csv line in case of loading from a file with header. I could of course preprocess the file, but it'd be nice if there was an option to TabularDataset to tell it to skip the first line.\n", "before_files": [{"content": "import io\nimport os\nimport zipfile\nimport tarfile\n\nimport torch.utils.data\nfrom six.moves import urllib\n\nfrom .example import Example\n\n\nclass Dataset(torch.utils.data.Dataset):\n \"\"\"Defines a dataset composed of Examples along with its Fields.\n\n Attributes:\n sort_key (callable): A key to use for sorting dataset examples for batching\n together examples with similar lengths to minimize padding.\n examples (list(Example)): The examples in this dataset.\n fields: A dictionary containing the name of each column together with\n its corresponding Field object. Two columns with the same Field\n object will share a vocabulary.\n fields (dict[str, Field]): Contains the name of each column or field, together\n with the corresponding Field object. Two fields with the same Field object\n will have a shared vocabulary.\n \"\"\"\n sort_key = None\n\n def __init__(self, examples, fields, filter_pred=None):\n \"\"\"Create a dataset from a list of Examples and Fields.\n\n Arguments:\n examples: List of Examples.\n fields (List(tuple(str, Field))): The Fields to use in this tuple. 
The\n string is a field name, and the Field is the associated field.\n filter_pred (callable or None): Use only examples for which\n filter_pred(example) is True, or use all examples if None.\n Default is None.\n \"\"\"\n if filter_pred is not None:\n examples = list(filter(filter_pred, examples))\n self.examples = examples\n self.fields = dict(fields)\n\n @classmethod\n def splits(cls, path, train=None, validation=None, test=None, **kwargs):\n \"\"\"Create Dataset objects for multiple splits of a dataset.\n\n Arguments:\n path (str): Common prefix of the splits' file paths.\n train (str): Suffix to add to path for the train set, or None for no\n train set. Default is None.\n validation (str): Suffix to add to path for the validation set, or None\n for no validation set. Default is None.\n test (str): Suffix to add to path for the test set, or None for no test\n set. Default is None.\n Remaining keyword arguments: Passed to the constructor of the\n Dataset (sub)class being used.\n\n Returns:\n split_datasets (tuple(Dataset)): Datasets for train, validation, and\n test splits in that order, if provided.\n \"\"\"\n train_data = None if train is None else cls(path + train, **kwargs)\n val_data = None if validation is None else cls(path + validation,\n **kwargs)\n test_data = None if test is None else cls(path + test, **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n def __getitem__(self, i):\n return self.examples[i]\n\n def __len__(self):\n try:\n return len(self.examples)\n except TypeError:\n return 2**32\n\n def __iter__(self):\n for x in self.examples:\n yield x\n\n def __getattr__(self, attr):\n if attr in self.fields:\n for x in self.examples:\n yield getattr(x, attr)\n\n @classmethod\n def download(cls, root, check=None):\n \"\"\"Download and unzip an online archive (.zip, .gz, or .tgz).\n\n Arguments:\n root (str): Folder to download data to.\n check (str or None): Folder whose existence indicates\n that the dataset has already been downloaded, or\n None to check the existence of root.\n\n Returns:\n dataset_path (str): Path to extracted dataset.\n \"\"\"\n path = os.path.join(root, cls.name)\n check = path if check is None else check\n if not os.path.isdir(check):\n for url in cls.urls:\n filename = os.path.basename(url)\n zpath = os.path.join(path, filename)\n if not os.path.isfile(zpath):\n if not os.path.exists(os.path.dirname(zpath)):\n os.makedirs(os.path.dirname(zpath))\n print('downloading {}'.format(filename))\n urllib.request.urlretrieve(url, zpath)\n ext = os.path.splitext(filename)[-1]\n if ext == '.zip':\n with zipfile.ZipFile(zpath, 'r') as zfile:\n print('extracting')\n zfile.extractall(path)\n elif ext in ['.gz', '.tgz']:\n with tarfile.open(zpath, 'r:gz') as tar:\n dirs = [member for member in tar.getmembers()]\n tar.extractall(path=path, members=dirs)\n return os.path.join(path, cls.dirname)\n\n\nclass TabularDataset(Dataset):\n \"\"\"Defines a Dataset of columns stored in CSV, TSV, or JSON format.\"\"\"\n\n def __init__(self, path, format, fields, **kwargs):\n \"\"\"Create a TabularDataset given a path, file format, and field list.\n\n Arguments:\n path (str): Path to the data file.\n format (str): The format of the data file. One of \"CSV\", \"TSV\", or\n \"JSON\" (case-insensitive).\n fields (list(tuple(str, Field)) or dict[str, (name, Field)]: For CSV and\n TSV formats, list of tuples of (name, field). 
The list should be in\n the same order as the columns in the CSV or TSV file, while tuples of\n (name, None) represent columns that will be ignored. For JSON format,\n dictionary whose keys are the JSON keys and whose values are tuples of\n (name, field). This allows the user to rename columns from their JSON key\n names and also enables selecting a subset of columns to load\n (since JSON keys not present in the input dictionary are ignored).\n \"\"\"\n make_example = {\n 'json': Example.fromJSON, 'dict': Example.fromdict,\n 'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()]\n\n with io.open(os.path.expanduser(path), encoding=\"utf8\") as f:\n examples = [make_example(line, fields) for line in f]\n\n if make_example in (Example.fromdict, Example.fromJSON):\n fields, field_dict = [], fields\n for field in field_dict.values():\n if isinstance(field, list):\n fields.extend(field)\n else:\n fields.append(field)\n\n super(TabularDataset, self).__init__(examples, fields, **kwargs)\n", "path": "torchtext/data/dataset.py"}], "after_files": [{"content": "import io\nimport os\nimport zipfile\nimport tarfile\n\nimport torch.utils.data\nfrom six.moves import urllib\n\nfrom .example import Example\n\n\nclass Dataset(torch.utils.data.Dataset):\n \"\"\"Defines a dataset composed of Examples along with its Fields.\n\n Attributes:\n sort_key (callable): A key to use for sorting dataset examples for batching\n together examples with similar lengths to minimize padding.\n examples (list(Example)): The examples in this dataset.\n fields: A dictionary containing the name of each column together with\n its corresponding Field object. Two columns with the same Field\n object will share a vocabulary.\n fields (dict[str, Field]): Contains the name of each column or field, together\n with the corresponding Field object. Two fields with the same Field object\n will have a shared vocabulary.\n \"\"\"\n sort_key = None\n\n def __init__(self, examples, fields, filter_pred=None):\n \"\"\"Create a dataset from a list of Examples and Fields.\n\n Arguments:\n examples: List of Examples.\n fields (List(tuple(str, Field))): The Fields to use in this tuple. The\n string is a field name, and the Field is the associated field.\n filter_pred (callable or None): Use only examples for which\n filter_pred(example) is True, or use all examples if None.\n Default is None.\n \"\"\"\n if filter_pred is not None:\n examples = list(filter(filter_pred, examples))\n self.examples = examples\n self.fields = dict(fields)\n\n @classmethod\n def splits(cls, path, train=None, validation=None, test=None, **kwargs):\n \"\"\"Create Dataset objects for multiple splits of a dataset.\n\n Arguments:\n path (str): Common prefix of the splits' file paths.\n train (str): Suffix to add to path for the train set, or None for no\n train set. Default is None.\n validation (str): Suffix to add to path for the validation set, or None\n for no validation set. Default is None.\n test (str): Suffix to add to path for the test set, or None for no test\n set. 
Default is None.\n Remaining keyword arguments: Passed to the constructor of the\n Dataset (sub)class being used.\n\n Returns:\n split_datasets (tuple(Dataset)): Datasets for train, validation, and\n test splits in that order, if provided.\n \"\"\"\n train_data = None if train is None else cls(path + train, **kwargs)\n val_data = None if validation is None else cls(path + validation,\n **kwargs)\n test_data = None if test is None else cls(path + test, **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n def __getitem__(self, i):\n return self.examples[i]\n\n def __len__(self):\n try:\n return len(self.examples)\n except TypeError:\n return 2**32\n\n def __iter__(self):\n for x in self.examples:\n yield x\n\n def __getattr__(self, attr):\n if attr in self.fields:\n for x in self.examples:\n yield getattr(x, attr)\n\n @classmethod\n def download(cls, root, check=None):\n \"\"\"Download and unzip an online archive (.zip, .gz, or .tgz).\n\n Arguments:\n root (str): Folder to download data to.\n check (str or None): Folder whose existence indicates\n that the dataset has already been downloaded, or\n None to check the existence of root.\n\n Returns:\n dataset_path (str): Path to extracted dataset.\n \"\"\"\n path = os.path.join(root, cls.name)\n check = path if check is None else check\n if not os.path.isdir(check):\n for url in cls.urls:\n filename = os.path.basename(url)\n zpath = os.path.join(path, filename)\n if not os.path.isfile(zpath):\n if not os.path.exists(os.path.dirname(zpath)):\n os.makedirs(os.path.dirname(zpath))\n print('downloading {}'.format(filename))\n urllib.request.urlretrieve(url, zpath)\n ext = os.path.splitext(filename)[-1]\n if ext == '.zip':\n with zipfile.ZipFile(zpath, 'r') as zfile:\n print('extracting')\n zfile.extractall(path)\n elif ext in ['.gz', '.tgz']:\n with tarfile.open(zpath, 'r:gz') as tar:\n dirs = [member for member in tar.getmembers()]\n tar.extractall(path=path, members=dirs)\n return os.path.join(path, cls.dirname)\n\n\nclass TabularDataset(Dataset):\n \"\"\"Defines a Dataset of columns stored in CSV, TSV, or JSON format.\"\"\"\n\n def __init__(self, path, format, fields, skip_header=False, **kwargs):\n \"\"\"Create a TabularDataset given a path, file format, and field list.\n\n Arguments:\n path (str): Path to the data file.\n format (str): The format of the data file. One of \"CSV\", \"TSV\", or\n \"JSON\" (case-insensitive).\n fields (list(tuple(str, Field)) or dict[str: tuple(str, Field)]: For CSV and\n TSV formats, list of tuples of (name, field). The list should be in\n the same order as the columns in the CSV or TSV file, while tuples of\n (name, None) represent columns that will be ignored. For JSON format,\n dictionary whose keys are the JSON keys and whose values are tuples of\n (name, field). 
This allows the user to rename columns from their JSON key\n names and also enables selecting a subset of columns to load\n (since JSON keys not present in the input dictionary are ignored).\n skip_header (bool): Whether to skip the first line of the input file.\n \"\"\"\n make_example = {\n 'json': Example.fromJSON, 'dict': Example.fromdict,\n 'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()]\n\n with io.open(os.path.expanduser(path), encoding=\"utf8\") as f:\n if skip_header:\n next(f)\n examples = [make_example(line, fields) for line in f]\n\n if make_example in (Example.fromdict, Example.fromJSON):\n fields, field_dict = [], fields\n for field in field_dict.values():\n if isinstance(field, list):\n fields.extend(field)\n else:\n fields.append(field)\n\n super(TabularDataset, self).__init__(examples, fields, **kwargs)\n", "path": "torchtext/data/dataset.py"}]}
2083
470
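The accepted patch in the record above amounts to consuming one line of the input file before any examples are built. A standalone sketch of the same header-skipping pattern, independent of torchtext (the function name and delimiter default are illustrative):

```python
import csv
import io


def iter_rows(path, skip_header=False, delimiter=","):
    """Yield parsed rows from a delimited text file, optionally dropping the header line."""
    with io.open(path, encoding="utf8") as f:
        if skip_header:
            next(f, None)  # consume the header; the None default keeps empty files from raising
        for row in csv.reader(f, delimiter=delimiter):
            yield row


# Hypothetical usage:
# rows = list(iter_rows("train.csv", skip_header=True))
```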
gh_patches_debug_39382
rasdani/github-patches
git_diff
scikit-hep__pyhf-1208
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Participate in iminuit v2.0 beta? Dear pyhf team, I am about to finish a major rewrite of iminuit, version 2.0, that replaces Cython as the tool to wrap C++ Minuit2 with pybind11, which is going to solve several issues that the legacy code had. All the good things that this will bring are listed on top of this PR: scikit-hep/iminuit#502 Switching to the new version of iminuit should be completely transparent to you, since the new version passes the comprehensive suite of unit tests of iminuit-v1.x. However, I would like to use this opportunity to finally remove interface that has been successively marked as deprecated in versions 1.3 to 1.5. Therefore my two question to you: * Did you take note of the deprecation warnings in iminuit and did you keep up with the interface changes so far? * Are you interested in trying out a Beta release of v2.0 to work out any possible bugs in the new version before the release? Best regards, Hans, iminuit maintainer --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/pyhf/optimize/opt_minuit.py` Content: ``` 1 """Minuit Optimizer Class.""" 2 from .. import default_backend, exceptions 3 from .mixins import OptimizerMixin 4 import scipy 5 import iminuit 6 7 8 class minuit_optimizer(OptimizerMixin): 9 """ 10 Optimizer that uses iminuit.Minuit.migrad. 11 """ 12 13 __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance'] 14 15 def __init__(self, *args, **kwargs): 16 """ 17 Create MINUIT Optimizer. 18 19 .. note:: 20 21 ``errordef`` should be 1.0 for a least-squares cost function and 0.5 22 for negative log-likelihood function. See page 37 of 23 http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes 24 called ``UP`` in the ``MINUIT`` docs. 25 26 27 Args: 28 errordef (:obj:`float`): See minuit docs. Default is 1.0. 29 steps (:obj:`int`): Number of steps for the bounds. Default is 1000. 30 strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None. 31 tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1. 
32 """ 33 self.name = 'minuit' 34 self.errordef = kwargs.pop('errordef', 1) 35 self.steps = kwargs.pop('steps', 1000) 36 self.strategy = kwargs.pop('strategy', None) 37 self.tolerance = kwargs.pop('tolerance', 0.1) 38 super().__init__(*args, **kwargs) 39 40 def _get_minimizer( 41 self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False 42 ): 43 44 step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds] 45 fixed_vals = fixed_vals or [] 46 # Minuit wants True/False for each parameter 47 fixed_bools = [False] * len(init_pars) 48 for index, val in fixed_vals: 49 fixed_bools[index] = True 50 init_pars[index] = val 51 step_sizes[index] = 0.0 52 53 # Minuit requires jac=callable 54 if do_grad: 55 wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731 56 jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731 57 else: 58 wrapped_objective = objective_and_grad 59 jac = None 60 61 kwargs = dict( 62 fcn=wrapped_objective, 63 grad=jac, 64 start=init_pars, 65 error=step_sizes, 66 limit=init_bounds, 67 fix=fixed_bools, 68 print_level=self.verbose, 69 errordef=self.errordef, 70 ) 71 return iminuit.Minuit.from_array_func(**kwargs) 72 73 def _minimize( 74 self, 75 minimizer, 76 func, 77 x0, 78 do_grad=False, 79 bounds=None, 80 fixed_vals=None, 81 return_uncertainties=False, 82 options={}, 83 ): 84 85 """ 86 Same signature as :func:`scipy.optimize.minimize`. 87 88 Note: an additional `minuit` is injected into the fitresult to get the 89 underlying minimizer. 90 91 Minimizer Options: 92 maxiter (:obj:`int`): maximum number of iterations. Default is 100000. 93 return_uncertainties (:obj:`bool`): Return uncertainties on the fitted parameters. Default is off. 94 strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`. 95 96 Returns: 97 fitresult (scipy.optimize.OptimizeResult): the fit result 98 """ 99 maxiter = options.pop('maxiter', self.maxiter) 100 return_uncertainties = options.pop('return_uncertainties', False) 101 # 0: Fast, user-provided gradient 102 # 1: Default, no user-provided gradient 103 strategy = options.pop( 104 'strategy', self.strategy if self.strategy else not do_grad 105 ) 106 tolerance = options.pop('tolerance', self.tolerance) 107 if options: 108 raise exceptions.Unsupported( 109 f"Unsupported options were passed in: {list(options.keys())}." 110 ) 111 112 minimizer.strategy = strategy 113 minimizer.tol = tolerance 114 minimizer.migrad(ncall=maxiter) 115 # Following lines below come from: 116 # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121 117 message = "Optimization terminated successfully." 118 if not minimizer.valid: 119 message = "Optimization failed." 120 fmin = minimizer.fmin 121 if fmin.has_reached_call_limit: 122 message += " Call limit was reached." 123 if fmin.is_above_max_edm: 124 message += " Estimated distance to minimum too large." 125 126 n = len(x0) 127 hess_inv = default_backend.ones((n, n)) 128 if minimizer.valid: 129 # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster. 
130 minimizer.hesse() 131 hess_inv = minimizer.np_covariance() 132 133 unc = None 134 if return_uncertainties: 135 unc = minimizer.np_errors() 136 137 return scipy.optimize.OptimizeResult( 138 x=minimizer.np_values(), 139 unc=unc, 140 success=minimizer.valid, 141 fun=minimizer.fval, 142 hess_inv=hess_inv, 143 message=message, 144 nfev=minimizer.ncalls_total, 145 njev=minimizer.ngrads_total, 146 minuit=minimizer, 147 ) 148 ``` Path: `setup.py` Content: ``` 1 from setuptools import setup 2 3 extras_require = { 4 'shellcomplete': ['click_completion'], 5 'tensorflow': [ 6 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major 7 'tensorflow-probability~=0.10.0', 8 ], 9 'torch': ['torch~=1.2'], 10 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'], 11 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes 12 'minuit': ['iminuit~=1.5.3'], 13 } 14 extras_require['backends'] = sorted( 15 set( 16 extras_require['tensorflow'] 17 + extras_require['torch'] 18 + extras_require['jax'] 19 + extras_require['minuit'] 20 ) 21 ) 22 extras_require['contrib'] = sorted({'matplotlib', 'requests'}) 23 extras_require['lint'] = sorted({'flake8', 'black'}) 24 25 extras_require['test'] = sorted( 26 set( 27 extras_require['backends'] 28 + extras_require['xmlio'] 29 + extras_require['contrib'] 30 + extras_require['shellcomplete'] 31 + [ 32 'pytest~=6.0', 33 'pytest-cov>=2.5.1', 34 'pytest-mock', 35 'pytest-benchmark[histogram]', 36 'pytest-console-scripts', 37 'pytest-mpl', 38 'pydocstyle', 39 'coverage>=4.0', # coveralls 40 'papermill~=2.0', 41 'nteract-scrapbook~=0.2', 42 'jupyter', 43 'graphviz', 44 'jsonpatch', 45 ] 46 ) 47 ) 48 extras_require['docs'] = sorted( 49 { 50 'sphinx>=3.1.2', 51 'sphinxcontrib-bibtex~=1.0', 52 'sphinx-click', 53 'sphinx_rtd_theme', 54 'nbsphinx', 55 'ipywidgets', 56 'sphinx-issues', 57 'sphinx-copybutton>0.2.9', 58 } 59 ) 60 extras_require['develop'] = sorted( 61 set( 62 extras_require['docs'] 63 + extras_require['lint'] 64 + extras_require['test'] 65 + [ 66 'nbdime', 67 'bump2version', 68 'ipython', 69 'pre-commit', 70 'check-manifest', 71 'codemetapy>=0.3.4', 72 'twine', 73 ] 74 ) 75 ) 76 extras_require['complete'] = sorted(set(sum(extras_require.values(), []))) 77 78 79 setup( 80 extras_require=extras_require, 81 use_scm_version=lambda: {'local_scheme': lambda version: ''}, 82 ) 83 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -9,7 +9,7 @@ 'torch': ['torch~=1.2'], 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'], 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes - 'minuit': ['iminuit~=1.5.3'], + 'minuit': ['iminuit~=2.1'], } extras_require['backends'] = sorted( set( diff --git a/src/pyhf/optimize/opt_minuit.py b/src/pyhf/optimize/opt_minuit.py --- a/src/pyhf/optimize/opt_minuit.py +++ b/src/pyhf/optimize/opt_minuit.py @@ -58,17 +58,13 @@ wrapped_objective = objective_and_grad jac = None - kwargs = dict( - fcn=wrapped_objective, - grad=jac, - start=init_pars, - error=step_sizes, - limit=init_bounds, - fix=fixed_bools, - print_level=self.verbose, - errordef=self.errordef, - ) - return iminuit.Minuit.from_array_func(**kwargs) + minuit = iminuit.Minuit(wrapped_objective, init_pars, grad=jac) + minuit.errors = step_sizes + minuit.limits = init_bounds + minuit.fixed = fixed_bools + minuit.print_level = self.verbose + minuit.errordef = self.errordef + return minuit def _minimize( self, @@ -113,7 +109,7 @@ minimizer.tol = tolerance minimizer.migrad(ncall=maxiter) # Following lines below come from: - # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121 + # https://github.com/scikit-hep/iminuit/blob/23bad7697e39d363f259ca8349684df939b1b2e6/src/iminuit/_minimize.py#L111-L130 message = "Optimization terminated successfully." if not minimizer.valid: message = "Optimization failed." @@ -128,20 +124,20 @@ if minimizer.valid: # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster. minimizer.hesse() - hess_inv = minimizer.np_covariance() + hess_inv = minimizer.covariance unc = None if return_uncertainties: - unc = minimizer.np_errors() + unc = minimizer.errors return scipy.optimize.OptimizeResult( - x=minimizer.np_values(), + x=minimizer.values, unc=unc, success=minimizer.valid, fun=minimizer.fval, hess_inv=hess_inv, message=message, - nfev=minimizer.ncalls_total, - njev=minimizer.ngrads_total, + nfev=minimizer.nfcn, + njev=minimizer.ngrad, minuit=minimizer, )
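The diff above moves pyhf off the removed `Minuit.from_array_func` constructor and onto the iminuit 2.x attribute-based configuration and accessor names (`values`, `errors`, `covariance`, `nfcn`, `ngrad`). A self-contained toy exercising that 2.x surface; the quadratic objective and the concrete numbers are illustrative, not from pyhf:

```python
import iminuit


def cost(pars):
    # toy least-squares objective over a parameter vector
    return (pars[0] - 1.0) ** 2 + (pars[1] + 2.0) ** 2


m = iminuit.Minuit(cost, [0.0, 0.0])        # 2.x: pass the start vector positionally
m.errordef = iminuit.Minuit.LEAST_SQUARES   # replaces the errordef= constructor keyword
m.errors = [0.1, 0.1]                       # replaces error=
m.limits = [(-5.0, 5.0), (-5.0, 5.0)]       # replaces limit=
m.fixed = [False, False]                    # replaces fix=
m.migrad()
m.hesse()
print(m.values, m.errors, m.covariance)     # replaces np_values()/np_errors()/np_covariance()
```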
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n- 'minuit': ['iminuit~=1.5.3'],\n+ 'minuit': ['iminuit~=2.1'],\n }\n extras_require['backends'] = sorted(\n set(\ndiff --git a/src/pyhf/optimize/opt_minuit.py b/src/pyhf/optimize/opt_minuit.py\n--- a/src/pyhf/optimize/opt_minuit.py\n+++ b/src/pyhf/optimize/opt_minuit.py\n@@ -58,17 +58,13 @@\n wrapped_objective = objective_and_grad\n jac = None\n \n- kwargs = dict(\n- fcn=wrapped_objective,\n- grad=jac,\n- start=init_pars,\n- error=step_sizes,\n- limit=init_bounds,\n- fix=fixed_bools,\n- print_level=self.verbose,\n- errordef=self.errordef,\n- )\n- return iminuit.Minuit.from_array_func(**kwargs)\n+ minuit = iminuit.Minuit(wrapped_objective, init_pars, grad=jac)\n+ minuit.errors = step_sizes\n+ minuit.limits = init_bounds\n+ minuit.fixed = fixed_bools\n+ minuit.print_level = self.verbose\n+ minuit.errordef = self.errordef\n+ return minuit\n \n def _minimize(\n self,\n@@ -113,7 +109,7 @@\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n- # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121\n+ # https://github.com/scikit-hep/iminuit/blob/23bad7697e39d363f259ca8349684df939b1b2e6/src/iminuit/_minimize.py#L111-L130\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n@@ -128,20 +124,20 @@\n if minimizer.valid:\n # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.\n minimizer.hesse()\n- hess_inv = minimizer.np_covariance()\n+ hess_inv = minimizer.covariance\n \n unc = None\n if return_uncertainties:\n- unc = minimizer.np_errors()\n+ unc = minimizer.errors\n \n return scipy.optimize.OptimizeResult(\n- x=minimizer.np_values(),\n+ x=minimizer.values,\n unc=unc,\n success=minimizer.valid,\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n- nfev=minimizer.ncalls_total,\n- njev=minimizer.ngrads_total,\n+ nfev=minimizer.nfcn,\n+ njev=minimizer.ngrad,\n minuit=minimizer,\n )\n", "issue": " Participate in iminuit v2.0 beta?\nDear pyhf team,\r\n\r\nI am about to finish a major rewrite of iminuit, version 2.0, that replaces Cython as the tool to wrap C++ Minuit2 with pybind11, which is going to solve several issues that the legacy code had. All the good things that this will bring are listed on top of this PR:\r\nscikit-hep/iminuit#502\r\n\r\nSwitching to the new version of iminuit should be completely transparent to you, since the new version passes the comprehensive suite of unit tests of iminuit-v1.x. However, I would like to use this opportunity to finally remove interface that has been successively marked as deprecated in versions 1.3 to 1.5.\r\n\r\nTherefore my two question to you:\r\n\r\n * Did you take note of the deprecation warnings in iminuit and did you keep up with the interface changes so far?\r\n * Are you interested in trying out a Beta release of v2.0 to work out any possible bugs in the new version before the release?\r\n\r\nBest regards,\r\nHans, iminuit maintainer\n", "before_files": [{"content": "\"\"\"Minuit Optimizer Class.\"\"\"\nfrom .. 
import default_backend, exceptions\nfrom .mixins import OptimizerMixin\nimport scipy\nimport iminuit\n\n\nclass minuit_optimizer(OptimizerMixin):\n \"\"\"\n Optimizer that uses iminuit.Minuit.migrad.\n \"\"\"\n\n __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Create MINUIT Optimizer.\n\n .. note::\n\n ``errordef`` should be 1.0 for a least-squares cost function and 0.5\n for negative log-likelihood function. See page 37 of\n http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes\n called ``UP`` in the ``MINUIT`` docs.\n\n\n Args:\n errordef (:obj:`float`): See minuit docs. Default is 1.0.\n steps (:obj:`int`): Number of steps for the bounds. Default is 1000.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.\n \"\"\"\n self.name = 'minuit'\n self.errordef = kwargs.pop('errordef', 1)\n self.steps = kwargs.pop('steps', 1000)\n self.strategy = kwargs.pop('strategy', None)\n self.tolerance = kwargs.pop('tolerance', 0.1)\n super().__init__(*args, **kwargs)\n\n def _get_minimizer(\n self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False\n ):\n\n step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]\n fixed_vals = fixed_vals or []\n # Minuit wants True/False for each parameter\n fixed_bools = [False] * len(init_pars)\n for index, val in fixed_vals:\n fixed_bools[index] = True\n init_pars[index] = val\n step_sizes[index] = 0.0\n\n # Minuit requires jac=callable\n if do_grad:\n wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731\n jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731\n else:\n wrapped_objective = objective_and_grad\n jac = None\n\n kwargs = dict(\n fcn=wrapped_objective,\n grad=jac,\n start=init_pars,\n error=step_sizes,\n limit=init_bounds,\n fix=fixed_bools,\n print_level=self.verbose,\n errordef=self.errordef,\n )\n return iminuit.Minuit.from_array_func(**kwargs)\n\n def _minimize(\n self,\n minimizer,\n func,\n x0,\n do_grad=False,\n bounds=None,\n fixed_vals=None,\n return_uncertainties=False,\n options={},\n ):\n\n \"\"\"\n Same signature as :func:`scipy.optimize.minimize`.\n\n Note: an additional `minuit` is injected into the fitresult to get the\n underlying minimizer.\n\n Minimizer Options:\n maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n return_uncertainties (:obj:`bool`): Return uncertainties on the fitted parameters. Default is off.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. 
Default is to configure in response to `do_grad`.\n\n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\n \"\"\"\n maxiter = options.pop('maxiter', self.maxiter)\n return_uncertainties = options.pop('return_uncertainties', False)\n # 0: Fast, user-provided gradient\n # 1: Default, no user-provided gradient\n strategy = options.pop(\n 'strategy', self.strategy if self.strategy else not do_grad\n )\n tolerance = options.pop('tolerance', self.tolerance)\n if options:\n raise exceptions.Unsupported(\n f\"Unsupported options were passed in: {list(options.keys())}.\"\n )\n\n minimizer.strategy = strategy\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n fmin = minimizer.fmin\n if fmin.has_reached_call_limit:\n message += \" Call limit was reached.\"\n if fmin.is_above_max_edm:\n message += \" Estimated distance to minimum too large.\"\n\n n = len(x0)\n hess_inv = default_backend.ones((n, n))\n if minimizer.valid:\n # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.\n minimizer.hesse()\n hess_inv = minimizer.np_covariance()\n\n unc = None\n if return_uncertainties:\n unc = minimizer.np_errors()\n\n return scipy.optimize.OptimizeResult(\n x=minimizer.np_values(),\n unc=unc,\n success=minimizer.valid,\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n nfev=minimizer.ncalls_total,\n njev=minimizer.ngrads_total,\n minuit=minimizer,\n )\n", "path": "src/pyhf/optimize/opt_minuit.py"}, {"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.5.3'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex~=1.0',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n }\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n 
extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"Minuit Optimizer Class.\"\"\"\nfrom .. import default_backend, exceptions\nfrom .mixins import OptimizerMixin\nimport scipy\nimport iminuit\n\n\nclass minuit_optimizer(OptimizerMixin):\n \"\"\"\n Optimizer that uses iminuit.Minuit.migrad.\n \"\"\"\n\n __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Create MINUIT Optimizer.\n\n .. note::\n\n ``errordef`` should be 1.0 for a least-squares cost function and 0.5\n for negative log-likelihood function. See page 37 of\n http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes\n called ``UP`` in the ``MINUIT`` docs.\n\n\n Args:\n errordef (:obj:`float`): See minuit docs. Default is 1.0.\n steps (:obj:`int`): Number of steps for the bounds. Default is 1000.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.\n \"\"\"\n self.name = 'minuit'\n self.errordef = kwargs.pop('errordef', 1)\n self.steps = kwargs.pop('steps', 1000)\n self.strategy = kwargs.pop('strategy', None)\n self.tolerance = kwargs.pop('tolerance', 0.1)\n super().__init__(*args, **kwargs)\n\n def _get_minimizer(\n self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False\n ):\n\n step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]\n fixed_vals = fixed_vals or []\n # Minuit wants True/False for each parameter\n fixed_bools = [False] * len(init_pars)\n for index, val in fixed_vals:\n fixed_bools[index] = True\n init_pars[index] = val\n step_sizes[index] = 0.0\n\n # Minuit requires jac=callable\n if do_grad:\n wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731\n jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731\n else:\n wrapped_objective = objective_and_grad\n jac = None\n\n minuit = iminuit.Minuit(wrapped_objective, init_pars, grad=jac)\n minuit.errors = step_sizes\n minuit.limits = init_bounds\n minuit.fixed = fixed_bools\n minuit.print_level = self.verbose\n minuit.errordef = self.errordef\n return minuit\n\n def _minimize(\n self,\n minimizer,\n func,\n x0,\n do_grad=False,\n bounds=None,\n fixed_vals=None,\n return_uncertainties=False,\n options={},\n ):\n\n \"\"\"\n Same signature as :func:`scipy.optimize.minimize`.\n\n Note: an additional `minuit` is injected into the fitresult to get the\n underlying minimizer.\n\n Minimizer Options:\n maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n return_uncertainties (:obj:`bool`): Return uncertainties on the fitted parameters. Default is off.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. 
Default is to configure in response to `do_grad`.\n\n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\n \"\"\"\n maxiter = options.pop('maxiter', self.maxiter)\n return_uncertainties = options.pop('return_uncertainties', False)\n # 0: Fast, user-provided gradient\n # 1: Default, no user-provided gradient\n strategy = options.pop(\n 'strategy', self.strategy if self.strategy else not do_grad\n )\n tolerance = options.pop('tolerance', self.tolerance)\n if options:\n raise exceptions.Unsupported(\n f\"Unsupported options were passed in: {list(options.keys())}.\"\n )\n\n minimizer.strategy = strategy\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n # https://github.com/scikit-hep/iminuit/blob/23bad7697e39d363f259ca8349684df939b1b2e6/src/iminuit/_minimize.py#L111-L130\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n fmin = minimizer.fmin\n if fmin.has_reached_call_limit:\n message += \" Call limit was reached.\"\n if fmin.is_above_max_edm:\n message += \" Estimated distance to minimum too large.\"\n\n n = len(x0)\n hess_inv = default_backend.ones((n, n))\n if minimizer.valid:\n # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.\n minimizer.hesse()\n hess_inv = minimizer.covariance\n\n unc = None\n if return_uncertainties:\n unc = minimizer.errors\n\n return scipy.optimize.OptimizeResult(\n x=minimizer.values,\n unc=unc,\n success=minimizer.valid,\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n nfev=minimizer.nfcn,\n njev=minimizer.ngrad,\n minuit=minimizer,\n )\n", "path": "src/pyhf/optimize/opt_minuit.py"}, {"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=2.1'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex~=1.0',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n }\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n 
use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
2883
806
gh_patches_debug_9121
rasdani/github-patches
git_diff
OCHA-DAP__hdx-ckan-1053
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Org Admin: Dataset management page is broken Log in as a sysadmin user. Go to: http://data.hdx.rwlabs.org/organization/bulk_process/ocha-fiss-geneva --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py` Content: ``` 1 import logging 2 import ckan.plugins as plugins 3 import ckan.plugins.toolkit as tk 4 import ckan.lib.plugins as lib_plugins 5 6 class HDXOrgFormPlugin(plugins.SingletonPlugin, lib_plugins.DefaultOrganizationForm): 7 plugins.implements(plugins.IConfigurer, inherit=False) 8 plugins.implements(plugins.IRoutes, inherit=True) 9 plugins.implements(plugins.IGroupForm, inherit=False) 10 plugins.implements(plugins.ITemplateHelpers, inherit=False) 11 12 num_times_new_template_called = 0 13 num_times_read_template_called = 0 14 num_times_edit_template_called = 0 15 num_times_search_template_called = 0 16 num_times_history_template_called = 0 17 num_times_package_form_called = 0 18 num_times_check_data_dict_called = 0 19 num_times_setup_template_variables_called = 0 20 21 def update_config(self, config): 22 tk.add_template_directory(config, 'templates') 23 24 def get_helpers(self): 25 return {} 26 27 def is_fallback(self): 28 return False 29 30 def group_types(self): 31 return ['organization'] 32 33 def _modify_group_schema(self, schema): 34 schema.update({ 35 'description':[tk.get_validator('not_empty')], 36 'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')], 37 }) 38 return schema 39 40 def form_to_db_schema(self): 41 schema = super(HDXOrgFormPlugin, self).form_to_db_schema() 42 schema = self._modify_group_schema(schema) 43 return schema 44 45 # def check_data_dict(self, data_dict): 46 # return super(HDXOrgFormPlugin, self).check_data_dict(self, data_dict) 47 48 def db_to_form_schema(self): 49 # There's a bug in dictionary validation when form isn't present 50 if tk.request.urlvars['action'] == 'index' or tk.request.urlvars['action'] == 'edit' or tk.request.urlvars['action'] == 'new': 51 schema = super(HDXOrgFormPlugin, self).form_to_db_schema() 52 schema.update({'description':[tk.get_validator('not_empty')] }) 53 schema.update({'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')]}) 54 return schema 55 else: 56 return None 57 58 def before_map(self, map): 59 map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard', 60 ckan_icon='list') 61 return map 62 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py b/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py --- a/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py +++ b/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py @@ -58,4 +58,6 @@ def before_map(self, map): map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard', ckan_icon='list') + map.connect('organization_bulk_process', '/organization/bulk_process/{org_id}', controller='organization', action='index') + map.connect('organization_bulk_process_no_id', '/organization/bulk_process', controller='organization', action='index') return map
{"golden_diff": "diff --git a/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py b/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py\n--- a/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py\n+++ b/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py\n@@ -58,4 +58,6 @@\n def before_map(self, map):\n map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard',\n ckan_icon='list')\n+ map.connect('organization_bulk_process', '/organization/bulk_process/{org_id}', controller='organization', action='index')\n+ map.connect('organization_bulk_process_no_id', '/organization/bulk_process', controller='organization', action='index')\n return map\n", "issue": "Org Admin: Dataset management page is broken\nLog in as a sysadmin user.\nGo to:\nhttp://data.hdx.rwlabs.org/organization/bulk_process/ocha-fiss-geneva\n\n", "before_files": [{"content": "import logging\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\nclass HDXOrgFormPlugin(plugins.SingletonPlugin, lib_plugins.DefaultOrganizationForm):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.IGroupForm, inherit=False)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n\n num_times_new_template_called = 0\n num_times_read_template_called = 0\n num_times_edit_template_called = 0\n num_times_search_template_called = 0\n num_times_history_template_called = 0\n num_times_package_form_called = 0\n num_times_check_data_dict_called = 0\n num_times_setup_template_variables_called = 0\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def is_fallback(self):\n return False\n\n def group_types(self):\n return ['organization']\n\n def _modify_group_schema(self, schema):\n schema.update({\n 'description':[tk.get_validator('not_empty')],\n 'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')],\n })\n return schema\n\n def form_to_db_schema(self):\n schema = super(HDXOrgFormPlugin, self).form_to_db_schema()\n schema = self._modify_group_schema(schema)\n return schema\n \n# def check_data_dict(self, data_dict):\n# return super(HDXOrgFormPlugin, self).check_data_dict(self, data_dict)\n \n def db_to_form_schema(self):\n # There's a bug in dictionary validation when form isn't present\n if tk.request.urlvars['action'] == 'index' or tk.request.urlvars['action'] == 'edit' or tk.request.urlvars['action'] == 'new':\n schema = super(HDXOrgFormPlugin, self).form_to_db_schema()\n schema.update({'description':[tk.get_validator('not_empty')] })\n schema.update({'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')]})\n return schema\n else:\n return None\n\n def before_map(self, map):\n map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard',\n ckan_icon='list')\n return map\n", "path": "ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py"}], "after_files": [{"content": "import logging\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\nclass HDXOrgFormPlugin(plugins.SingletonPlugin, lib_plugins.DefaultOrganizationForm):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.IGroupForm, inherit=False)\n plugins.implements(plugins.ITemplateHelpers, 
inherit=False)\n\n num_times_new_template_called = 0\n num_times_read_template_called = 0\n num_times_edit_template_called = 0\n num_times_search_template_called = 0\n num_times_history_template_called = 0\n num_times_package_form_called = 0\n num_times_check_data_dict_called = 0\n num_times_setup_template_variables_called = 0\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def is_fallback(self):\n return False\n\n def group_types(self):\n return ['organization']\n\n def _modify_group_schema(self, schema):\n schema.update({\n 'description':[tk.get_validator('not_empty')],\n 'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')],\n })\n return schema\n\n def form_to_db_schema(self):\n schema = super(HDXOrgFormPlugin, self).form_to_db_schema()\n schema = self._modify_group_schema(schema)\n return schema\n \n# def check_data_dict(self, data_dict):\n# return super(HDXOrgFormPlugin, self).check_data_dict(self, data_dict)\n \n def db_to_form_schema(self):\n # There's a bug in dictionary validation when form isn't present\n if tk.request.urlvars['action'] == 'index' or tk.request.urlvars['action'] == 'edit' or tk.request.urlvars['action'] == 'new':\n schema = super(HDXOrgFormPlugin, self).form_to_db_schema()\n schema.update({'description':[tk.get_validator('not_empty')] })\n schema.update({'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')]})\n return schema\n else:\n return None\n\n def before_map(self, map):\n map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard',\n ckan_icon='list')\n map.connect('organization_bulk_process', '/organization/bulk_process/{org_id}', controller='organization', action='index')\n map.connect('organization_bulk_process_no_id', '/organization/bulk_process', controller='organization', action='index')\n return map\n", "path": "ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py"}]}
962
194
gh_patches_debug_5383
rasdani/github-patches
git_diff
quantumlib__Cirq-606
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Engine measurements are bytes but simulator measurements are bools This causes code that works with the simulator to fail when given engine results. We should make these consistent. Example code that works with simulator results but not engine results: ```python a = np.zeros([repetition_count], dtype=np.bool) a ^= results.measurements['x'][:, 0] a ^= results.measurements['y'][:, 0] ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `cirq/google/programs.py` Content: ``` 1 # Copyright 2018 The Cirq Developers 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # https://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 from typing import Dict, Iterable, Sequence, Tuple, TYPE_CHECKING 15 16 import numpy as np 17 18 from cirq.api.google.v1 import operations_pb2 19 from cirq.google import xmon_gates, xmon_gate_ext 20 from cirq.google.xmon_device import XmonDevice 21 from cirq.schedules import Schedule, ScheduledOperation 22 from cirq.value import Timestamp 23 24 if TYPE_CHECKING: 25 from typing import Optional # pylint: disable=unused-import 26 27 28 def schedule_to_proto(schedule: Schedule) -> Iterable[operations_pb2.Operation]: 29 """Convert a schedule into protobufs. 30 31 Args: 32 schedule: The schedule to convert to protobufs. Must contain only gates 33 that can be cast to xmon gates. 34 35 Yields: 36 operations_pb2.Operation 37 """ 38 last_time_picos = None # type: Optional[int] 39 for so in schedule.scheduled_operations: 40 gate = xmon_gate_ext.cast(xmon_gates.XmonGate, so.operation.gate) 41 op = gate.to_proto(*so.operation.qubits) 42 time_picos = so.time.raw_picos() 43 if last_time_picos is None: 44 op.incremental_delay_picoseconds = time_picos 45 else: 46 op.incremental_delay_picoseconds = time_picos - last_time_picos 47 last_time_picos = time_picos 48 yield op 49 50 51 def schedule_from_proto( 52 device: XmonDevice, 53 ops: Iterable[operations_pb2.Operation], 54 ) -> Schedule: 55 """Convert protobufs into a Schedule for the given device.""" 56 scheduled_ops = [] 57 last_time_picos = 0 58 for op in ops: 59 time_picos = last_time_picos + op.incremental_delay_picoseconds 60 last_time_picos = time_picos 61 xmon_op = xmon_gates.XmonGate.from_proto(op) 62 scheduled_ops.append(ScheduledOperation.op_at_on( 63 operation=xmon_op, 64 time=Timestamp(picos=time_picos), 65 device=device, 66 )) 67 return Schedule(device, scheduled_ops) 68 69 70 def pack_results(measurements: Sequence[Tuple[str, np.ndarray]]) -> bytes: 71 """Pack measurement results into a byte string. 72 73 Args: 74 measurements: A sequence of tuples, one for each measurement, consisting 75 of a string key and an array of boolean data. The data should be 76 a 2-D array indexed by (repetition, qubit_index). All data for all 77 measurements must have the same number of repetitions. 78 79 Returns: 80 Packed bytes, as described in the unpack_results docstring below. 
81 82 Raises: 83 ValueError if the measurement data do not have the compatible shapes. 84 """ 85 if not measurements: 86 return b'' 87 88 shapes = [(key, np.shape(data)) for key, data in measurements] 89 if not all(len(shape) == 2 for _, shape in shapes): 90 raise ValueError("Expected 2-D data: shapes={}".format(shapes)) 91 92 reps = shapes[0][1][0] 93 if not all(shape[0] == reps for _, shape in shapes): 94 raise ValueError( 95 "Expected same reps for all keys: shapes={}".format(shapes)) 96 97 bits = np.hstack(np.asarray(data, dtype=bool) for _, data in measurements) 98 bits = bits.reshape(-1) 99 100 # Pad length to multiple of 8 if needed. 101 remainder = len(bits) % 8 102 if remainder: 103 bits = np.pad(bits, (0, 8 - remainder), 'constant') 104 105 # Pack in little-endian bit order. 106 bits = bits.reshape((-1, 8))[:, ::-1] 107 byte_arr = np.packbits(bits, axis=1).reshape(-1) 108 109 return byte_arr.tobytes() 110 111 112 def unpack_results( 113 data: bytes, 114 repetitions: int, 115 key_sizes: Sequence[Tuple[str, int]] 116 ) -> Dict[str, np.ndarray]: 117 """Unpack data from a bitstring into individual measurement results. 118 119 Args: 120 data: Packed measurement results, in the form <rep0><rep1>... 121 where each repetition is <key0_0>..<key0_{size0-1}><key1_0>... 122 with bits packed in little-endian order in each byte. 123 repetitions: number of repetitions. 124 key_sizes: Keys and sizes of the measurements in the data. 125 126 Returns: 127 Dict mapping measurement key to a 2D array of boolean results. Each 128 array has shape (repetitions, size) with size for that measurement. 129 """ 130 bits_per_rep = sum(size for _, size in key_sizes) 131 total_bits = repetitions * bits_per_rep 132 133 byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1)) 134 bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1) 135 bits = bits[:total_bits].reshape((repetitions, bits_per_rep)) 136 137 results = {} 138 ofs = 0 139 for key, size in key_sizes: 140 results[key] = bits[:, ofs:ofs + size] 141 ofs += size 142 143 return results 144 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/cirq/google/programs.py b/cirq/google/programs.py --- a/cirq/google/programs.py +++ b/cirq/google/programs.py @@ -131,7 +131,7 @@ total_bits = repetitions * bits_per_rep byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1)) - bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1) + bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1).astype(bool) bits = bits[:total_bits].reshape((repetitions, bits_per_rep)) results = {}
{"golden_diff": "diff --git a/cirq/google/programs.py b/cirq/google/programs.py\n--- a/cirq/google/programs.py\n+++ b/cirq/google/programs.py\n@@ -131,7 +131,7 @@\n total_bits = repetitions * bits_per_rep\n \n byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1))\n- bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1)\n+ bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1).astype(bool)\n bits = bits[:total_bits].reshape((repetitions, bits_per_rep))\n \n results = {}\n", "issue": "Engine measurements are bytes but simulator measurements are bools\nThis causes code that works with the simulator to fail when given engine results. We should make these consistent.\r\n\r\nExample code that works with simulator results but not engine results:\r\n\r\n```python\r\na = np.zeros([repetition_count], dtype=np.bool)\r\na ^= results.measurements['x'][:, 0]\r\na ^= results.measurements['y'][:, 0]\r\n```\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Dict, Iterable, Sequence, Tuple, TYPE_CHECKING\n\nimport numpy as np\n\nfrom cirq.api.google.v1 import operations_pb2\nfrom cirq.google import xmon_gates, xmon_gate_ext\nfrom cirq.google.xmon_device import XmonDevice\nfrom cirq.schedules import Schedule, ScheduledOperation\nfrom cirq.value import Timestamp\n\nif TYPE_CHECKING:\n from typing import Optional # pylint: disable=unused-import\n\n\ndef schedule_to_proto(schedule: Schedule) -> Iterable[operations_pb2.Operation]:\n \"\"\"Convert a schedule into protobufs.\n\n Args:\n schedule: The schedule to convert to protobufs. Must contain only gates\n that can be cast to xmon gates.\n\n Yields:\n operations_pb2.Operation\n \"\"\"\n last_time_picos = None # type: Optional[int]\n for so in schedule.scheduled_operations:\n gate = xmon_gate_ext.cast(xmon_gates.XmonGate, so.operation.gate)\n op = gate.to_proto(*so.operation.qubits)\n time_picos = so.time.raw_picos()\n if last_time_picos is None:\n op.incremental_delay_picoseconds = time_picos\n else:\n op.incremental_delay_picoseconds = time_picos - last_time_picos\n last_time_picos = time_picos\n yield op\n\n\ndef schedule_from_proto(\n device: XmonDevice,\n ops: Iterable[operations_pb2.Operation],\n) -> Schedule:\n \"\"\"Convert protobufs into a Schedule for the given device.\"\"\"\n scheduled_ops = []\n last_time_picos = 0\n for op in ops:\n time_picos = last_time_picos + op.incremental_delay_picoseconds\n last_time_picos = time_picos\n xmon_op = xmon_gates.XmonGate.from_proto(op)\n scheduled_ops.append(ScheduledOperation.op_at_on(\n operation=xmon_op,\n time=Timestamp(picos=time_picos),\n device=device,\n ))\n return Schedule(device, scheduled_ops)\n\n\ndef pack_results(measurements: Sequence[Tuple[str, np.ndarray]]) -> bytes:\n \"\"\"Pack measurement results into a byte string.\n\n Args:\n measurements: A sequence of tuples, one for each measurement, consisting\n of a string key and an array of boolean data. 
The data should be\n a 2-D array indexed by (repetition, qubit_index). All data for all\n measurements must have the same number of repetitions.\n\n Returns:\n Packed bytes, as described in the unpack_results docstring below.\n\n Raises:\n ValueError if the measurement data do not have the compatible shapes.\n \"\"\"\n if not measurements:\n return b''\n\n shapes = [(key, np.shape(data)) for key, data in measurements]\n if not all(len(shape) == 2 for _, shape in shapes):\n raise ValueError(\"Expected 2-D data: shapes={}\".format(shapes))\n\n reps = shapes[0][1][0]\n if not all(shape[0] == reps for _, shape in shapes):\n raise ValueError(\n \"Expected same reps for all keys: shapes={}\".format(shapes))\n\n bits = np.hstack(np.asarray(data, dtype=bool) for _, data in measurements)\n bits = bits.reshape(-1)\n\n # Pad length to multiple of 8 if needed.\n remainder = len(bits) % 8\n if remainder:\n bits = np.pad(bits, (0, 8 - remainder), 'constant')\n\n # Pack in little-endian bit order.\n bits = bits.reshape((-1, 8))[:, ::-1]\n byte_arr = np.packbits(bits, axis=1).reshape(-1)\n\n return byte_arr.tobytes()\n\n\ndef unpack_results(\n data: bytes,\n repetitions: int,\n key_sizes: Sequence[Tuple[str, int]]\n) -> Dict[str, np.ndarray]:\n \"\"\"Unpack data from a bitstring into individual measurement results.\n\n Args:\n data: Packed measurement results, in the form <rep0><rep1>...\n where each repetition is <key0_0>..<key0_{size0-1}><key1_0>...\n with bits packed in little-endian order in each byte.\n repetitions: number of repetitions.\n key_sizes: Keys and sizes of the measurements in the data.\n\n Returns:\n Dict mapping measurement key to a 2D array of boolean results. Each\n array has shape (repetitions, size) with size for that measurement.\n \"\"\"\n bits_per_rep = sum(size for _, size in key_sizes)\n total_bits = repetitions * bits_per_rep\n\n byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1))\n bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1)\n bits = bits[:total_bits].reshape((repetitions, bits_per_rep))\n\n results = {}\n ofs = 0\n for key, size in key_sizes:\n results[key] = bits[:, ofs:ofs + size]\n ofs += size\n\n return results\n", "path": "cirq/google/programs.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Dict, Iterable, Sequence, Tuple, TYPE_CHECKING\n\nimport numpy as np\n\nfrom cirq.api.google.v1 import operations_pb2\nfrom cirq.google import xmon_gates, xmon_gate_ext\nfrom cirq.google.xmon_device import XmonDevice\nfrom cirq.schedules import Schedule, ScheduledOperation\nfrom cirq.value import Timestamp\n\nif TYPE_CHECKING:\n from typing import Optional # pylint: disable=unused-import\n\n\ndef schedule_to_proto(schedule: Schedule) -> Iterable[operations_pb2.Operation]:\n \"\"\"Convert a schedule into protobufs.\n\n Args:\n schedule: The schedule to convert to protobufs. 
Must contain only gates\n that can be cast to xmon gates.\n\n Yields:\n operations_pb2.Operation\n \"\"\"\n last_time_picos = None # type: Optional[int]\n for so in schedule.scheduled_operations:\n gate = xmon_gate_ext.cast(xmon_gates.XmonGate, so.operation.gate)\n op = gate.to_proto(*so.operation.qubits)\n time_picos = so.time.raw_picos()\n if last_time_picos is None:\n op.incremental_delay_picoseconds = time_picos\n else:\n op.incremental_delay_picoseconds = time_picos - last_time_picos\n last_time_picos = time_picos\n yield op\n\n\ndef schedule_from_proto(\n device: XmonDevice,\n ops: Iterable[operations_pb2.Operation],\n) -> Schedule:\n \"\"\"Convert protobufs into a Schedule for the given device.\"\"\"\n scheduled_ops = []\n last_time_picos = 0\n for op in ops:\n time_picos = last_time_picos + op.incremental_delay_picoseconds\n last_time_picos = time_picos\n xmon_op = xmon_gates.XmonGate.from_proto(op)\n scheduled_ops.append(ScheduledOperation.op_at_on(\n operation=xmon_op,\n time=Timestamp(picos=time_picos),\n device=device,\n ))\n return Schedule(device, scheduled_ops)\n\n\ndef pack_results(measurements: Sequence[Tuple[str, np.ndarray]]) -> bytes:\n \"\"\"Pack measurement results into a byte string.\n\n Args:\n measurements: A sequence of tuples, one for each measurement, consisting\n of a string key and an array of boolean data. The data should be\n a 2-D array indexed by (repetition, qubit_index). All data for all\n measurements must have the same number of repetitions.\n\n Returns:\n Packed bytes, as described in the unpack_results docstring below.\n\n Raises:\n ValueError if the measurement data do not have the compatible shapes.\n \"\"\"\n if not measurements:\n return b''\n\n shapes = [(key, np.shape(data)) for key, data in measurements]\n if not all(len(shape) == 2 for _, shape in shapes):\n raise ValueError(\"Expected 2-D data: shapes={}\".format(shapes))\n\n reps = shapes[0][1][0]\n if not all(shape[0] == reps for _, shape in shapes):\n raise ValueError(\n \"Expected same reps for all keys: shapes={}\".format(shapes))\n\n bits = np.hstack(np.asarray(data, dtype=bool) for _, data in measurements)\n bits = bits.reshape(-1)\n\n # Pad length to multiple of 8 if needed.\n remainder = len(bits) % 8\n if remainder:\n bits = np.pad(bits, (0, 8 - remainder), 'constant')\n\n # Pack in little-endian bit order.\n bits = bits.reshape((-1, 8))[:, ::-1]\n byte_arr = np.packbits(bits, axis=1).reshape(-1)\n\n return byte_arr.tobytes()\n\n\ndef unpack_results(\n data: bytes,\n repetitions: int,\n key_sizes: Sequence[Tuple[str, int]]\n) -> Dict[str, np.ndarray]:\n \"\"\"Unpack data from a bitstring into individual measurement results.\n\n Args:\n data: Packed measurement results, in the form <rep0><rep1>...\n where each repetition is <key0_0>..<key0_{size0-1}><key1_0>...\n with bits packed in little-endian order in each byte.\n repetitions: number of repetitions.\n key_sizes: Keys and sizes of the measurements in the data.\n\n Returns:\n Dict mapping measurement key to a 2D array of boolean results. 
Each\n array has shape (repetitions, size) with size for that measurement.\n \"\"\"\n bits_per_rep = sum(size for _, size in key_sizes)\n total_bits = repetitions * bits_per_rep\n\n byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1))\n bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1).astype(bool)\n bits = bits[:total_bits].reshape((repetitions, bits_per_rep))\n\n results = {}\n ofs = 0\n for key, size in key_sizes:\n results[key] = bits[:, ofs:ofs + size]\n ofs += size\n\n return results\n", "path": "cirq/google/programs.py"}]}
1908
154
gh_patches_debug_30606
rasdani/github-patches
git_diff
streamlink__streamlink-5444
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- plugins.lrt: stream is reported Forbidden (though plays if opened manually) ### Checklist - [X] This is a plugin issue and not a different kind of issue - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Latest stable release ### Description When trying to open https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija or https://www.lrt.lt/mediateka/tiesiogiai/lrt-plius, an error is reported (see the log below). However, if I try to manually pass the m3u8 URL mentioned in the error to `mpv`, like this (the URL taken from the log below, note the absence of the `%0A` at the end of it): mpv https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre ...then, after a few ffmpeg errors and warnings, it does open. The error started to appear a few days ago, worked perfectly before that (so, probably, they changed something at their side). Thanks. ### Debug log ```text [cli][debug] OS: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 [cli][debug] Python: 3.11.3 [cli][debug] Streamlink: 5.5.1 [cli][debug] Dependencies: [cli][debug] certifi: 2023.5.7 [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.9.2 [cli][debug] pycountry: 22.3.5 [cli][debug] pycryptodome: 3.18.0 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.31.0 [cli][debug] urllib3: 2.0.2 [cli][debug] websocket-client: 1.5.2 [cli][debug] Arguments: [cli][debug] url=https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija [cli][debug] --loglevel=debug [cli][info] Found matching plugin lrt for URL https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija [utils.l10n][debug] Language code: en_US error: Unable to open URL: https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre (403 Client Error: Forbidden for url: https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre%0A) ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/streamlink/plugins/lrt.py` Content: ``` 1 """ 2 $description Live TV channels from LRT, a Lithuanian public, state-owned broadcaster. 
3 $url lrt.lt 4 $type live 5 """ 6 7 import logging 8 import re 9 10 from streamlink.plugin import Plugin, pluginmatcher 11 from streamlink.stream.hls import HLSStream 12 13 14 log = logging.getLogger(__name__) 15 16 17 @pluginmatcher(re.compile( 18 r"https?://(?:www\.)?lrt\.lt/mediateka/tiesiogiai/", 19 )) 20 class LRT(Plugin): 21 _video_id_re = re.compile(r"""var\svideo_id\s*=\s*["'](?P<video_id>\w+)["']""") 22 API_URL = "https://www.lrt.lt/servisai/stream_url/live/get_live_url.php?channel={0}" 23 24 def _get_streams(self): 25 page = self.session.http.get(self.url) 26 m = self._video_id_re.search(page.text) 27 if m: 28 video_id = m.group("video_id") 29 data = self.session.http.get(self.API_URL.format(video_id)).json() 30 hls_url = data["response"]["data"]["content"] 31 32 yield from HLSStream.parse_variant_playlist(self.session, hls_url).items() 33 else: 34 log.debug("No match for video_id regex") 35 36 37 __plugin__ = LRT 38 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/streamlink/plugins/lrt.py b/src/streamlink/plugins/lrt.py --- a/src/streamlink/plugins/lrt.py +++ b/src/streamlink/plugins/lrt.py @@ -4,34 +4,42 @@ $type live """ -import logging import re from streamlink.plugin import Plugin, pluginmatcher +from streamlink.plugin.api import validate from streamlink.stream.hls import HLSStream -log = logging.getLogger(__name__) - - @pluginmatcher(re.compile( r"https?://(?:www\.)?lrt\.lt/mediateka/tiesiogiai/", )) class LRT(Plugin): - _video_id_re = re.compile(r"""var\svideo_id\s*=\s*["'](?P<video_id>\w+)["']""") - API_URL = "https://www.lrt.lt/servisai/stream_url/live/get_live_url.php?channel={0}" - def _get_streams(self): - page = self.session.http.get(self.url) - m = self._video_id_re.search(page.text) - if m: - video_id = m.group("video_id") - data = self.session.http.get(self.API_URL.format(video_id)).json() - hls_url = data["response"]["data"]["content"] - - yield from HLSStream.parse_variant_playlist(self.session, hls_url).items() - else: - log.debug("No match for video_id regex") + token_url = self.session.http.get(self.url, schema=validate.Schema( + re.compile(r"""var\s+tokenURL\s*=\s*(?P<q>["'])(?P<url>https://\S+)(?P=q)"""), + validate.none_or_all(validate.get("url")), + )) + if not token_url: + return + + hls_url = self.session.http.get(token_url, schema=validate.Schema( + validate.parse_json(), + { + "response": { + "data": { + "content": validate.all( + str, + validate.transform(lambda url: url.strip()), + validate.url(path=validate.endswith(".m3u8")), + ), + }, + }, + }, + validate.get(("response", "data", "content")), + )) + + return HLSStream.parse_variant_playlist(self.session, hls_url) __plugin__ = LRT
{"golden_diff": "diff --git a/src/streamlink/plugins/lrt.py b/src/streamlink/plugins/lrt.py\n--- a/src/streamlink/plugins/lrt.py\n+++ b/src/streamlink/plugins/lrt.py\n@@ -4,34 +4,42 @@\n $type live\n \"\"\"\n \n-import logging\n import re\n \n from streamlink.plugin import Plugin, pluginmatcher\n+from streamlink.plugin.api import validate\n from streamlink.stream.hls import HLSStream\n \n \n-log = logging.getLogger(__name__)\n-\n-\n @pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?lrt\\.lt/mediateka/tiesiogiai/\",\n ))\n class LRT(Plugin):\n- _video_id_re = re.compile(r\"\"\"var\\svideo_id\\s*=\\s*[\"'](?P<video_id>\\w+)[\"']\"\"\")\n- API_URL = \"https://www.lrt.lt/servisai/stream_url/live/get_live_url.php?channel={0}\"\n-\n def _get_streams(self):\n- page = self.session.http.get(self.url)\n- m = self._video_id_re.search(page.text)\n- if m:\n- video_id = m.group(\"video_id\")\n- data = self.session.http.get(self.API_URL.format(video_id)).json()\n- hls_url = data[\"response\"][\"data\"][\"content\"]\n-\n- yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()\n- else:\n- log.debug(\"No match for video_id regex\")\n+ token_url = self.session.http.get(self.url, schema=validate.Schema(\n+ re.compile(r\"\"\"var\\s+tokenURL\\s*=\\s*(?P<q>[\"'])(?P<url>https://\\S+)(?P=q)\"\"\"),\n+ validate.none_or_all(validate.get(\"url\")),\n+ ))\n+ if not token_url:\n+ return\n+\n+ hls_url = self.session.http.get(token_url, schema=validate.Schema(\n+ validate.parse_json(),\n+ {\n+ \"response\": {\n+ \"data\": {\n+ \"content\": validate.all(\n+ str,\n+ validate.transform(lambda url: url.strip()),\n+ validate.url(path=validate.endswith(\".m3u8\")),\n+ ),\n+ },\n+ },\n+ },\n+ validate.get((\"response\", \"data\", \"content\")),\n+ ))\n+\n+ return HLSStream.parse_variant_playlist(self.session, hls_url)\n \n \n __plugin__ = LRT\n", "issue": "plugins.lrt: stream is reported Forbidden (though plays if opened manually)\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nWhen trying to open https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija or https://www.lrt.lt/mediateka/tiesiogiai/lrt-plius, an error is reported (see the log below).\r\n\r\nHowever, if I try to manually pass the m3u8 URL mentioned in the error to `mpv`, like this (the URL taken from the log below, note the absence of the `%0A` at the end of it):\r\n\r\n mpv https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre\r\n\r\n...then, after a few ffmpeg errors and warnings, it does open.\r\n\r\nThe error started to appear a few days ago, worked perfectly before that (so, probably, they changed something at their side).\r\n\r\nThanks.\n\n### Debug log\n\n```text\n[cli][debug] OS: Linux-5.15.0-76-generic-x86_64-with-glibc2.35\r\n[cli][debug] Python: 3.11.3\r\n[cli][debug] Streamlink: 5.5.1\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.5.7\r\n[cli][debug] isodate: 
0.6.1\r\n[cli][debug] lxml: 4.9.2\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.18.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] urllib3: 2.0.2\r\n[cli][debug] websocket-client: 1.5.2\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin lrt for URL https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija\r\n[utils.l10n][debug] Language code: en_US\r\nerror: Unable to open URL: https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre\r\n (403 Client Error: Forbidden for url: https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre%0A)\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Live TV channels from LRT, a Lithuanian public, state-owned broadcaster.\n$url lrt.lt\n$type live\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?lrt\\.lt/mediateka/tiesiogiai/\",\n))\nclass LRT(Plugin):\n _video_id_re = re.compile(r\"\"\"var\\svideo_id\\s*=\\s*[\"'](?P<video_id>\\w+)[\"']\"\"\")\n API_URL = \"https://www.lrt.lt/servisai/stream_url/live/get_live_url.php?channel={0}\"\n\n def _get_streams(self):\n page = self.session.http.get(self.url)\n m = self._video_id_re.search(page.text)\n if m:\n video_id = m.group(\"video_id\")\n data = self.session.http.get(self.API_URL.format(video_id)).json()\n hls_url = data[\"response\"][\"data\"][\"content\"]\n\n yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()\n else:\n log.debug(\"No match for video_id regex\")\n\n\n__plugin__ = LRT\n", "path": "src/streamlink/plugins/lrt.py"}], "after_files": [{"content": "\"\"\"\n$description Live TV channels from LRT, a Lithuanian public, state-owned broadcaster.\n$url lrt.lt\n$type live\n\"\"\"\n\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?lrt\\.lt/mediateka/tiesiogiai/\",\n))\nclass LRT(Plugin):\n def _get_streams(self):\n token_url = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"\"\"var\\s+tokenURL\\s*=\\s*(?P<q>[\"'])(?P<url>https://\\S+)(?P=q)\"\"\"),\n validate.none_or_all(validate.get(\"url\")),\n ))\n if not token_url:\n return\n\n hls_url = self.session.http.get(token_url, schema=validate.Schema(\n validate.parse_json(),\n {\n \"response\": {\n \"data\": {\n \"content\": validate.all(\n str,\n validate.transform(lambda url: url.strip()),\n validate.url(path=validate.endswith(\".m3u8\")),\n ),\n },\n },\n },\n validate.get((\"response\", \"data\", \"content\")),\n ))\n\n return HLSStream.parse_variant_playlist(self.session, hls_url)\n\n\n__plugin__ = LRT\n", "path": "src/streamlink/plugins/lrt.py"}]}
1662
525
gh_patches_debug_16743
rasdani/github-patches
git_diff
ytdl-org__youtube-dl-20646
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [YourPorn] Domain changed to sxyprn.com YourPorn.sexy Domain changed to sxyprn.com which breaks the extractor and then falls back to the generic downloader which gets the thumbnail instead of video ``` youtube-dl --verbose 'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest' [debug] System config: [] [debug] Custom config: [] [debug] Command-line args: ['--verbose', 'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest'] [debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2019.04.07 [debug] Python version 3.7.3 (CPython) - Linux-5.0.7-arch1-1-ARCH-x86_64-with-arch-Arch-Linux [debug] Proxy map: {} [generic] 5cad9f8a26d51: Requesting header WARNING: Falling back on generic information extractor. [generic] 5cad9f8a26d51: Downloading webpage [generic] 5cad9f8a26d51: Extracting information [generic] playlist Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn: Collected 12 video ids (downloading 12 of them) [download] Downloading video 1 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s10.trafficdeposit.com//blog/vid/5a6534e03bbb4/5cab7d0bd2a31/vidthumb.mp4' 1-1.mp4 [download] 100% of 249.05KiB in 00:00 [download] Downloading video 2 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s18.trafficdeposit.com//blog/vid/5c7aa58c8c195/5caa34cd5927b/vidthumb.mp4' 1-2.mp4 [download] Downloading video 3 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s14.trafficdeposit.com//blog/vid/5ba53b584947a/5cacd0a1ad653/vidthumb.mp4' 1-3.mp4 [download] 100% of 123.50KiB in 00:00 [download] Downloading video 4 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s19.trafficdeposit.com//blog/vid/5ab2a85d635c4/5ca9a3d3675fb/vidthumb.mp4' 1-4.mp4 [download] 100% of 172.44KiB in 00:00 [download] Downloading video 5 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s19.trafficdeposit.com//blog/vid/5ba53b584947a/5cab801a05f9b/vidthumb.mp4' 1-5.mp4 [download] 100% of 118.19KiB in 00:00 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s4.trafficdeposit.com//blog/vid/5b68a7a3c3d95/5cad071036ebd/vidthumb.mp4' [download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (6)-5cad9f8a26d51-6.mp4 [download] 100% of 135.05KiB in 00:00 [download] Downloading video 7 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s20.trafficdeposit.com//blog/vid/59b613729e694/5cada7480fcec/vidthumb.mp4' [download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (7)-5cad9f8a26d51-7.mp4 [download] 100% of 176.89KiB in 00:00 [download] Downloading video 8 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s8.trafficdeposit.com//blog/vid/5ab2a85d635c4/5cad9dce1958a/vidthumb.mp4' [download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (8)-5cad9f8a26d51-8.mp4 [download] 100% of 155.98KiB in 00:00 [download] 
Downloading video 9 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s15.trafficdeposit.com//blog/vid/5ab2a85d635c4/5cad966a5596a/vidthumb.mp4' [download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (9)-5cad9f8a26d51-9.mp4 [download] 100% of 380.13KiB in 00:01 [download] Downloading video 10 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s13.trafficdeposit.com//blog/vid/5ba53b584947a/5cad35c5ce1e4/vidthumb.mp4' [download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (10)-5cad9f8a26d51-10.mp4 [download] 100% of 254.63KiB in 00:00 [download] Downloading video 11 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s4.trafficdeposit.com//blog/vid/5ba53b584947a/5cad2dd3ee706/vidthumb.mp4' [download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (11)-5cad9f8a26d51-11.mp4 [download] 100% of 272.29KiB in 00:00 [download] Downloading video 12 of 12 [debug] Default format spec: bestvideo+bestaudio/best [debug] Invoking downloader on 'http://s18.trafficdeposit.com//blog/vid/899334366634979328/5cad288db659a/vidthumb.mp4' [download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (12)-5cad9f8a26d51-12.mp4 [download] 100% of 279.43KiB in 00:00 [download] Finished downloading playlist: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `youtube_dl/extractor/yourporn.py` Content: ``` 1 from __future__ import unicode_literals 2 3 from .common import InfoExtractor 4 from ..utils import ( 5 parse_duration, 6 urljoin, 7 ) 8 9 10 class YourPornIE(InfoExtractor): 11 _VALID_URL = r'https?://(?:www\.)?yourporn\.sexy/post/(?P<id>[^/?#&.]+)' 12 _TEST = { 13 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html', 14 'md5': '6f8682b6464033d87acaa7a8ff0c092e', 15 'info_dict': { 16 'id': '57ffcb2e1179b', 17 'ext': 'mp4', 18 'title': 'md5:c9f43630bd968267672651ba905a7d35', 19 'thumbnail': r're:^https?://.*\.jpg$', 20 'duration': 165, 21 'age_limit': 18, 22 }, 23 'params': { 24 'skip_download': True, 25 }, 26 } 27 28 def _real_extract(self, url): 29 video_id = self._match_id(url) 30 31 webpage = self._download_webpage(url, video_id) 32 33 video_url = urljoin(url, self._parse_json( 34 self._search_regex( 35 r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info', 36 group='data'), 37 video_id)[video_id]).replace('/cdn/', '/cdn4/') 38 39 title = (self._search_regex( 40 r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title', 41 default=None) or self._og_search_description(webpage)).strip() 42 thumbnail = self._og_search_thumbnail(webpage) 43 duration = parse_duration(self._search_regex( 44 r'duration\s*:\s*<[^>]+>([\d:]+)', webpage, 'duration', 45 default=None)) 46 47 return { 48 'id': video_id, 49 'url': video_url, 50 'title': title, 51 'thumbnail': thumbnail, 52 'duration': duration, 53 'age_limit': 18, 54 } 55 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py --- a/youtube_dl/extractor/yourporn.py +++ b/youtube_dl/extractor/yourporn.py @@ -8,8 +8,8 @@ class YourPornIE(InfoExtractor): - _VALID_URL = r'https?://(?:www\.)?yourporn\.sexy/post/(?P<id>[^/?#&.]+)' - _TEST = { + _VALID_URL = r'https?://(?:www\.)?(?:yourporn\.sexy|sxyprn\.com)/post/(?P<id>[^/?#&.]+)' + _TESTS = [{ 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html', 'md5': '6f8682b6464033d87acaa7a8ff0c092e', 'info_dict': { @@ -23,7 +23,10 @@ 'params': { 'skip_download': True, }, - } + }, { + 'url': 'https://sxyprn.com/post/57ffcb2e1179b.html', + 'only_matching': True, + }] def _real_extract(self, url): video_id = self._match_id(url)
{"golden_diff": "diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py\n--- a/youtube_dl/extractor/yourporn.py\n+++ b/youtube_dl/extractor/yourporn.py\n@@ -8,8 +8,8 @@\n \n \n class YourPornIE(InfoExtractor):\n- _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n- _TEST = {\n+ _VALID_URL = r'https?://(?:www\\.)?(?:yourporn\\.sexy|sxyprn\\.com)/post/(?P<id>[^/?#&.]+)'\n+ _TESTS = [{\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n@@ -23,7 +23,10 @@\n 'params': {\n 'skip_download': True,\n },\n- }\n+ }, {\n+ 'url': 'https://sxyprn.com/post/57ffcb2e1179b.html',\n+ 'only_matching': True,\n+ }]\n \n def _real_extract(self, url):\n video_id = self._match_id(url)\n", "issue": "[YourPorn] Domain changed to sxyprn.com\nYourPorn.sexy Domain changed to sxyprn.com which breaks the extractor and then falls back to the generic downloader which gets the thumbnail instead of video \r\n```\r\nyoutube-dl --verbose 'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest' \r\n[debug] System config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['--verbose', 'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2019.04.07\r\n[debug] Python version 3.7.3 (CPython) - Linux-5.0.7-arch1-1-ARCH-x86_64-with-arch-Arch-Linux\r\n[debug] Proxy map: {}\r\n[generic] 5cad9f8a26d51: Requesting header\r\nWARNING: Falling back on generic information extractor.\r\n[generic] 5cad9f8a26d51: Downloading webpage\r\n[generic] 5cad9f8a26d51: Extracting information\r\n[generic] playlist Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn: Collected 12 video ids (downloading 12 of them)\r\n[download] Downloading video 1 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s10.trafficdeposit.com//blog/vid/5a6534e03bbb4/5cab7d0bd2a31/vidthumb.mp4'\r\n1-1.mp4\r\n[download] 100% of 249.05KiB in 00:00\r\n[download] Downloading video 2 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s18.trafficdeposit.com//blog/vid/5c7aa58c8c195/5caa34cd5927b/vidthumb.mp4'\r\n1-2.mp4\r\n[download] Downloading video 3 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s14.trafficdeposit.com//blog/vid/5ba53b584947a/5cacd0a1ad653/vidthumb.mp4'\r\n1-3.mp4\r\n[download] 100% of 123.50KiB in 00:00\r\n[download] Downloading video 4 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s19.trafficdeposit.com//blog/vid/5ab2a85d635c4/5ca9a3d3675fb/vidthumb.mp4'\r\n1-4.mp4\r\n[download] 100% of 172.44KiB in 00:00\r\n[download] Downloading video 5 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s19.trafficdeposit.com//blog/vid/5ba53b584947a/5cab801a05f9b/vidthumb.mp4'\r\n1-5.mp4\r\n[download] 100% of 118.19KiB in 00:00\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s4.trafficdeposit.com//blog/vid/5b68a7a3c3d95/5cad071036ebd/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (6)-5cad9f8a26d51-6.mp4\r\n[download] 100% of 135.05KiB in 
00:00\r\n[download] Downloading video 7 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s20.trafficdeposit.com//blog/vid/59b613729e694/5cada7480fcec/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (7)-5cad9f8a26d51-7.mp4\r\n[download] 100% of 176.89KiB in 00:00\r\n[download] Downloading video 8 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s8.trafficdeposit.com//blog/vid/5ab2a85d635c4/5cad9dce1958a/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (8)-5cad9f8a26d51-8.mp4\r\n[download] 100% of 155.98KiB in 00:00\r\n[download] Downloading video 9 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s15.trafficdeposit.com//blog/vid/5ab2a85d635c4/5cad966a5596a/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (9)-5cad9f8a26d51-9.mp4\r\n[download] 100% of 380.13KiB in 00:01\r\n[download] Downloading video 10 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s13.trafficdeposit.com//blog/vid/5ba53b584947a/5cad35c5ce1e4/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (10)-5cad9f8a26d51-10.mp4\r\n[download] 100% of 254.63KiB in 00:00\r\n[download] Downloading video 11 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s4.trafficdeposit.com//blog/vid/5ba53b584947a/5cad2dd3ee706/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (11)-5cad9f8a26d51-11.mp4\r\n[download] 100% of 272.29KiB in 00:00\r\n[download] Downloading video 12 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s18.trafficdeposit.com//blog/vid/899334366634979328/5cad288db659a/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (12)-5cad9f8a26d51-12.mp4\r\n[download] 100% of 279.43KiB in 00:00\r\n[download] Finished downloading playlist: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn\r\n```\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n parse_duration,\n urljoin,\n)\n\n\nclass YourPornIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n _TEST = {\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n 'id': '57ffcb2e1179b',\n 'ext': 'mp4',\n 'title': 'md5:c9f43630bd968267672651ba905a7d35',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'duration': 165,\n 'age_limit': 18,\n },\n 'params': {\n 'skip_download': True,\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = urljoin(url, self._parse_json(\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n 
group='data'),\n video_id)[video_id]).replace('/cdn/', '/cdn4/')\n\n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n default=None) or self._og_search_description(webpage)).strip()\n thumbnail = self._og_search_thumbnail(webpage)\n duration = parse_duration(self._search_regex(\n r'duration\\s*:\\s*<[^>]+>([\\d:]+)', webpage, 'duration',\n default=None))\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'title': title,\n 'thumbnail': thumbnail,\n 'duration': duration,\n 'age_limit': 18,\n }\n", "path": "youtube_dl/extractor/yourporn.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n parse_duration,\n urljoin,\n)\n\n\nclass YourPornIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?(?:yourporn\\.sexy|sxyprn\\.com)/post/(?P<id>[^/?#&.]+)'\n _TESTS = [{\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n 'id': '57ffcb2e1179b',\n 'ext': 'mp4',\n 'title': 'md5:c9f43630bd968267672651ba905a7d35',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'duration': 165,\n 'age_limit': 18,\n },\n 'params': {\n 'skip_download': True,\n },\n }, {\n 'url': 'https://sxyprn.com/post/57ffcb2e1179b.html',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = urljoin(url, self._parse_json(\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n video_id)[video_id]).replace('/cdn/', '/cdn4/')\n\n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n default=None) or self._og_search_description(webpage)).strip()\n thumbnail = self._og_search_thumbnail(webpage)\n duration = parse_duration(self._search_regex(\n r'duration\\s*:\\s*<[^>]+>([\\d:]+)', webpage, 'duration',\n default=None))\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'title': title,\n 'thumbnail': thumbnail,\n 'duration': duration,\n 'age_limit': 18,\n }\n", "path": "youtube_dl/extractor/yourporn.py"}]}
2,950
318
gh_patches_debug_56707
rasdani/github-patches
git_diff
openshift__openshift-ansible-2630
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- maximum recursion depth exceeded -- related to callback/default.py Running the `ansible-playbook -b --become-user root -i ansible-ose-inventory /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml` i am getting ``` statically included: /usr/share/ansible/openshift-ansible/roles/openshift_hosted/tasks/registry/registry.yml statically included: /usr/share/ansible/openshift-ansible/roles/openshift_metrics/tasks/install.yml ERROR! Unexpected Exception: maximum recursion depth exceeded while calling a Python object the full traceback was: Traceback (most recent call last): File "/bin/ansible-playbook", line 103, in <module> exit_code = cli.run() File "/usr/lib/python2.7/site-packages/ansible/cli/playbook.py", line 159, in run results = pbex.run() File "/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 89, in run self._tqm.load_callbacks() File "/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 172, in load_callbacks self._stdout_callback = callback_loader.get(self._stdout_callback) File "/usr/lib/python2.7/site-packages/ansible/plugins/__init__.py", line 358, in get obj = obj(*args, **kwargs) File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__ super(CallbackModule, self).__init__() ... super(CallbackModule, self).__init__() File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__ super(CallbackModule, self).__init__() File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__ super(CallbackModule, self).__init__() RuntimeError: maximum recursion depth exceeded while calling a Python object ``` ##### Version ``` atomic-openshift-utils-3.3.37-1.git.0.10ff25b.el7.noarch openshift-ansible-3.3.37-1.git.0.10ff25b.el7.noarch ``` The playbooks are installed from AtomicOpenShift/3.3/2016-10-18.2 The 3.4 has same problem. 3.2 Doesn't ``` openshift-ansible.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle openshift-ansible-callback-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle openshift-ansible-docs.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle openshift-ansible-filter-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle openshift-ansible-lookup-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle openshift-ansible-playbooks.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle openshift-ansible-roles.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle ansible-playbook 2.2.0.0 config file = /root/ansible.cfg configured module search path = Default w/o overrides ``` ##### Steps To Reproduce In description ##### Current Result Infinite recursion with ansible 2.2.0.0 No problem with ansible 2.1.2.0 The difference seems to be that the 2.1.2.0 do not have the `__init__` in the ``` /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py ``` ``` class CallbackModule(CallbackBase): ... def __init__(self): self._play = None self._last_task_banner = None super(CallbackModule, self).__init__() ``` If I remove it from the same file on the old ansible, deployment seems to work. Though I have no idea why it get's to the infinite recursion. It doesn't make sense to me. 
##### Expected Result No problems with the infinite recursion ##### Additional Information Red Hat Enterprise Linux Server release 7.2 (Maipo) The inventory file ``` [OSEv3:children] masters nodes [OSEv3:vars] deployment_type=openshift-enterprise ansible_ssh_user=cloud-user ansible_sudo=true ansible_sudo_user=root openshift_use_manageiq=True #use_cluster_metrics=true openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://download.xxx.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift-errata/3.3/latest/RH7-RHAOS-3.3/x86_64/os/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}, {'id':'rhel-extras-candidate','name':'rhel-extras-candidate','baseurl':'http://download.xxx..redhat.com/brewroot/repos/extras-rhel-7.2-candidate/latest/x86_64/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}] openshift_docker_additional_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888 openshift_docker_insecure_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888 [masters] ose3-master-08w85 openshift_scheduleable=True openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85 [nodes] ose3-master-08w85 openshift_node_labels="{'region':'infra','zone':'default'}" openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85 ose3-node0-08w85 openshift_node_labels="{'region':'primary','zone':'east'}" openshift_hostname=ose3-node0-08w85 openshift_public_hostname=ose3-node0-08w85 ose3-node1-08w85 openshift_node_labels="{'region':'primary','zone':'west'}" openshift_hostname=ose3-node1-08w85 openshift_public_hostname=ose3-node1-08w85 ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `callback_plugins/default.py` Content: ``` 1 '''Plugin to override the default output logic.''' 2 3 # upstream: https://gist.github.com/cliffano/9868180 4 5 # This program is free software: you can redistribute it and/or modify 6 # it under the terms of the GNU General Public License as published by 7 # the Free Software Foundation, either version 3 of the License, or 8 # (at your option) any later version. 9 # 10 # This program is distributed in the hope that it will be useful, 11 # but WITHOUT ANY WARRANTY; without even the implied warranty of 12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 # GNU General Public License for more details. 14 # 15 # You should have received a copy of the GNU General Public License 16 # along with this program. If not, see <http://www.gnu.org/licenses/>. 17 18 19 # For some reason this has to be done 20 import imp 21 import os 22 23 ANSIBLE_PATH = imp.find_module('ansible')[1] 24 DEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py') 25 DEFAULT_MODULE = imp.load_source( 26 'ansible.plugins.callback.default', 27 DEFAULT_PATH 28 ) 29 30 try: 31 from ansible.plugins.callback import CallbackBase 32 BASECLASS = CallbackBase 33 except ImportError: # < ansible 2.1 34 BASECLASS = DEFAULT_MODULE.CallbackModule 35 36 37 class CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init 38 ''' 39 Override for the default callback module. 40 41 Render std err/out outside of the rest of the result which it prints with 42 indentation. 
43 ''' 44 CALLBACK_VERSION = 2.0 45 CALLBACK_TYPE = 'stdout' 46 CALLBACK_NAME = 'default' 47 48 def _dump_results(self, result): 49 '''Return the text to output for a result.''' 50 result['_ansible_verbose_always'] = True 51 52 save = {} 53 for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']: 54 if key in result: 55 save[key] = result.pop(key) 56 57 output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access 58 59 for key in ['stdout', 'stderr', 'msg']: 60 if key in save and save[key]: 61 output += '\n\n%s:\n\n%s\n' % (key.upper(), save[key]) 62 63 for key, value in save.items(): 64 result[key] = value 65 66 return output 67 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/callback_plugins/default.py b/callback_plugins/default.py --- a/callback_plugins/default.py +++ b/callback_plugins/default.py @@ -45,6 +45,9 @@ CALLBACK_TYPE = 'stdout' CALLBACK_NAME = 'default' + def __init__(self, *args, **kwargs): + BASECLASS.__init__(self, *args, **kwargs) + def _dump_results(self, result): '''Return the text to output for a result.''' result['_ansible_verbose_always'] = True
{"golden_diff": "diff --git a/callback_plugins/default.py b/callback_plugins/default.py\n--- a/callback_plugins/default.py\n+++ b/callback_plugins/default.py\n@@ -45,6 +45,9 @@\n CALLBACK_TYPE = 'stdout'\n CALLBACK_NAME = 'default'\n \n+ def __init__(self, *args, **kwargs):\n+ BASECLASS.__init__(self, *args, **kwargs)\n+\n def _dump_results(self, result):\n '''Return the text to output for a result.'''\n result['_ansible_verbose_always'] = True\n", "issue": "maximum recursion depth exceeded -- related to callback/default.py\nRunning the `ansible-playbook -b --become-user root -i ansible-ose-inventory /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml`\n\ni am getting\n\n```\nstatically included: /usr/share/ansible/openshift-ansible/roles/openshift_hosted/tasks/registry/registry.yml\nstatically included: /usr/share/ansible/openshift-ansible/roles/openshift_metrics/tasks/install.yml\nERROR! Unexpected Exception: maximum recursion depth exceeded while calling a Python object\nthe full traceback was:\n\nTraceback (most recent call last):\n File \"/bin/ansible-playbook\", line 103, in <module>\n exit_code = cli.run()\n File \"/usr/lib/python2.7/site-packages/ansible/cli/playbook.py\", line 159, in run\n results = pbex.run()\n File \"/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py\", line 89, in run\n self._tqm.load_callbacks()\n File \"/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py\", line 172, in load_callbacks\n self._stdout_callback = callback_loader.get(self._stdout_callback)\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/__init__.py\", line 358, in get\n obj = obj(*args, **kwargs)\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\n...\n super(CallbackModule, self).__init__()\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\nRuntimeError: maximum recursion depth exceeded while calling a Python object\n```\n##### Version\n\n```\natomic-openshift-utils-3.3.37-1.git.0.10ff25b.el7.noarch\nopenshift-ansible-3.3.37-1.git.0.10ff25b.el7.noarch\n```\n\nThe playbooks are installed from AtomicOpenShift/3.3/2016-10-18.2\nThe 3.4 has same problem. 
3.2 Doesn't\n\n```\nopenshift-ansible.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-callback-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-docs.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-filter-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-lookup-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-playbooks.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-roles.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\n\nansible-playbook 2.2.0.0\n config file = /root/ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### Steps To Reproduce\n\nIn description\n##### Current Result\n\nInfinite recursion with ansible 2.2.0.0\nNo problem with ansible 2.1.2.0\n\n The difference seems to be that the 2.1.2.0 do not have the `__init__` in the\n\n```\n /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\n```\n\n```\nclass CallbackModule(CallbackBase):\n...\n def __init__(self):\n\n self._play = None\n self._last_task_banner = None\n super(CallbackModule, self).__init__()\n```\n\nIf I remove it from the same file on the old ansible, deployment seems\nto work. Though I have no idea why it get's to the infinite recursion.\nIt doesn't make sense to me.\n##### Expected Result\n\nNo problems with the infinite recursion\n##### Additional Information\n\nRed Hat Enterprise Linux Server release 7.2 (Maipo)\n\nThe inventory file\n\n```\n[OSEv3:children]\nmasters\nnodes\n\n[OSEv3:vars]\ndeployment_type=openshift-enterprise\nansible_ssh_user=cloud-user\nansible_sudo=true\nansible_sudo_user=root\nopenshift_use_manageiq=True\n#use_cluster_metrics=true\n\nopenshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://download.xxx.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift-errata/3.3/latest/RH7-RHAOS-3.3/x86_64/os/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}, {'id':'rhel-extras-candidate','name':'rhel-extras-candidate','baseurl':'http://download.xxx..redhat.com/brewroot/repos/extras-rhel-7.2-candidate/latest/x86_64/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}]\nopenshift_docker_additional_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888\nopenshift_docker_insecure_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888\n\n[masters]\nose3-master-08w85 openshift_scheduleable=True openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85\n\n[nodes]\nose3-master-08w85 openshift_node_labels=\"{'region':'infra','zone':'default'}\" openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85\n\nose3-node0-08w85 openshift_node_labels=\"{'region':'primary','zone':'east'}\" openshift_hostname=ose3-node0-08w85 openshift_public_hostname=ose3-node0-08w85\nose3-node1-08w85 openshift_node_labels=\"{'region':'primary','zone':'west'}\" openshift_hostname=ose3-node1-08w85 openshift_public_hostname=ose3-node1-08w85\n```\n\n", "before_files": [{"content": "'''Plugin to override the default output logic.'''\n\n# upstream: https://gist.github.com/cliffano/9868180\n\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed 
in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\n# For some reason this has to be done\nimport imp\nimport os\n\nANSIBLE_PATH = imp.find_module('ansible')[1]\nDEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py')\nDEFAULT_MODULE = imp.load_source(\n 'ansible.plugins.callback.default',\n DEFAULT_PATH\n)\n\ntry:\n from ansible.plugins.callback import CallbackBase\n BASECLASS = CallbackBase\nexcept ImportError: # < ansible 2.1\n BASECLASS = DEFAULT_MODULE.CallbackModule\n\n\nclass CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init\n '''\n Override for the default callback module.\n\n Render std err/out outside of the rest of the result which it prints with\n indentation.\n '''\n CALLBACK_VERSION = 2.0\n CALLBACK_TYPE = 'stdout'\n CALLBACK_NAME = 'default'\n\n def _dump_results(self, result):\n '''Return the text to output for a result.'''\n result['_ansible_verbose_always'] = True\n\n save = {}\n for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']:\n if key in result:\n save[key] = result.pop(key)\n\n output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access\n\n for key in ['stdout', 'stderr', 'msg']:\n if key in save and save[key]:\n output += '\\n\\n%s:\\n\\n%s\\n' % (key.upper(), save[key])\n\n for key, value in save.items():\n result[key] = value\n\n return output\n", "path": "callback_plugins/default.py"}], "after_files": [{"content": "'''Plugin to override the default output logic.'''\n\n# upstream: https://gist.github.com/cliffano/9868180\n\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n\n\n# For some reason this has to be done\nimport imp\nimport os\n\nANSIBLE_PATH = imp.find_module('ansible')[1]\nDEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py')\nDEFAULT_MODULE = imp.load_source(\n 'ansible.plugins.callback.default',\n DEFAULT_PATH\n)\n\ntry:\n from ansible.plugins.callback import CallbackBase\n BASECLASS = CallbackBase\nexcept ImportError: # < ansible 2.1\n BASECLASS = DEFAULT_MODULE.CallbackModule\n\n\nclass CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init\n '''\n Override for the default callback module.\n\n Render std err/out outside of the rest of the result which it prints with\n indentation.\n '''\n CALLBACK_VERSION = 2.0\n CALLBACK_TYPE = 'stdout'\n CALLBACK_NAME = 'default'\n\n def __init__(self, *args, **kwargs):\n BASECLASS.__init__(self, *args, **kwargs)\n\n def _dump_results(self, result):\n '''Return the text to output for a result.'''\n result['_ansible_verbose_always'] = True\n\n save = {}\n for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']:\n if key in result:\n save[key] = result.pop(key)\n\n output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access\n\n for key in ['stdout', 'stderr', 'msg']:\n if key in save and save[key]:\n output += '\\n\\n%s:\\n\\n%s\\n' % (key.upper(), save[key])\n\n for key, value in save.items():\n result[key] = value\n\n return output\n", "path": "callback_plugins/default.py"}]}
2,494
116
gh_patches_debug_39191
rasdani/github-patches
git_diff
wandb__wandb-516
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- use six for configparser for py2 compat --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `wandb/settings.py` Content: ``` 1 import os 2 import configparser 3 4 import wandb.util as util 5 from wandb import core, env, wandb_dir 6 7 8 class Settings(object): 9 """Global W&B settings stored under $WANDB_CONFIG_DIR/settings. 10 """ 11 12 DEFAULT_SECTION = "default" 13 14 def __init__(self, load_settings=True): 15 config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser("~"), ".config", "wandb")) 16 17 # Ensure the config directory and settings file both exist. 18 util.mkdir_exists_ok(config_dir) 19 util.mkdir_exists_ok(wandb_dir()) 20 21 self._global_settings_path = os.path.join(config_dir, 'settings') 22 self._global_settings = Settings._settings_wth_defaults({}) 23 24 self._local_settings_path = os.path.join(wandb_dir(), 'settings') 25 self._local_settings = Settings._settings_wth_defaults({}) 26 27 if load_settings: 28 self._global_settings.read([self._global_settings_path]) 29 self._local_settings.read([self._local_settings_path]) 30 31 def get(self, section, key, fallback=configparser._UNSET): 32 # Try the local settings first. If we can't find the key, then try the global settings. 33 # If a fallback is provided, return it if we can't find the key in either the local or global 34 # settings. 35 try: 36 return self._local_settings.get(section, key) 37 except configparser.NoOptionError: 38 return self._global_settings.get(section, key, fallback=fallback) 39 40 def set(self, section, key, value, globally=False): 41 def write_setting(settings, settings_path): 42 if not settings.has_section(section): 43 settings.add_section(section) 44 settings.set(section, key, str(value)) 45 with open(settings_path, "w+") as f: 46 settings.write(f) 47 48 if globally: 49 write_setting(self._global_settings, self._global_settings_path) 50 else: 51 write_setting(self._local_settings, self._local_settings_path) 52 53 def clear(self, section, key, globally=False): 54 def clear_setting(settings, settings_path): 55 settings.remove_option(section, key) 56 with open(settings_path, "w+") as f: 57 settings.write(f) 58 59 if globally: 60 clear_setting(self._global_settings, self._global_settings_path) 61 else: 62 clear_setting(self._local_settings, self._local_settings_path) 63 64 def items(self, section=None): 65 section = section if section is not None else Settings.DEFAULT_SECTION 66 67 result = {'section': section} 68 69 try: 70 if section in self._global_settings.sections(): 71 for option in self._global_settings.options(section): 72 result[option] = self._global_settings.get(section, option) 73 if section in self._local_settings.sections(): 74 for option in self._local_settings.options(section): 75 result[option] = self._local_settings.get(section, option) 76 except configparser.InterpolationSyntaxError: 77 core.termwarn("Unable to parse settings file") 78 79 return result 80 81 @staticmethod 82 def _settings_wth_defaults(default_settings): 83 config = configparser.ConfigParser() 84 config.add_section(Settings.DEFAULT_SECTION) 85 for key, value in default_settings.items(): 86 config.set(Settings.DEFAULT_SECTION, key, str(value)) 87 return config 88 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format 
fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/wandb/settings.py b/wandb/settings.py --- a/wandb/settings.py +++ b/wandb/settings.py @@ -1,5 +1,6 @@ import os -import configparser + +from six.moves import configparser import wandb.util as util from wandb import core, env, wandb_dir @@ -9,7 +10,9 @@ """Global W&B settings stored under $WANDB_CONFIG_DIR/settings. """ - DEFAULT_SECTION = "default" + DEFAULT_SECTION = "client" + + _UNSET = object() def __init__(self, load_settings=True): config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser("~"), ".config", "wandb")) @@ -19,23 +22,29 @@ util.mkdir_exists_ok(wandb_dir()) self._global_settings_path = os.path.join(config_dir, 'settings') - self._global_settings = Settings._settings_wth_defaults({}) + self._global_settings = Settings._settings() self._local_settings_path = os.path.join(wandb_dir(), 'settings') - self._local_settings = Settings._settings_wth_defaults({}) + self._local_settings = Settings._settings() if load_settings: self._global_settings.read([self._global_settings_path]) self._local_settings.read([self._local_settings_path]) - def get(self, section, key, fallback=configparser._UNSET): + def get(self, section, key, fallback=_UNSET): # Try the local settings first. If we can't find the key, then try the global settings. # If a fallback is provided, return it if we can't find the key in either the local or global # settings. try: return self._local_settings.get(section, key) except configparser.NoOptionError: - return self._global_settings.get(section, key, fallback=fallback) + try: + return self._global_settings.get(section, key) + except configparser.NoOptionError: + if fallback is not Settings._UNSET: + return fallback + else: + raise def set(self, section, key, value, globally=False): def write_setting(settings, settings_path): @@ -79,7 +88,7 @@ return result @staticmethod - def _settings_wth_defaults(default_settings): + def _settings(default_settings={}): config = configparser.ConfigParser() config.add_section(Settings.DEFAULT_SECTION) for key, value in default_settings.items():
{"golden_diff": "diff --git a/wandb/settings.py b/wandb/settings.py\n--- a/wandb/settings.py\n+++ b/wandb/settings.py\n@@ -1,5 +1,6 @@\n import os\n-import configparser\n+\n+from six.moves import configparser\n \n import wandb.util as util\n from wandb import core, env, wandb_dir\n@@ -9,7 +10,9 @@\n \"\"\"Global W&B settings stored under $WANDB_CONFIG_DIR/settings.\n \"\"\"\n \n- DEFAULT_SECTION = \"default\"\n+ DEFAULT_SECTION = \"client\"\n+\n+ _UNSET = object()\n \n def __init__(self, load_settings=True):\n config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser(\"~\"), \".config\", \"wandb\"))\n@@ -19,23 +22,29 @@\n util.mkdir_exists_ok(wandb_dir())\n \n self._global_settings_path = os.path.join(config_dir, 'settings')\n- self._global_settings = Settings._settings_wth_defaults({})\n+ self._global_settings = Settings._settings()\n \n self._local_settings_path = os.path.join(wandb_dir(), 'settings')\n- self._local_settings = Settings._settings_wth_defaults({})\n+ self._local_settings = Settings._settings()\n \n if load_settings:\n self._global_settings.read([self._global_settings_path])\n self._local_settings.read([self._local_settings_path])\n \n- def get(self, section, key, fallback=configparser._UNSET):\n+ def get(self, section, key, fallback=_UNSET):\n # Try the local settings first. If we can't find the key, then try the global settings.\n # If a fallback is provided, return it if we can't find the key in either the local or global\n # settings.\n try:\n return self._local_settings.get(section, key)\n except configparser.NoOptionError:\n- return self._global_settings.get(section, key, fallback=fallback)\n+ try:\n+ return self._global_settings.get(section, key)\n+ except configparser.NoOptionError:\n+ if fallback is not Settings._UNSET:\n+ return fallback\n+ else:\n+ raise\n \n def set(self, section, key, value, globally=False):\n def write_setting(settings, settings_path):\n@@ -79,7 +88,7 @@\n return result\n \n @staticmethod\n- def _settings_wth_defaults(default_settings):\n+ def _settings(default_settings={}):\n config = configparser.ConfigParser()\n config.add_section(Settings.DEFAULT_SECTION)\n for key, value in default_settings.items():\n", "issue": "use six for configparser for py2 compat\n\n", "before_files": [{"content": "import os\nimport configparser\n\nimport wandb.util as util\nfrom wandb import core, env, wandb_dir\n\n\nclass Settings(object):\n \"\"\"Global W&B settings stored under $WANDB_CONFIG_DIR/settings.\n \"\"\"\n\n DEFAULT_SECTION = \"default\"\n\n def __init__(self, load_settings=True):\n config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser(\"~\"), \".config\", \"wandb\"))\n\n # Ensure the config directory and settings file both exist.\n util.mkdir_exists_ok(config_dir)\n util.mkdir_exists_ok(wandb_dir())\n\n self._global_settings_path = os.path.join(config_dir, 'settings')\n self._global_settings = Settings._settings_wth_defaults({})\n\n self._local_settings_path = os.path.join(wandb_dir(), 'settings')\n self._local_settings = Settings._settings_wth_defaults({})\n\n if load_settings:\n self._global_settings.read([self._global_settings_path])\n self._local_settings.read([self._local_settings_path])\n\n def get(self, section, key, fallback=configparser._UNSET):\n # Try the local settings first. 
If we can't find the key, then try the global settings.\n # If a fallback is provided, return it if we can't find the key in either the local or global\n # settings.\n try:\n return self._local_settings.get(section, key)\n except configparser.NoOptionError:\n return self._global_settings.get(section, key, fallback=fallback)\n\n def set(self, section, key, value, globally=False):\n def write_setting(settings, settings_path):\n if not settings.has_section(section):\n settings.add_section(section)\n settings.set(section, key, str(value))\n with open(settings_path, \"w+\") as f:\n settings.write(f)\n\n if globally:\n write_setting(self._global_settings, self._global_settings_path)\n else:\n write_setting(self._local_settings, self._local_settings_path)\n\n def clear(self, section, key, globally=False):\n def clear_setting(settings, settings_path):\n settings.remove_option(section, key)\n with open(settings_path, \"w+\") as f:\n settings.write(f)\n\n if globally:\n clear_setting(self._global_settings, self._global_settings_path)\n else:\n clear_setting(self._local_settings, self._local_settings_path)\n\n def items(self, section=None):\n section = section if section is not None else Settings.DEFAULT_SECTION\n\n result = {'section': section}\n\n try:\n if section in self._global_settings.sections():\n for option in self._global_settings.options(section):\n result[option] = self._global_settings.get(section, option)\n if section in self._local_settings.sections():\n for option in self._local_settings.options(section):\n result[option] = self._local_settings.get(section, option)\n except configparser.InterpolationSyntaxError:\n core.termwarn(\"Unable to parse settings file\")\n\n return result\n\n @staticmethod\n def _settings_wth_defaults(default_settings):\n config = configparser.ConfigParser()\n config.add_section(Settings.DEFAULT_SECTION)\n for key, value in default_settings.items():\n config.set(Settings.DEFAULT_SECTION, key, str(value))\n return config\n", "path": "wandb/settings.py"}], "after_files": [{"content": "import os\n\nfrom six.moves import configparser\n\nimport wandb.util as util\nfrom wandb import core, env, wandb_dir\n\n\nclass Settings(object):\n \"\"\"Global W&B settings stored under $WANDB_CONFIG_DIR/settings.\n \"\"\"\n\n DEFAULT_SECTION = \"client\"\n\n _UNSET = object()\n\n def __init__(self, load_settings=True):\n config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser(\"~\"), \".config\", \"wandb\"))\n\n # Ensure the config directory and settings file both exist.\n util.mkdir_exists_ok(config_dir)\n util.mkdir_exists_ok(wandb_dir())\n\n self._global_settings_path = os.path.join(config_dir, 'settings')\n self._global_settings = Settings._settings()\n\n self._local_settings_path = os.path.join(wandb_dir(), 'settings')\n self._local_settings = Settings._settings()\n\n if load_settings:\n self._global_settings.read([self._global_settings_path])\n self._local_settings.read([self._local_settings_path])\n\n def get(self, section, key, fallback=_UNSET):\n # Try the local settings first. 
If we can't find the key, then try the global settings.\n # If a fallback is provided, return it if we can't find the key in either the local or global\n # settings.\n try:\n return self._local_settings.get(section, key)\n except configparser.NoOptionError:\n try:\n return self._global_settings.get(section, key)\n except configparser.NoOptionError:\n if fallback is not Settings._UNSET:\n return fallback\n else:\n raise\n\n def set(self, section, key, value, globally=False):\n def write_setting(settings, settings_path):\n if not settings.has_section(section):\n settings.add_section(section)\n settings.set(section, key, str(value))\n with open(settings_path, \"w+\") as f:\n settings.write(f)\n\n if globally:\n write_setting(self._global_settings, self._global_settings_path)\n else:\n write_setting(self._local_settings, self._local_settings_path)\n\n def clear(self, section, key, globally=False):\n def clear_setting(settings, settings_path):\n settings.remove_option(section, key)\n with open(settings_path, \"w+\") as f:\n settings.write(f)\n\n if globally:\n clear_setting(self._global_settings, self._global_settings_path)\n else:\n clear_setting(self._local_settings, self._local_settings_path)\n\n def items(self, section=None):\n section = section if section is not None else Settings.DEFAULT_SECTION\n\n result = {'section': section}\n\n try:\n if section in self._global_settings.sections():\n for option in self._global_settings.options(section):\n result[option] = self._global_settings.get(section, option)\n if section in self._local_settings.sections():\n for option in self._local_settings.options(section):\n result[option] = self._local_settings.get(section, option)\n except configparser.InterpolationSyntaxError:\n core.termwarn(\"Unable to parse settings file\")\n\n return result\n\n @staticmethod\n def _settings(default_settings={}):\n config = configparser.ConfigParser()\n config.add_section(Settings.DEFAULT_SECTION)\n for key, value in default_settings.items():\n config.set(Settings.DEFAULT_SECTION, key, str(value))\n return config\n", "path": "wandb/settings.py"}]}
1,131
567
gh_patches_debug_24433
rasdani/github-patches
git_diff
opensearch-project__opensearch-build-1652
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [BUG] Only send `-d` to core components, instead of everything. As of now, build workflow will not send `-d` if user does not specify `--distribution`. However, if user specify things such as `--distribution rpm` and try to build core+all plugins, `-d` will be sent to plugins as well. The plugin build script does not know how to interpret `-d` thus fail. ``` + echo 'Invalid option: -?' Invalid option: -? + exit 1 2022-02-17 23:58:36 ERROR Error building common-utils, retry with: ./build.sh manifests/1.3.0/opensearch-1.3.0.yml --component common-utils Traceback (most recent call last): File "./src/run_build.py", line 79, in <module> sys.exit(main()) File "./src/run_build.py", line 67, in main builder.build(build_recorder) File "/local/home/zhujiaxi/opensearch-build-peterzhuamazon/src/build_workflow/builder_from_source.py", line 49, in build self.git_repo.execute(build_command) File "/local/home/zhujiaxi/opensearch-build-peterzhuamazon/src/git/git_repository.py", line 83, in execute subprocess.check_call(command, cwd=cwd, shell=True) File "/usr/lib64/python3.7/subprocess.py", line 363, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'bash /local/home/zhujiaxi/opensearch-build-peterzhuamazon/scripts/components/common-utils/build.sh -v 1.3.0 -p linux -a x64 -d rpm -s false -o builds' returned non-zero exit status 1. ``` Need to add a condition where if component != OpenSearch/OpenSearch-Dashboards, then `-d` will not be sent even if not None. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/build_workflow/builder_from_source.py` Content: ``` 1 # SPDX-License-Identifier: Apache-2.0 2 # 3 # The OpenSearch Contributors require contributions made to 4 # this file be licensed under the Apache-2.0 license or a 5 # compatible open source license. 6 7 import os 8 9 from build_workflow.build_recorder import BuildRecorder 10 from build_workflow.builder import Builder 11 from git.git_repository import GitRepository 12 from paths.script_finder import ScriptFinder 13 14 """ 15 This class is responsible for executing the build for a component and passing the results to a build recorder. 16 It will notify the build recorder of build information such as repository and git ref, and any artifacts generated by the build. 17 Artifacts found in "<build root>/artifacts/<maven|plugins|libs|dist|core-plugins>" will be recognized and recorded. 
18 """ 19 20 21 class BuilderFromSource(Builder): 22 def checkout(self, work_dir: str) -> None: 23 self.git_repo = GitRepository( 24 self.component.repository, 25 self.component.ref, 26 os.path.join(work_dir, self.component.name), 27 self.component.working_directory, 28 ) 29 30 def build(self, build_recorder: BuildRecorder) -> None: 31 build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory) 32 33 build_command = " ".join( 34 filter( 35 None, 36 [ 37 "bash", 38 build_script, 39 f"-v {self.target.version}", 40 f"-p {self.target.platform}", 41 f"-a {self.target.architecture}", 42 f"-d {self.target.distribution}" if self.target.distribution else None, 43 f"-s {str(self.target.snapshot).lower()}", 44 f"-o {self.output_path}", 45 ] 46 ) 47 ) 48 49 self.git_repo.execute(build_command) 50 build_recorder.record_component(self.component.name, self.git_repo) 51 52 def export_artifacts(self, build_recorder: BuildRecorder) -> None: 53 artifacts_path = os.path.join(self.git_repo.working_directory, self.output_path) 54 for artifact_type in ["maven", "dist", "plugins", "libs", "core-plugins"]: 55 for dir, _, files in os.walk(os.path.join(artifacts_path, artifact_type)): 56 for file_name in files: 57 absolute_path = os.path.join(dir, file_name) 58 relative_path = os.path.relpath(absolute_path, artifacts_path) 59 build_recorder.record_artifact(self.component.name, artifact_type, relative_path, absolute_path) 60 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/build_workflow/builder_from_source.py b/src/build_workflow/builder_from_source.py --- a/src/build_workflow/builder_from_source.py +++ b/src/build_workflow/builder_from_source.py @@ -28,6 +28,11 @@ ) def build(self, build_recorder: BuildRecorder) -> None: + + # List of components whose build scripts support `-d` parameter + # Bundled plugins do not need `-d` as they are java based zips + DISTRIBUTION_SUPPORTED_COMPONENTS = ["OpenSearch", "OpenSearch-Dashboards"] + build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory) build_command = " ".join( @@ -39,7 +44,7 @@ f"-v {self.target.version}", f"-p {self.target.platform}", f"-a {self.target.architecture}", - f"-d {self.target.distribution}" if self.target.distribution else None, + f"-d {self.target.distribution}" if self.target.distribution and (self.component.name in DISTRIBUTION_SUPPORTED_COMPONENTS) else None, f"-s {str(self.target.snapshot).lower()}", f"-o {self.output_path}", ]
{"golden_diff": "diff --git a/src/build_workflow/builder_from_source.py b/src/build_workflow/builder_from_source.py\n--- a/src/build_workflow/builder_from_source.py\n+++ b/src/build_workflow/builder_from_source.py\n@@ -28,6 +28,11 @@\n )\n \n def build(self, build_recorder: BuildRecorder) -> None:\n+\n+ # List of components whose build scripts support `-d` parameter\n+ # Bundled plugins do not need `-d` as they are java based zips\n+ DISTRIBUTION_SUPPORTED_COMPONENTS = [\"OpenSearch\", \"OpenSearch-Dashboards\"]\n+\n build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory)\n \n build_command = \" \".join(\n@@ -39,7 +44,7 @@\n f\"-v {self.target.version}\",\n f\"-p {self.target.platform}\",\n f\"-a {self.target.architecture}\",\n- f\"-d {self.target.distribution}\" if self.target.distribution else None,\n+ f\"-d {self.target.distribution}\" if self.target.distribution and (self.component.name in DISTRIBUTION_SUPPORTED_COMPONENTS) else None,\n f\"-s {str(self.target.snapshot).lower()}\",\n f\"-o {self.output_path}\",\n ]\n", "issue": "[BUG] Only send `-d` to core components, instead of everything.\nAs of now, build workflow will not send `-d` if user does not specify `--distribution`.\r\nHowever, if user specify things such as `--distribution rpm` and try to build core+all plugins,\r\n`-d` will be sent to plugins as well.\r\n\r\nThe plugin build script does not know how to interpret `-d` thus fail.\r\n```\r\n+ echo 'Invalid option: -?'\r\nInvalid option: -?\r\n+ exit 1\r\n2022-02-17 23:58:36 ERROR Error building common-utils, retry with: ./build.sh manifests/1.3.0/opensearch-1.3.0.yml --component common-utils\r\nTraceback (most recent call last):\r\n File \"./src/run_build.py\", line 79, in <module>\r\n sys.exit(main())\r\n File \"./src/run_build.py\", line 67, in main\r\n builder.build(build_recorder)\r\n File \"/local/home/zhujiaxi/opensearch-build-peterzhuamazon/src/build_workflow/builder_from_source.py\", line 49, in build\r\n self.git_repo.execute(build_command)\r\n File \"/local/home/zhujiaxi/opensearch-build-peterzhuamazon/src/git/git_repository.py\", line 83, in execute\r\n subprocess.check_call(command, cwd=cwd, shell=True)\r\n File \"/usr/lib64/python3.7/subprocess.py\", line 363, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\nsubprocess.CalledProcessError: Command 'bash /local/home/zhujiaxi/opensearch-build-peterzhuamazon/scripts/components/common-utils/build.sh -v 1.3.0 -p linux -a x64 -d rpm -s false -o builds' returned non-zero exit status 1.\r\n```\r\n\r\nNeed to add a condition where if component != OpenSearch/OpenSearch-Dashboards, then `-d` will not be sent even if not None.\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\nfrom build_workflow.build_recorder import BuildRecorder\nfrom build_workflow.builder import Builder\nfrom git.git_repository import GitRepository\nfrom paths.script_finder import ScriptFinder\n\n\"\"\"\nThis class is responsible for executing the build for a component and passing the results to a build recorder.\nIt will notify the build recorder of build information such as repository and git ref, and any artifacts generated by the build.\nArtifacts found in \"<build root>/artifacts/<maven|plugins|libs|dist|core-plugins>\" will be recognized and recorded.\n\"\"\"\n\n\nclass 
BuilderFromSource(Builder):\n def checkout(self, work_dir: str) -> None:\n self.git_repo = GitRepository(\n self.component.repository,\n self.component.ref,\n os.path.join(work_dir, self.component.name),\n self.component.working_directory,\n )\n\n def build(self, build_recorder: BuildRecorder) -> None:\n build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory)\n\n build_command = \" \".join(\n filter(\n None,\n [\n \"bash\",\n build_script,\n f\"-v {self.target.version}\",\n f\"-p {self.target.platform}\",\n f\"-a {self.target.architecture}\",\n f\"-d {self.target.distribution}\" if self.target.distribution else None,\n f\"-s {str(self.target.snapshot).lower()}\",\n f\"-o {self.output_path}\",\n ]\n )\n )\n\n self.git_repo.execute(build_command)\n build_recorder.record_component(self.component.name, self.git_repo)\n\n def export_artifacts(self, build_recorder: BuildRecorder) -> None:\n artifacts_path = os.path.join(self.git_repo.working_directory, self.output_path)\n for artifact_type in [\"maven\", \"dist\", \"plugins\", \"libs\", \"core-plugins\"]:\n for dir, _, files in os.walk(os.path.join(artifacts_path, artifact_type)):\n for file_name in files:\n absolute_path = os.path.join(dir, file_name)\n relative_path = os.path.relpath(absolute_path, artifacts_path)\n build_recorder.record_artifact(self.component.name, artifact_type, relative_path, absolute_path)\n", "path": "src/build_workflow/builder_from_source.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\nfrom build_workflow.build_recorder import BuildRecorder\nfrom build_workflow.builder import Builder\nfrom git.git_repository import GitRepository\nfrom paths.script_finder import ScriptFinder\n\n\"\"\"\nThis class is responsible for executing the build for a component and passing the results to a build recorder.\nIt will notify the build recorder of build information such as repository and git ref, and any artifacts generated by the build.\nArtifacts found in \"<build root>/artifacts/<maven|plugins|libs|dist|core-plugins>\" will be recognized and recorded.\n\"\"\"\n\n\nclass BuilderFromSource(Builder):\n def checkout(self, work_dir: str) -> None:\n self.git_repo = GitRepository(\n self.component.repository,\n self.component.ref,\n os.path.join(work_dir, self.component.name),\n self.component.working_directory,\n )\n\n def build(self, build_recorder: BuildRecorder) -> None:\n\n # List of components whose build scripts support `-d` parameter\n # Bundled plugins do not need `-d` as they are java based zips\n DISTRIBUTION_SUPPORTED_COMPONENTS = [\"OpenSearch\", \"OpenSearch-Dashboards\"]\n\n build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory)\n\n build_command = \" \".join(\n filter(\n None,\n [\n \"bash\",\n build_script,\n f\"-v {self.target.version}\",\n f\"-p {self.target.platform}\",\n f\"-a {self.target.architecture}\",\n f\"-d {self.target.distribution}\" if self.target.distribution and (self.component.name in DISTRIBUTION_SUPPORTED_COMPONENTS) else None,\n f\"-s {str(self.target.snapshot).lower()}\",\n f\"-o {self.output_path}\",\n ]\n )\n )\n\n self.git_repo.execute(build_command)\n build_recorder.record_component(self.component.name, self.git_repo)\n\n def export_artifacts(self, build_recorder: BuildRecorder) 
-> None:\n artifacts_path = os.path.join(self.git_repo.working_directory, self.output_path)\n for artifact_type in [\"maven\", \"dist\", \"plugins\", \"libs\", \"core-plugins\"]:\n for dir, _, files in os.walk(os.path.join(artifacts_path, artifact_type)):\n for file_name in files:\n absolute_path = os.path.join(dir, file_name)\n relative_path = os.path.relpath(absolute_path, artifacts_path)\n build_recorder.record_artifact(self.component.name, artifact_type, relative_path, absolute_path)\n", "path": "src/build_workflow/builder_from_source.py"}]}
1,325
278
gh_patches_debug_18245
rasdani/github-patches
git_diff
streamlink__streamlink-338
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- TVCatchup addon not working anymore root@ovh2:/data# streamlink http://tvcatchup.com/watch/channel4 [cli][info] streamlink is running as root! Be careful! [cli][info] Found matching plugin tvcatchup for URL http://tvcatchup.com/watch/channel4 error: No streams found on this URL: http://tvcatchup.com/watch/channel4 root@ovh2:/data# streamlink --plugins [cli][info] streamlink is running as root! Be careful! Loaded plugins: adultswim, afreeca, afreecatv, aftonbladet, alieztv, antenna, ard_live, ard_mediathek, artetv, atresplayer, azubutv, bambuser, beam, beattv, bigo, bilibili, bliptv, chaturbate, cinergroup, connectcast, crunchyroll, cybergame, dailymotion, dingittv, disney_de, dmcloud, dmcloud_embed, dogan, dogus, dommune, douyutv, dplay, drdk, euronews, expressen, filmon, filmon_us, foxtr, furstream, gaminglive, gomexp, goodgame, hitbox, itvplayer, kanal7, letontv, livecodingtv, livestation, livestream, media_ccc_de, mediaklikk, meerkat, mips, mlgtv, nhkworld, nineanime, nos, npo, nrk, oldlivestream, openrectv, orf_tvthek, pandatv, periscope, picarto, piczel, powerapp, rtlxl, rtve, ruv, seemeplay, servustv, speedrunslive, sportschau, ssh101, stream, streamboat, streamingvideoprovider, streamlive, streamme, streamupcom, svtplay, tga, tigerdile, trt, turkuvaz, tv360, tv3cat, tv4play, tv8, tvcatchup, tvplayer, twitch, ustreamtv, vaughnlive, veetle, vgtv, viagame, viasat, viasat_embed, vidio, wattv, webtv, weeb, younow, youtube, zdf_mediathek --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/streamlink/plugins/tvcatchup.py` Content: ``` 1 import re 2 3 from streamlink.plugin import Plugin 4 from streamlink.plugin.api import http 5 from streamlink.stream import HLSStream 6 7 USER_AGENT = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36" 8 _url_re = re.compile("http://(?:www\.)?tvcatchup.com/watch/\w+") 9 _stream_re = re.compile(r"\"(?P<stream_url>https?://.*m3u8\?.*clientKey=[^\"]*)\";") 10 11 12 class TVCatchup(Plugin): 13 @classmethod 14 def can_handle_url(cls, url): 15 return _url_re.match(url) 16 17 def _get_streams(self): 18 """ 19 Finds the streams from tvcatchup.com. 20 """ 21 http.headers.update({"User-Agent": USER_AGENT}) 22 res = http.get(self.url) 23 24 match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE) 25 26 if match: 27 stream_url = match.groupdict()["stream_url"] 28 29 if stream_url: 30 if "_adp" in stream_url: 31 return HLSStream.parse_variant_playlist(self.session, stream_url) 32 else: 33 return {'576p': HLSStream(self.session, stream_url)} 34 35 36 __plugin__ = TVCatchup 37 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/streamlink/plugins/tvcatchup.py b/src/streamlink/plugins/tvcatchup.py --- a/src/streamlink/plugins/tvcatchup.py +++ b/src/streamlink/plugins/tvcatchup.py @@ -6,7 +6,7 @@ USER_AGENT = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36" _url_re = re.compile("http://(?:www\.)?tvcatchup.com/watch/\w+") -_stream_re = re.compile(r"\"(?P<stream_url>https?://.*m3u8\?.*clientKey=[^\"]*)\";") +_stream_re = re.compile(r'''(?P<q>["'])(?P<stream_url>https?://.*m3u8\?.*clientKey=.*?)(?P=q)''') class TVCatchup(Plugin): @@ -24,7 +24,7 @@ match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE) if match: - stream_url = match.groupdict()["stream_url"] + stream_url = match.group("stream_url") if stream_url: if "_adp" in stream_url:
{"golden_diff": "diff --git a/src/streamlink/plugins/tvcatchup.py b/src/streamlink/plugins/tvcatchup.py\n--- a/src/streamlink/plugins/tvcatchup.py\n+++ b/src/streamlink/plugins/tvcatchup.py\n@@ -6,7 +6,7 @@\n \n USER_AGENT = \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\"\n _url_re = re.compile(\"http://(?:www\\.)?tvcatchup.com/watch/\\w+\")\n-_stream_re = re.compile(r\"\\\"(?P<stream_url>https?://.*m3u8\\?.*clientKey=[^\\\"]*)\\\";\")\n+_stream_re = re.compile(r'''(?P<q>[\"'])(?P<stream_url>https?://.*m3u8\\?.*clientKey=.*?)(?P=q)''')\n \n \n class TVCatchup(Plugin):\n@@ -24,7 +24,7 @@\n match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)\n \n if match:\n- stream_url = match.groupdict()[\"stream_url\"]\n+ stream_url = match.group(\"stream_url\")\n \n if stream_url:\n if \"_adp\" in stream_url:\n", "issue": "TVCatchup addon not working anymore\nroot@ovh2:/data# streamlink http://tvcatchup.com/watch/channel4\r\n[cli][info] streamlink is running as root! Be careful!\r\n[cli][info] Found matching plugin tvcatchup for URL http://tvcatchup.com/watch/channel4\r\nerror: No streams found on this URL: http://tvcatchup.com/watch/channel4\r\nroot@ovh2:/data# streamlink --plugins\r\n[cli][info] streamlink is running as root! Be careful!\r\nLoaded plugins: adultswim, afreeca, afreecatv, aftonbladet, alieztv, antenna, ard_live, ard_mediathek, artetv, atresplayer, azubutv, bambuser, beam, beattv, bigo, bilibili, bliptv, chaturbate, cinergroup, connectcast, crunchyroll, cybergame, dailymotion, dingittv, disney_de, dmcloud, dmcloud_embed, dogan, dogus, dommune, douyutv, dplay, drdk, euronews, expressen, filmon, filmon_us, foxtr, furstream, gaminglive, gomexp, goodgame, hitbox, itvplayer, kanal7, letontv, livecodingtv, livestation, livestream, media_ccc_de, mediaklikk, meerkat, mips, mlgtv, nhkworld, nineanime, nos, npo, nrk, oldlivestream, openrectv, orf_tvthek, pandatv, periscope, picarto, piczel, powerapp, rtlxl, rtve, ruv, seemeplay, servustv, speedrunslive, sportschau, ssh101, stream, streamboat, streamingvideoprovider, streamlive, streamme, streamupcom, svtplay, tga, tigerdile, trt, turkuvaz, tv360, tv3cat, tv4play, tv8, tvcatchup, tvplayer, twitch, ustreamtv, vaughnlive, veetle, vgtv, viagame, viasat, viasat_embed, vidio, wattv, webtv, weeb, younow, youtube, zdf_mediathek\r\n\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\n\nUSER_AGENT = \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\"\n_url_re = re.compile(\"http://(?:www\\.)?tvcatchup.com/watch/\\w+\")\n_stream_re = re.compile(r\"\\\"(?P<stream_url>https?://.*m3u8\\?.*clientKey=[^\\\"]*)\\\";\")\n\n\nclass TVCatchup(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n \"\"\"\n Finds the streams from tvcatchup.com.\n \"\"\"\n http.headers.update({\"User-Agent\": USER_AGENT})\n res = http.get(self.url)\n\n match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)\n\n if match:\n stream_url = match.groupdict()[\"stream_url\"]\n\n if stream_url:\n if \"_adp\" in stream_url:\n return HLSStream.parse_variant_playlist(self.session, stream_url)\n else:\n return {'576p': HLSStream(self.session, stream_url)}\n\n\n__plugin__ = TVCatchup\n", "path": "src/streamlink/plugins/tvcatchup.py"}], "after_files": [{"content": "import re\n\nfrom 
streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\n\nUSER_AGENT = \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\"\n_url_re = re.compile(\"http://(?:www\\.)?tvcatchup.com/watch/\\w+\")\n_stream_re = re.compile(r'''(?P<q>[\"'])(?P<stream_url>https?://.*m3u8\\?.*clientKey=.*?)(?P=q)''')\n\n\nclass TVCatchup(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n \"\"\"\n Finds the streams from tvcatchup.com.\n \"\"\"\n http.headers.update({\"User-Agent\": USER_AGENT})\n res = http.get(self.url)\n\n match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)\n\n if match:\n stream_url = match.group(\"stream_url\")\n\n if stream_url:\n if \"_adp\" in stream_url:\n return HLSStream.parse_variant_playlist(self.session, stream_url)\n else:\n return {'576p': HLSStream(self.session, stream_url)}\n\n\n__plugin__ = TVCatchup\n", "path": "src/streamlink/plugins/tvcatchup.py"}]}
1,144
296
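For readers skimming the TVCatchup row above: the accepted fix replaces a double-quote-only regex with one that captures the opening quote in a named group and requires the same character to close the URL via a backreference. The sketch below only illustrates that technique; the page fragments are invented, not real tvcatchup.com markup.

```python
import re

# Old pattern: only finds double-quoted URLs that end in ";
old_re = re.compile(r"\"(?P<stream_url>https?://.*m3u8\?.*clientKey=[^\"]*)\";")

# New pattern: (?P<q>["']) remembers which quote opened the string and
# (?P=q) requires the same quote to close it, so either style matches.
new_re = re.compile(r'''(?P<q>["'])(?P<stream_url>https?://.*m3u8\?.*clientKey=.*?)(?P=q)''')

# Hypothetical page fragments, for demonstration only.
pages = [
    'var stream = "http://example.com/live_adp.m3u8?clientKey=abc123";',
    "var stream = 'http://example.com/live_adp.m3u8?clientKey=abc123';",
]

for page in pages:
    old = old_re.search(page)
    new = new_re.search(page)
    print(old.group("stream_url") if old else None, "|",
          new.group("stream_url") if new else None)
```

Run as-is, the first fragment matches both patterns while the second matches only the new one, which is consistent with the plugin reporting "No streams found" after the site changed how the URL string was quoted or terminated.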
gh_patches_debug_4935
rasdani/github-patches
git_diff
quantumlib__Cirq-4249
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Push to PyPi failing ``` error in cirq setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers. ``` See https://github.com/quantumlib/Cirq/runs/2851981344 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 # Copyright 2018 The Cirq Developers 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # https://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import io 16 import os 17 from setuptools import setup 18 19 # This reads the __version__ variable from cirq/_version.py 20 __version__ = '' 21 22 from dev_tools import modules 23 from dev_tools.requirements import explode 24 25 exec(open('cirq-core/cirq/_version.py').read()) 26 27 name = 'cirq' 28 29 description = ( 30 'A framework for creating, editing, and invoking ' 31 'Noisy Intermediate Scale Quantum (NISQ) circuits.' 32 ) 33 34 # README file as long_description. 35 long_description = io.open('README.rst', encoding='utf-8').read() 36 37 # If CIRQ_PRE_RELEASE_VERSION is set then we update the version to this value. 38 # It is assumed that it ends with one of `.devN`, `.aN`, `.bN`, `.rcN` and hence 39 # it will be a pre-release version on PyPi. See 40 # https://packaging.python.org/guides/distributing-packages-using-setuptools/#pre-release-versioning 41 # for more details. 42 if 'CIRQ_PRE_RELEASE_VERSION' in os.environ: 43 __version__ = os.environ['CIRQ_PRE_RELEASE_VERSION'] 44 long_description = ( 45 "**This is a development version of Cirq and may be " 46 "unstable.**\n\n**For the latest stable release of Cirq " 47 "see**\n`here <https://pypi.org/project/cirq>`__.\n\n" + long_description 48 ) 49 50 # Sanity check 51 assert __version__, 'Version string cannot be empty' 52 53 # This is a pure metapackage that installs all our packages 54 requirements = [f'{p.name}=={p.version}' for p in modules.list_modules()] 55 56 dev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt') 57 dev_requirements = [r.strip() for r in dev_requirements] 58 59 setup( 60 name=name, 61 version=__version__, 62 url='http://github.com/quantumlib/cirq', 63 author='The Cirq Developers', 64 author_email='[email protected]', 65 python_requires='>=3.6.0', 66 install_requires=requirements, 67 extras_require={ 68 'dev_env': dev_requirements, 69 }, 70 license='Apache 2', 71 description=description, 72 long_description=long_description, 73 ) 74 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -54,7 +54,9 @@ requirements = [f'{p.name}=={p.version}' for p in modules.list_modules()] dev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt') -dev_requirements = [r.strip() for r in dev_requirements] + +# filter out direct urls (https://github.com/pypa/pip/issues/6301) +dev_requirements = [r.strip() for r in dev_requirements if "git+http" not in r] setup( name=name,
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -54,7 +54,9 @@\n requirements = [f'{p.name}=={p.version}' for p in modules.list_modules()]\n \n dev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt')\n-dev_requirements = [r.strip() for r in dev_requirements]\n+\n+# filter out direct urls (https://github.com/pypa/pip/issues/6301)\n+dev_requirements = [r.strip() for r in dev_requirements if \"git+http\" not in r]\n \n setup(\n name=name,\n", "issue": "Push to PyPi failing\n```\r\nerror in cirq setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.\r\n```\r\n\r\nSee https://github.com/quantumlib/Cirq/runs/2851981344\r\n\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\nfrom setuptools import setup\n\n# This reads the __version__ variable from cirq/_version.py\n__version__ = ''\n\nfrom dev_tools import modules\nfrom dev_tools.requirements import explode\n\nexec(open('cirq-core/cirq/_version.py').read())\n\nname = 'cirq'\n\ndescription = (\n 'A framework for creating, editing, and invoking '\n 'Noisy Intermediate Scale Quantum (NISQ) circuits.'\n)\n\n# README file as long_description.\nlong_description = io.open('README.rst', encoding='utf-8').read()\n\n# If CIRQ_PRE_RELEASE_VERSION is set then we update the version to this value.\n# It is assumed that it ends with one of `.devN`, `.aN`, `.bN`, `.rcN` and hence\n# it will be a pre-release version on PyPi. 
See\n# https://packaging.python.org/guides/distributing-packages-using-setuptools/#pre-release-versioning\n# for more details.\nif 'CIRQ_PRE_RELEASE_VERSION' in os.environ:\n __version__ = os.environ['CIRQ_PRE_RELEASE_VERSION']\n long_description = (\n \"**This is a development version of Cirq and may be \"\n \"unstable.**\\n\\n**For the latest stable release of Cirq \"\n \"see**\\n`here <https://pypi.org/project/cirq>`__.\\n\\n\" + long_description\n )\n\n# Sanity check\nassert __version__, 'Version string cannot be empty'\n\n# This is a pure metapackage that installs all our packages\nrequirements = [f'{p.name}=={p.version}' for p in modules.list_modules()]\n\ndev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt')\ndev_requirements = [r.strip() for r in dev_requirements]\n\nsetup(\n name=name,\n version=__version__,\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n python_requires='>=3.6.0',\n install_requires=requirements,\n extras_require={\n 'dev_env': dev_requirements,\n },\n license='Apache 2',\n description=description,\n long_description=long_description,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\nfrom setuptools import setup\n\n# This reads the __version__ variable from cirq/_version.py\n__version__ = ''\n\nfrom dev_tools import modules\nfrom dev_tools.requirements import explode\n\nexec(open('cirq-core/cirq/_version.py').read())\n\nname = 'cirq'\n\ndescription = (\n 'A framework for creating, editing, and invoking '\n 'Noisy Intermediate Scale Quantum (NISQ) circuits.'\n)\n\n# README file as long_description.\nlong_description = io.open('README.rst', encoding='utf-8').read()\n\n# If CIRQ_PRE_RELEASE_VERSION is set then we update the version to this value.\n# It is assumed that it ends with one of `.devN`, `.aN`, `.bN`, `.rcN` and hence\n# it will be a pre-release version on PyPi. 
See\n# https://packaging.python.org/guides/distributing-packages-using-setuptools/#pre-release-versioning\n# for more details.\nif 'CIRQ_PRE_RELEASE_VERSION' in os.environ:\n __version__ = os.environ['CIRQ_PRE_RELEASE_VERSION']\n long_description = (\n \"**This is a development version of Cirq and may be \"\n \"unstable.**\\n\\n**For the latest stable release of Cirq \"\n \"see**\\n`here <https://pypi.org/project/cirq>`__.\\n\\n\" + long_description\n )\n\n# Sanity check\nassert __version__, 'Version string cannot be empty'\n\n# This is a pure metapackage that installs all our packages\nrequirements = [f'{p.name}=={p.version}' for p in modules.list_modules()]\n\ndev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt')\n\n# filter out direct urls (https://github.com/pypa/pip/issues/6301)\ndev_requirements = [r.strip() for r in dev_requirements if \"git+http\" not in r]\n\nsetup(\n name=name,\n version=__version__,\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n python_requires='>=3.6.0',\n install_requires=requirements,\n extras_require={\n 'dev_env': dev_requirements,\n },\n license='Apache 2',\n description=description,\n long_description=long_description,\n)\n", "path": "setup.py"}]}
1,081
134
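The Cirq row above comes down to setuptools rejecting the values passed to `extras_require`; the accepted patch filters out pip-style `git+http` lines after exploding `dev-tools.txt`. A minimal stand-alone sketch of that step, using an invented requirements list rather than Cirq's real file, might look like this:

```python
# Hypothetical contents of an exploded requirements file; Cirq's real
# dev_tools/requirements/deps/dev-tools.txt differs.
dev_requirements = [
    "black==20.8b1\n",
    "mypy==0.782\n",
    "git+https://github.com/example/some-tool@main#egg=some-tool\n",
]

# pip-style VCS URLs are not valid requirement specifiers for extras_require
# (the upstream fix points at https://github.com/pypa/pip/issues/6301),
# so drop them before handing the list to setuptools.
dev_requirements = [r.strip() for r in dev_requirements if "git+http" not in r]

print(dev_requirements)  # ['black==20.8b1', 'mypy==0.782']
```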
gh_patches_debug_6783
rasdani/github-patches
git_diff
pytorch__ignite-1048
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Bug adding handler in case of decoration + class function + filtered event ## 🐛 Bug description I would like to report a bug using handler defined by decorated function in a class with filtered event. The following code reproduces all possible situations to add an handler defined w/wo decoration in a class or not, w/wo engine (or args), using an event w/wo filter ```python engine = Engine(lambda e, b: b) # decorator def decorated(fun): @functools.wraps(fun) def wrapper(*args, **kwargs): return fun(*args, **kwargs) return wrapper # handler as a function def foo(): print("foo") # handler as a decorated function @decorated def decorated_foo(): print("decorated_foo") # register handler as a function -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo) # register handler as a function with filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo) # register handler as a decorated function -- OK engine.add_event_handler(Events.EPOCH_STARTED, decorated_foo) # register handler as a decorated function with filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo) # handler as a function with engine (here args) def foo_args(args): print("foo_args", args) # handler as a decorated function with engine @decorated def decorated_foo_args(args): print("decorated_foo_args", args) # register handler as a function with engine -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo_args) # register handler as a function with engine and filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo_args) # register handler as a decorated function with engine -- OK engine.add_event_handler(Events.EPOCH_STARTED, decorated_foo_args) # register handler as a decorated function with engine and filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo_args) class Foo: # handler as a class function (ie method) def foo(self): print("foo") # handler as a decorated method @decorated def decorated_foo(self): print("decorated_foo") # handler as a method with engine def foo_args(self, args): print("foo_args", args) # handler as a decorated method with engine @decorated def decorated_foo_args(self, args): print("decorated_foo_args", args) foo = Foo() # register handler as a method -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo.foo) # register handler as a method with filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo) # register handler as a decorated method -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo) # register handler as a decorated method with filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo) # register handler as a method with engine -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo.foo_args) # register handler as a method with engine and filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo_args) # register handler as a decorated method with engine -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo_args) # register handler as a decorated method with engine and filter -- FAILED engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo_args) engine.run([0]) ``` The error is ``` Error adding <function Foo.decorated_foo_args at 0x1229b6af0> 'handler': takes parameters ['self', 'args'] but will be called with [](missing a required argument: 'self'). ``` Why ? 
First, a handler defined with a filtered event is wrapped with decoration. See https://github.com/sdesrozis/ignite/blob/93be57aa3f71ce601391d59096c3b430c4d9487b/ignite/engine/engine.py#L198. Note that `functools.wraps` is used to fit the signature of the related handler. The failed case is decorated method with engine. So, I guess `functools.wraps` works perfectly and catch `self` and `engine` as arguments. But the signature checking search (using `inspect.signature`) fails because missing `self`... See signature checking https://github.com/pytorch/ignite/blob/0de7156bb284bd01d788252469a3b386f10abbd7/ignite/engine/utils.py#L5 I think this is related to `follow_wrapped=True` argument of `inspect.signature`. ## Environment - PyTorch Version (e.g., 1.4): 1.5 - Ignite Version (e.g., 0.3.0): 0.4 - OS (e.g., Linux): MacOS - How you installed Ignite (`conda`, `pip`, source): Honda - Python version: 3.7 - Any other relevant information: Bug adding handler in case of decoration + class function + filtered event ## 🐛 Bug description I would like to report a bug using handler defined by decorated function in a class with filtered event. The following code reproduces all possible situations to add an handler defined w/wo decoration in a class or not, w/wo engine (or args), using an event w/wo filter ```python engine = Engine(lambda e, b: b) # decorator def decorated(fun): @functools.wraps(fun) def wrapper(*args, **kwargs): return fun(*args, **kwargs) return wrapper # handler as a function def foo(): print("foo") # handler as a decorated function @decorated def decorated_foo(): print("decorated_foo") # register handler as a function -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo) # register handler as a function with filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo) # register handler as a decorated function -- OK engine.add_event_handler(Events.EPOCH_STARTED, decorated_foo) # register handler as a decorated function with filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo) # handler as a function with engine (here args) def foo_args(args): print("foo_args", args) # handler as a decorated function with engine @decorated def decorated_foo_args(args): print("decorated_foo_args", args) # register handler as a function with engine -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo_args) # register handler as a function with engine and filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo_args) # register handler as a decorated function with engine -- OK engine.add_event_handler(Events.EPOCH_STARTED, decorated_foo_args) # register handler as a decorated function with engine and filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo_args) class Foo: # handler as a class function (ie method) def foo(self): print("foo") # handler as a decorated method @decorated def decorated_foo(self): print("decorated_foo") # handler as a method with engine def foo_args(self, args): print("foo_args", args) # handler as a decorated method with engine @decorated def decorated_foo_args(self, args): print("decorated_foo_args", args) foo = Foo() # register handler as a method -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo.foo) # register handler as a method with filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo) # register handler as a decorated method -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo) # register handler as a decorated method with filter -- 
OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo) # register handler as a method with engine -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo.foo_args) # register handler as a method with engine and filter -- OK engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo_args) # register handler as a decorated method with engine -- OK engine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo_args) # register handler as a decorated method with engine and filter -- FAILED engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo_args) engine.run([0]) ``` The error is ``` Error adding <function Foo.decorated_foo_args at 0x1229b6af0> 'handler': takes parameters ['self', 'args'] but will be called with [](missing a required argument: 'self'). ``` Why ? First, a handler defined with a filtered event is wrapped with decoration. See https://github.com/sdesrozis/ignite/blob/93be57aa3f71ce601391d59096c3b430c4d9487b/ignite/engine/engine.py#L198. Note that `functools.wraps` is used to fit the signature of the related handler. The failed case is decorated method with engine. So, I guess `functools.wraps` works perfectly and catch `self` and `engine` as arguments. But the signature checking search (using `inspect.signature`) fails because missing `self`... See signature checking https://github.com/pytorch/ignite/blob/0de7156bb284bd01d788252469a3b386f10abbd7/ignite/engine/utils.py#L5 I think this is related to `follow_wrapped=True` argument of `inspect.signature`. ## Environment - PyTorch Version (e.g., 1.4): 1.5 - Ignite Version (e.g., 0.3.0): 0.4 - OS (e.g., Linux): MacOS - How you installed Ignite (`conda`, `pip`, source): Honda - Python version: 3.7 - Any other relevant information: --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ignite/engine/utils.py` Content: ``` 1 import inspect 2 from typing import Callable 3 4 5 def _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None: 6 signature = inspect.signature(fn) 7 try: # try without engine 8 signature.bind(*args, **kwargs) 9 except TypeError as exc: 10 fn_params = list(signature.parameters) 11 exception_msg = str(exc) 12 passed_params = list(args) + list(kwargs) 13 raise ValueError( 14 "Error adding {} '{}': " 15 "takes parameters {} but will be called with {}" 16 "({}).".format(fn, fn_description, fn_params, passed_params, exception_msg) 17 ) 18 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ignite/engine/utils.py b/ignite/engine/utils.py --- a/ignite/engine/utils.py +++ b/ignite/engine/utils.py @@ -3,7 +3,11 @@ def _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None: - signature = inspect.signature(fn) + # if handler with filter, check the handler rather than the decorator + if hasattr(fn, "_parent"): + signature = inspect.signature(fn._parent()) + else: + signature = inspect.signature(fn) try: # try without engine signature.bind(*args, **kwargs) except TypeError as exc:
{"golden_diff": "diff --git a/ignite/engine/utils.py b/ignite/engine/utils.py\n--- a/ignite/engine/utils.py\n+++ b/ignite/engine/utils.py\n@@ -3,7 +3,11 @@\n \n \n def _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None:\n- signature = inspect.signature(fn)\n+ # if handler with filter, check the handler rather than the decorator\n+ if hasattr(fn, \"_parent\"):\n+ signature = inspect.signature(fn._parent())\n+ else:\n+ signature = inspect.signature(fn)\n try: # try without engine\n signature.bind(*args, **kwargs)\n except TypeError as exc:\n", "issue": "Bug adding handler in case of decoration + class function + filtered event\n## \ud83d\udc1b Bug description\r\n\r\nI would like to report a bug using handler defined by decorated function in a class with filtered event.\r\n\r\nThe following code reproduces all possible situations to add an handler defined w/wo decoration in a class or not, w/wo engine (or args), using an event w/wo filter\r\n\r\n```python\r\nengine = Engine(lambda e, b: b)\r\n\r\n# decorator\r\ndef decorated(fun):\r\n @functools.wraps(fun)\r\n def wrapper(*args, **kwargs):\r\n return fun(*args, **kwargs)\r\n return wrapper\r\n\r\n# handler as a function\r\ndef foo():\r\n print(\"foo\")\r\n\r\n# handler as a decorated function\r\n@decorated\r\ndef decorated_foo():\r\n print(\"decorated_foo\")\r\n\r\n# register handler as a function -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo)\r\n# register handler as a function with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo)\r\n# register handler as a decorated function -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, decorated_foo)\r\n# register handler as a decorated function with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo)\r\n\r\n\r\n# handler as a function with engine (here args)\r\ndef foo_args(args):\r\n print(\"foo_args\", args)\r\n\r\n\r\n# handler as a decorated function with engine \r\n@decorated\r\ndef decorated_foo_args(args):\r\n print(\"decorated_foo_args\", args)\r\n\r\n# register handler as a function with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo_args)\r\n# register handler as a function with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo_args)\r\n# register handler as a decorated function with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, decorated_foo_args)\r\n# register handler as a decorated function with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo_args)\r\n\r\nclass Foo:\r\n # handler as a class function (ie method)\r\n def foo(self):\r\n print(\"foo\")\r\n\r\n # handler as a decorated method\r\n @decorated\r\n def decorated_foo(self):\r\n print(\"decorated_foo\")\r\n\r\n # handler as a method with engine\r\n def foo_args(self, args):\r\n print(\"foo_args\", args)\r\n\r\n # handler as a decorated method with engine\r\n @decorated\r\n def decorated_foo_args(self, args):\r\n print(\"decorated_foo_args\", args)\r\n\r\n\r\nfoo = Foo()\r\n\r\n# register handler as a method -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.foo)\r\n# register handler as a method with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo)\r\n# register handler as a decorated method -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo)\r\n# register handler as a decorated method with filter -- 
OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo)\r\n# register handler as a method with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.foo_args)\r\n# register handler as a method with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo_args)\r\n# register handler as a decorated method with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo_args)\r\n\r\n# register handler as a decorated method with engine and filter -- FAILED\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo_args)\r\n\r\nengine.run([0])\r\n```\r\n\r\nThe error is\r\n```\r\nError adding <function Foo.decorated_foo_args at 0x1229b6af0> 'handler': takes parameters ['self', 'args'] but will be called with [](missing a required argument: 'self').\r\n```\r\n\r\nWhy ? \r\n\r\nFirst, a handler defined with a filtered event is wrapped with decoration. See https://github.com/sdesrozis/ignite/blob/93be57aa3f71ce601391d59096c3b430c4d9487b/ignite/engine/engine.py#L198. Note that `functools.wraps` is used to fit the signature of the related handler.\r\n\r\nThe failed case is decorated method with engine. So, I guess `functools.wraps` works perfectly and catch `self` and `engine` as arguments. But the signature checking search (using `inspect.signature`) fails because missing `self`... \r\n\r\nSee signature checking\r\nhttps://github.com/pytorch/ignite/blob/0de7156bb284bd01d788252469a3b386f10abbd7/ignite/engine/utils.py#L5\r\n\r\nI think this is related to `follow_wrapped=True` argument of `inspect.signature`.\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.5\r\n - Ignite Version (e.g., 0.3.0): 0.4\r\n - OS (e.g., Linux): MacOS\r\n - How you installed Ignite (`conda`, `pip`, source): Honda\r\n - Python version: 3.7\r\n - Any other relevant information:\r\n\r\n\nBug adding handler in case of decoration + class function + filtered event\n## \ud83d\udc1b Bug description\r\n\r\nI would like to report a bug using handler defined by decorated function in a class with filtered event.\r\n\r\nThe following code reproduces all possible situations to add an handler defined w/wo decoration in a class or not, w/wo engine (or args), using an event w/wo filter\r\n\r\n```python\r\nengine = Engine(lambda e, b: b)\r\n\r\n# decorator\r\ndef decorated(fun):\r\n @functools.wraps(fun)\r\n def wrapper(*args, **kwargs):\r\n return fun(*args, **kwargs)\r\n return wrapper\r\n\r\n# handler as a function\r\ndef foo():\r\n print(\"foo\")\r\n\r\n# handler as a decorated function\r\n@decorated\r\ndef decorated_foo():\r\n print(\"decorated_foo\")\r\n\r\n# register handler as a function -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo)\r\n# register handler as a function with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo)\r\n# register handler as a decorated function -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, decorated_foo)\r\n# register handler as a decorated function with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo)\r\n\r\n\r\n# handler as a function with engine (here args)\r\ndef foo_args(args):\r\n print(\"foo_args\", args)\r\n\r\n\r\n# handler as a decorated function with engine \r\n@decorated\r\ndef decorated_foo_args(args):\r\n print(\"decorated_foo_args\", args)\r\n\r\n# register handler as a function with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo_args)\r\n# register handler as a 
function with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo_args)\r\n# register handler as a decorated function with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, decorated_foo_args)\r\n# register handler as a decorated function with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo_args)\r\n\r\nclass Foo:\r\n # handler as a class function (ie method)\r\n def foo(self):\r\n print(\"foo\")\r\n\r\n # handler as a decorated method\r\n @decorated\r\n def decorated_foo(self):\r\n print(\"decorated_foo\")\r\n\r\n # handler as a method with engine\r\n def foo_args(self, args):\r\n print(\"foo_args\", args)\r\n\r\n # handler as a decorated method with engine\r\n @decorated\r\n def decorated_foo_args(self, args):\r\n print(\"decorated_foo_args\", args)\r\n\r\n\r\nfoo = Foo()\r\n\r\n# register handler as a method -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.foo)\r\n# register handler as a method with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo)\r\n# register handler as a decorated method -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo)\r\n# register handler as a decorated method with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo)\r\n# register handler as a method with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.foo_args)\r\n# register handler as a method with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo_args)\r\n# register handler as a decorated method with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo_args)\r\n\r\n# register handler as a decorated method with engine and filter -- FAILED\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo_args)\r\n\r\nengine.run([0])\r\n```\r\n\r\nThe error is\r\n```\r\nError adding <function Foo.decorated_foo_args at 0x1229b6af0> 'handler': takes parameters ['self', 'args'] but will be called with [](missing a required argument: 'self').\r\n```\r\n\r\nWhy ? \r\n\r\nFirst, a handler defined with a filtered event is wrapped with decoration. See https://github.com/sdesrozis/ignite/blob/93be57aa3f71ce601391d59096c3b430c4d9487b/ignite/engine/engine.py#L198. Note that `functools.wraps` is used to fit the signature of the related handler.\r\n\r\nThe failed case is decorated method with engine. So, I guess `functools.wraps` works perfectly and catch `self` and `engine` as arguments. But the signature checking search (using `inspect.signature`) fails because missing `self`... 
\r\n\r\nSee signature checking\r\nhttps://github.com/pytorch/ignite/blob/0de7156bb284bd01d788252469a3b386f10abbd7/ignite/engine/utils.py#L5\r\n\r\nI think this is related to `follow_wrapped=True` argument of `inspect.signature`.\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.5\r\n - Ignite Version (e.g., 0.3.0): 0.4\r\n - OS (e.g., Linux): MacOS\r\n - How you installed Ignite (`conda`, `pip`, source): Honda\r\n - Python version: 3.7\r\n - Any other relevant information:\r\n\r\n\n", "before_files": [{"content": "import inspect\nfrom typing import Callable\n\n\ndef _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None:\n signature = inspect.signature(fn)\n try: # try without engine\n signature.bind(*args, **kwargs)\n except TypeError as exc:\n fn_params = list(signature.parameters)\n exception_msg = str(exc)\n passed_params = list(args) + list(kwargs)\n raise ValueError(\n \"Error adding {} '{}': \"\n \"takes parameters {} but will be called with {}\"\n \"({}).\".format(fn, fn_description, fn_params, passed_params, exception_msg)\n )\n", "path": "ignite/engine/utils.py"}], "after_files": [{"content": "import inspect\nfrom typing import Callable\n\n\ndef _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None:\n # if handler with filter, check the handler rather than the decorator\n if hasattr(fn, \"_parent\"):\n signature = inspect.signature(fn._parent())\n else:\n signature = inspect.signature(fn)\n try: # try without engine\n signature.bind(*args, **kwargs)\n except TypeError as exc:\n fn_params = list(signature.parameters)\n exception_msg = str(exc)\n passed_params = list(args) + list(kwargs)\n raise ValueError(\n \"Error adding {} '{}': \"\n \"takes parameters {} but will be called with {}\"\n \"({}).\".format(fn, fn_description, fn_params, passed_params, exception_msg)\n )\n", "path": "ignite/engine/utils.py"}]}
2,747
145
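The pytorch-ignite row above already explains the mechanism in its issue text: `functools.wraps` plus the default `follow_wrapped=True` of `inspect.signature` resurfaces `self` when a decorated bound method is wrapped once more by the filtered-event machinery, and the accepted fix makes `_check_signature` prefer `fn._parent()` (which appears to be a weak reference to the original handler that the filter wrapper attaches) over the outer wrapper. The standard-library behaviour at the heart of the bug can be reproduced with nothing beyond `functools` and `inspect`; the class and handler names below are made up for the demonstration.

```python
import functools
import inspect

def decorated(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

class Foo:
    @decorated
    def handler(self, engine):
        print("called with", engine)

foo = Foo()

# Signature of the decorated bound method: 'self' is dropped as expected.
print(inspect.signature(foo.handler))  # (engine)

# A filtered event wraps the bound method one more time, much as ignite does.
@functools.wraps(foo.handler)
def filtered(*args, **kwargs):
    return foo.handler(*args, **kwargs)

# Following __wrapped__ all the way down lands on the undecorated function,
# so 'self' reappears and a signature check against the wrapper rejects it.
print(inspect.signature(filtered))                        # (self, engine)
print(inspect.signature(filtered, follow_wrapped=False))  # (*args, **kwargs)
```

The patched `_check_signature` sidesteps this chain entirely: when the handler carries a `_parent` attribute it inspects the original handler returned by `fn._parent()` instead of the generated wrapper, so the decorated bound method reports `(engine)` again and binding succeeds.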