Dataset schema (column name, dtype, observed value range):

| Column | Dtype | Values |
| --- | --- | --- |
| problem_id | string | lengths 18–22 |
| source | string | 1 class |
| task_type | string | 1 class |
| in_source_id | string | lengths 13–58 |
| prompt | string | lengths 1.1k–25.4k |
| golden_diff | string | lengths 145–5.13k |
| verification_info | string | lengths 582–39.1k |
| num_tokens | int64 | 271–4.1k |
| num_tokens_diff | int64 | 47–1.02k |
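Each row below pairs a real GitHub issue (`prompt`) with the pre-fix source files and asks for a patch; `golden_diff` holds the reference fix, and `verification_info` repeats the before/after file bodies as JSON for automated checking. A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub under the id that appears in every row's `source` column and that the split is named `train` (both are assumptions):

```python
from datasets import load_dataset  # Hugging Face `datasets` library

# Assumptions: hub id taken from the `source` column; split name guessed.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])
print(row["golden_diff"].splitlines()[0])  # header line of the reference patch
```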
gh_patches_debug_58136
rasdani/github-patches
git_diff
liqd__a4-meinberlin-4730
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- No "moderation tasks" filter in participatory budget (one phase) **URL:** https://meinberlin-dev.liqd.net/projekte/module/burgerhaushalt/?mode=list (list view) or https://meinberlin-dev.liqd.net/dashboard/projects/burgerhaushalt-spandau/basic/ (dashboard) **user:** Moderator, Admin **expected behaviour:** When using participatory budget with one phase i want to be able to set up moderation tasks for the discussion of ideas and want to filter ideas with an filter "open moderationtasks" **behaviour:** There is no filter "moderation tasks" in the list view of ideas in participatory budget (one phase) nor is there the possibility to create moderation tasks in the dashboard of the project **important screensize:** no **device & browser:** Mac/Windows Chrome, Edge Firefox, Iphone, Samsung Galaxy 20 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `meinberlin/apps/moderationtasks/dashboard.py` Content: ``` 1 from django.utils.translation import gettext_lazy as _ 2 3 from adhocracy4.dashboard import ModuleFormSetComponent 4 from adhocracy4.dashboard import components 5 6 from . import forms 7 8 9 class ModerationTasksComponent(ModuleFormSetComponent): 10 identifier = 'moderation_tasks' 11 weight = 15 12 label = _('Moderation Tasks') 13 14 form_title = _('Edit moderation tasks') 15 form_class = forms.ModerationTasksFormSet 16 form_template_name = \ 17 'meinberlin_moderationtasks/moderation_tasks_form.html' 18 19 def is_effective(self, module): 20 return module.blueprint_type in ['PB1', 'PB2', 'PB3'] 21 22 23 components.register_module(ModerationTasksComponent()) 24 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/meinberlin/apps/moderationtasks/dashboard.py b/meinberlin/apps/moderationtasks/dashboard.py --- a/meinberlin/apps/moderationtasks/dashboard.py +++ b/meinberlin/apps/moderationtasks/dashboard.py @@ -17,7 +17,7 @@ 'meinberlin_moderationtasks/moderation_tasks_form.html' def is_effective(self, module): - return module.blueprint_type in ['PB1', 'PB2', 'PB3'] + return module.blueprint_type in ['PB', 'PB2', 'PB3'] components.register_module(ModerationTasksComponent())
{"golden_diff": "diff --git a/meinberlin/apps/moderationtasks/dashboard.py b/meinberlin/apps/moderationtasks/dashboard.py\n--- a/meinberlin/apps/moderationtasks/dashboard.py\n+++ b/meinberlin/apps/moderationtasks/dashboard.py\n@@ -17,7 +17,7 @@\n 'meinberlin_moderationtasks/moderation_tasks_form.html'\n \n def is_effective(self, module):\n- return module.blueprint_type in ['PB1', 'PB2', 'PB3']\n+ return module.blueprint_type in ['PB', 'PB2', 'PB3']\n \n \n components.register_module(ModerationTasksComponent())\n", "issue": "No \"moderation tasks\" filter in participatory budget (one phase)\n**URL:** https://meinberlin-dev.liqd.net/projekte/module/burgerhaushalt/?mode=list (list view)\r\nor https://meinberlin-dev.liqd.net/dashboard/projects/burgerhaushalt-spandau/basic/ (dashboard)\r\n**user:** Moderator, Admin\r\n**expected behaviour:** When using participatory budget with one phase i want to be able to set up moderation tasks for the discussion of ideas and want to filter ideas with an filter \"open moderationtasks\"\r\n**behaviour:** There is no filter \"moderation tasks\" in the list view of ideas in participatory budget (one phase) nor is there the possibility to create moderation tasks in the dashboard of the project\r\n**important screensize:** no\r\n**device & browser:** Mac/Windows Chrome, Edge Firefox, Iphone, Samsung Galaxy 20\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.dashboard import ModuleFormSetComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import forms\n\n\nclass ModerationTasksComponent(ModuleFormSetComponent):\n identifier = 'moderation_tasks'\n weight = 15\n label = _('Moderation Tasks')\n\n form_title = _('Edit moderation tasks')\n form_class = forms.ModerationTasksFormSet\n form_template_name = \\\n 'meinberlin_moderationtasks/moderation_tasks_form.html'\n\n def is_effective(self, module):\n return module.blueprint_type in ['PB1', 'PB2', 'PB3']\n\n\ncomponents.register_module(ModerationTasksComponent())\n", "path": "meinberlin/apps/moderationtasks/dashboard.py"}], "after_files": [{"content": "from django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.dashboard import ModuleFormSetComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import forms\n\n\nclass ModerationTasksComponent(ModuleFormSetComponent):\n identifier = 'moderation_tasks'\n weight = 15\n label = _('Moderation Tasks')\n\n form_title = _('Edit moderation tasks')\n form_class = forms.ModerationTasksFormSet\n form_template_name = \\\n 'meinberlin_moderationtasks/moderation_tasks_form.html'\n\n def is_effective(self, module):\n return module.blueprint_type in ['PB', 'PB2', 'PB3']\n\n\ncomponents.register_module(ModerationTasksComponent())\n", "path": "meinberlin/apps/moderationtasks/dashboard.py"}]}
650
139
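The golden diff above is a one-token fix: the moderation-tasks dashboard component's `is_effective` guard listed the blueprint types it applies to, but used `'PB1'` where the single-phase participatory budget is actually registered as `'PB'`, so the component (and the resulting idea filter) never activated for that module type. Below is a standalone sketch of the corrected guard; the `Module` class and the `'IC'` blueprint id are illustrative stand-ins, not the real adhocracy4/meinberlin models:

```python
from dataclasses import dataclass


@dataclass
class Module:
    """Hypothetical stand-in for adhocracy4's Module model."""
    blueprint_type: str


# Per the golden diff: one-, two-, and three-phase participatory budget.
# The one-phase variant is 'PB', not 'PB1'.
MODERATION_TASK_BLUEPRINTS = ("PB", "PB2", "PB3")


def is_effective(module: Module) -> bool:
    return module.blueprint_type in MODERATION_TASK_BLUEPRINTS


assert is_effective(Module(blueprint_type="PB"))      # one-phase PB: now enabled
assert not is_effective(Module(blueprint_type="IC"))  # unrelated blueprint
```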
gh_patches_debug_61599
rasdani/github-patches
git_diff
beetbox__beets-3159
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- BadFiles plugin crashes beets with latest git master ### Problem If the `badfiles` plugin is activated, beets crashes when starting an import task. Running this command in verbose (`-vv`) mode: ```sh $ beet -vv import --write /data/music user configuration: /home/jan/.config/beets/config.yaml data directory: /home/jan/.config/beets plugin paths: Sending event: pluginload artresizer: method is (2, (7, 0, 8)) lyrics: Disabling google source: no API key configured. library database: /home/jan/beets.db library directory: /data/music Sending event: library_opened Traceback (most recent call last): File "/home/jan/.local/bin/beet", line 11, in <module> load_entry_point('beets', 'console_scripts', 'beet')() File "/data/jan/Projects/beets/beets/ui/__init__.py", line 1266, in main _raw_main(args) File "/data/jan/Projects/beets/beets/ui/__init__.py", line 1253, in _raw_main subcommand.func(lib, suboptions, subargs) File "/data/jan/Projects/beets/beets/ui/commands.py", line 955, in import_func import_files(lib, paths, query) File "/data/jan/Projects/beets/beets/ui/commands.py", line 925, in import_files session.run() File "/data/jan/Projects/beets/beets/importer.py", line 316, in run for stage_func in plugins.early_import_stages(): File "/data/jan/Projects/beets/beets/plugins.py", line 426, in early_import_stages stages += plugin.get_early_import_stages() File "/data/jan/Projects/beets/beets/plugins.py", line 112, in get_early_import_stages return self._set_stage_log_level(self.early_import_stages) AttributeError: 'BadFiles' object has no attribute 'early_import_stages' ``` ### Setup * OS: Arch Linux * Python version: 3.7.2 * beets version: be118b92 * Turning off plugins made problem go away (yes/no): Yes (Disabling the `badfiles` plugin suffices) My configuration (output of `beet config`) is: https://gist.github.com/Holzhaus/500b790c06fe2250ac9182bd8a6760da --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `beetsplug/badfiles.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 # This file is part of beets. 3 # Copyright 2016, François-Xavier Thomas. 4 # 5 # Permission is hereby granted, free of charge, to any person obtaining 6 # a copy of this software and associated documentation files (the 7 # "Software"), to deal in the Software without restriction, including 8 # without limitation the rights to use, copy, modify, merge, publish, 9 # distribute, sublicense, and/or sell copies of the Software, and to 10 # permit persons to whom the Software is furnished to do so, subject to 11 # the following conditions: 12 # 13 # The above copyright notice and this permission notice shall be 14 # included in all copies or substantial portions of the Software. 15 16 """Use command-line tools to check for audio file corruption. 17 """ 18 19 from __future__ import division, absolute_import, print_function 20 21 from subprocess import check_output, CalledProcessError, list2cmdline, STDOUT 22 23 import shlex 24 import os 25 import errno 26 import sys 27 import six 28 from beets.plugins import BeetsPlugin 29 from beets.ui import Subcommand 30 from beets.util import displayable_path, confit, par_map 31 from beets import ui 32 33 34 class CheckerCommandException(Exception): 35 """Raised when running a checker failed. 36 37 Attributes: 38 checker: Checker command name. 39 path: Path to the file being validated. 
40 errno: Error number from the checker execution error. 41 msg: Message from the checker execution error. 42 """ 43 44 def __init__(self, cmd, oserror): 45 self.checker = cmd[0] 46 self.path = cmd[-1] 47 self.errno = oserror.errno 48 self.msg = str(oserror) 49 50 51 class BadFiles(BeetsPlugin): 52 def __init__(self): 53 self.verbose = False 54 55 def run_command(self, cmd): 56 self._log.debug(u"running command: {}", 57 displayable_path(list2cmdline(cmd))) 58 try: 59 output = check_output(cmd, stderr=STDOUT) 60 errors = 0 61 status = 0 62 except CalledProcessError as e: 63 output = e.output 64 errors = 1 65 status = e.returncode 66 except OSError as e: 67 raise CheckerCommandException(cmd, e) 68 output = output.decode(sys.getfilesystemencoding()) 69 return status, errors, [line for line in output.split("\n") if line] 70 71 def check_mp3val(self, path): 72 status, errors, output = self.run_command(["mp3val", path]) 73 if status == 0: 74 output = [line for line in output if line.startswith("WARNING:")] 75 errors = len(output) 76 return status, errors, output 77 78 def check_flac(self, path): 79 return self.run_command(["flac", "-wst", path]) 80 81 def check_custom(self, command): 82 def checker(path): 83 cmd = shlex.split(command) 84 cmd.append(path) 85 return self.run_command(cmd) 86 return checker 87 88 def get_checker(self, ext): 89 ext = ext.lower() 90 try: 91 command = self.config['commands'].get(dict).get(ext) 92 except confit.NotFoundError: 93 command = None 94 if command: 95 return self.check_custom(command) 96 if ext == "mp3": 97 return self.check_mp3val 98 if ext == "flac": 99 return self.check_flac 100 101 def check_item(self, item): 102 # First, check whether the path exists. If not, the user 103 # should probably run `beet update` to cleanup your library. 
104 dpath = displayable_path(item.path) 105 self._log.debug(u"checking path: {}", dpath) 106 if not os.path.exists(item.path): 107 ui.print_(u"{}: file does not exist".format( 108 ui.colorize('text_error', dpath))) 109 110 # Run the checker against the file if one is found 111 ext = os.path.splitext(item.path)[1][1:].decode('utf8', 'ignore') 112 checker = self.get_checker(ext) 113 if not checker: 114 self._log.error(u"no checker specified in the config for {}", 115 ext) 116 return 117 path = item.path 118 if not isinstance(path, six.text_type): 119 path = item.path.decode(sys.getfilesystemencoding()) 120 try: 121 status, errors, output = checker(path) 122 except CheckerCommandException as e: 123 if e.errno == errno.ENOENT: 124 self._log.error( 125 u"command not found: {} when validating file: {}", 126 e.checker, 127 e.path 128 ) 129 else: 130 self._log.error(u"error invoking {}: {}", e.checker, e.msg) 131 return 132 if status > 0: 133 ui.print_(u"{}: checker exited with status {}" 134 .format(ui.colorize('text_error', dpath), status)) 135 for line in output: 136 ui.print_(u" {}".format(displayable_path(line))) 137 elif errors > 0: 138 ui.print_(u"{}: checker found {} errors or warnings" 139 .format(ui.colorize('text_warning', dpath), errors)) 140 for line in output: 141 ui.print_(u" {}".format(displayable_path(line))) 142 elif self.verbose: 143 ui.print_(u"{}: ok".format(ui.colorize('text_success', dpath))) 144 145 def command(self, lib, opts, args): 146 # Get items from arguments 147 items = lib.items(ui.decargs(args)) 148 self.verbose = opts.verbose 149 par_map(self.check_item, items) 150 151 def commands(self): 152 bad_command = Subcommand('bad', 153 help=u'check for corrupt or missing files') 154 bad_command.parser.add_option( 155 u'-v', u'--verbose', 156 action='store_true', default=False, dest='verbose', 157 help=u'view results for both the bad and uncorrupted files' 158 ) 159 bad_command.func = self.command 160 return [bad_command] 161 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/beetsplug/badfiles.py b/beetsplug/badfiles.py --- a/beetsplug/badfiles.py +++ b/beetsplug/badfiles.py @@ -50,6 +50,7 @@ class BadFiles(BeetsPlugin): def __init__(self): + super(BadFiles, self).__init__() self.verbose = False def run_command(self, cmd):
{"golden_diff": "diff --git a/beetsplug/badfiles.py b/beetsplug/badfiles.py\n--- a/beetsplug/badfiles.py\n+++ b/beetsplug/badfiles.py\n@@ -50,6 +50,7 @@\n \n class BadFiles(BeetsPlugin):\n def __init__(self):\n+ super(BadFiles, self).__init__()\n self.verbose = False\n \n def run_command(self, cmd):\n", "issue": "BadFiles plugin crashes beets with latest git master\n### Problem\r\n\r\nIf the `badfiles` plugin is activated, beets crashes when starting an import task.\r\n\r\nRunning this command in verbose (`-vv`) mode:\r\n\r\n```sh\r\n$ beet -vv import --write /data/music\r\nuser configuration: /home/jan/.config/beets/config.yaml\r\ndata directory: /home/jan/.config/beets\r\nplugin paths:\r\nSending event: pluginload\r\nartresizer: method is (2, (7, 0, 8))\r\nlyrics: Disabling google source: no API key configured.\r\nlibrary database: /home/jan/beets.db\r\nlibrary directory: /data/music\r\nSending event: library_opened\r\nTraceback (most recent call last):\r\n File \"/home/jan/.local/bin/beet\", line 11, in <module>\r\n load_entry_point('beets', 'console_scripts', 'beet')()\r\n File \"/data/jan/Projects/beets/beets/ui/__init__.py\", line 1266, in main\r\n _raw_main(args)\r\n File \"/data/jan/Projects/beets/beets/ui/__init__.py\", line 1253, in _raw_main\r\n subcommand.func(lib, suboptions, subargs)\r\n File \"/data/jan/Projects/beets/beets/ui/commands.py\", line 955, in import_func\r\n import_files(lib, paths, query)\r\n File \"/data/jan/Projects/beets/beets/ui/commands.py\", line 925, in import_files\r\n session.run()\r\n File \"/data/jan/Projects/beets/beets/importer.py\", line 316, in run\r\n for stage_func in plugins.early_import_stages():\r\n File \"/data/jan/Projects/beets/beets/plugins.py\", line 426, in early_import_stages\r\n stages += plugin.get_early_import_stages()\r\n File \"/data/jan/Projects/beets/beets/plugins.py\", line 112, in get_early_import_stages\r\n return self._set_stage_log_level(self.early_import_stages)\r\nAttributeError: 'BadFiles' object has no attribute 'early_import_stages'\r\n```\r\n\r\n### Setup\r\n\r\n* OS: Arch Linux\r\n* Python version: 3.7.2\r\n* beets version: be118b92\r\n* Turning off plugins made problem go away (yes/no): Yes (Disabling the `badfiles` plugin suffices)\r\n\r\nMy configuration (output of `beet config`) is: https://gist.github.com/Holzhaus/500b790c06fe2250ac9182bd8a6760da\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Fran\u00e7ois-Xavier Thomas.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Use command-line tools to check for audio file corruption.\n\"\"\"\n\nfrom __future__ import division, absolute_import, print_function\n\nfrom subprocess import check_output, CalledProcessError, list2cmdline, STDOUT\n\nimport shlex\nimport os\nimport errno\nimport sys\nimport six\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import Subcommand\nfrom beets.util import displayable_path, confit, par_map\nfrom beets import ui\n\n\nclass 
CheckerCommandException(Exception):\n \"\"\"Raised when running a checker failed.\n\n Attributes:\n checker: Checker command name.\n path: Path to the file being validated.\n errno: Error number from the checker execution error.\n msg: Message from the checker execution error.\n \"\"\"\n\n def __init__(self, cmd, oserror):\n self.checker = cmd[0]\n self.path = cmd[-1]\n self.errno = oserror.errno\n self.msg = str(oserror)\n\n\nclass BadFiles(BeetsPlugin):\n def __init__(self):\n self.verbose = False\n\n def run_command(self, cmd):\n self._log.debug(u\"running command: {}\",\n displayable_path(list2cmdline(cmd)))\n try:\n output = check_output(cmd, stderr=STDOUT)\n errors = 0\n status = 0\n except CalledProcessError as e:\n output = e.output\n errors = 1\n status = e.returncode\n except OSError as e:\n raise CheckerCommandException(cmd, e)\n output = output.decode(sys.getfilesystemencoding())\n return status, errors, [line for line in output.split(\"\\n\") if line]\n\n def check_mp3val(self, path):\n status, errors, output = self.run_command([\"mp3val\", path])\n if status == 0:\n output = [line for line in output if line.startswith(\"WARNING:\")]\n errors = len(output)\n return status, errors, output\n\n def check_flac(self, path):\n return self.run_command([\"flac\", \"-wst\", path])\n\n def check_custom(self, command):\n def checker(path):\n cmd = shlex.split(command)\n cmd.append(path)\n return self.run_command(cmd)\n return checker\n\n def get_checker(self, ext):\n ext = ext.lower()\n try:\n command = self.config['commands'].get(dict).get(ext)\n except confit.NotFoundError:\n command = None\n if command:\n return self.check_custom(command)\n if ext == \"mp3\":\n return self.check_mp3val\n if ext == \"flac\":\n return self.check_flac\n\n def check_item(self, item):\n # First, check whether the path exists. 
If not, the user\n # should probably run `beet update` to cleanup your library.\n dpath = displayable_path(item.path)\n self._log.debug(u\"checking path: {}\", dpath)\n if not os.path.exists(item.path):\n ui.print_(u\"{}: file does not exist\".format(\n ui.colorize('text_error', dpath)))\n\n # Run the checker against the file if one is found\n ext = os.path.splitext(item.path)[1][1:].decode('utf8', 'ignore')\n checker = self.get_checker(ext)\n if not checker:\n self._log.error(u\"no checker specified in the config for {}\",\n ext)\n return\n path = item.path\n if not isinstance(path, six.text_type):\n path = item.path.decode(sys.getfilesystemencoding())\n try:\n status, errors, output = checker(path)\n except CheckerCommandException as e:\n if e.errno == errno.ENOENT:\n self._log.error(\n u\"command not found: {} when validating file: {}\",\n e.checker,\n e.path\n )\n else:\n self._log.error(u\"error invoking {}: {}\", e.checker, e.msg)\n return\n if status > 0:\n ui.print_(u\"{}: checker exited with status {}\"\n .format(ui.colorize('text_error', dpath), status))\n for line in output:\n ui.print_(u\" {}\".format(displayable_path(line)))\n elif errors > 0:\n ui.print_(u\"{}: checker found {} errors or warnings\"\n .format(ui.colorize('text_warning', dpath), errors))\n for line in output:\n ui.print_(u\" {}\".format(displayable_path(line)))\n elif self.verbose:\n ui.print_(u\"{}: ok\".format(ui.colorize('text_success', dpath)))\n\n def command(self, lib, opts, args):\n # Get items from arguments\n items = lib.items(ui.decargs(args))\n self.verbose = opts.verbose\n par_map(self.check_item, items)\n\n def commands(self):\n bad_command = Subcommand('bad',\n help=u'check for corrupt or missing files')\n bad_command.parser.add_option(\n u'-v', u'--verbose',\n action='store_true', default=False, dest='verbose',\n help=u'view results for both the bad and uncorrupted files'\n )\n bad_command.func = self.command\n return [bad_command]\n", "path": "beetsplug/badfiles.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Fran\u00e7ois-Xavier Thomas.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Use command-line tools to check for audio file corruption.\n\"\"\"\n\nfrom __future__ import division, absolute_import, print_function\n\nfrom subprocess import check_output, CalledProcessError, list2cmdline, STDOUT\n\nimport shlex\nimport os\nimport errno\nimport sys\nimport six\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import Subcommand\nfrom beets.util import displayable_path, confit, par_map\nfrom beets import ui\n\n\nclass CheckerCommandException(Exception):\n \"\"\"Raised when running a checker failed.\n\n Attributes:\n checker: Checker command name.\n path: Path to the file being validated.\n errno: Error number from the checker execution error.\n msg: Message from the checker execution error.\n \"\"\"\n\n def __init__(self, cmd, oserror):\n self.checker = cmd[0]\n self.path = cmd[-1]\n 
self.errno = oserror.errno\n self.msg = str(oserror)\n\n\nclass BadFiles(BeetsPlugin):\n def __init__(self):\n super(BadFiles, self).__init__()\n self.verbose = False\n\n def run_command(self, cmd):\n self._log.debug(u\"running command: {}\",\n displayable_path(list2cmdline(cmd)))\n try:\n output = check_output(cmd, stderr=STDOUT)\n errors = 0\n status = 0\n except CalledProcessError as e:\n output = e.output\n errors = 1\n status = e.returncode\n except OSError as e:\n raise CheckerCommandException(cmd, e)\n output = output.decode(sys.getfilesystemencoding())\n return status, errors, [line for line in output.split(\"\\n\") if line]\n\n def check_mp3val(self, path):\n status, errors, output = self.run_command([\"mp3val\", path])\n if status == 0:\n output = [line for line in output if line.startswith(\"WARNING:\")]\n errors = len(output)\n return status, errors, output\n\n def check_flac(self, path):\n return self.run_command([\"flac\", \"-wst\", path])\n\n def check_custom(self, command):\n def checker(path):\n cmd = shlex.split(command)\n cmd.append(path)\n return self.run_command(cmd)\n return checker\n\n def get_checker(self, ext):\n ext = ext.lower()\n try:\n command = self.config['commands'].get(dict).get(ext)\n except confit.NotFoundError:\n command = None\n if command:\n return self.check_custom(command)\n if ext == \"mp3\":\n return self.check_mp3val\n if ext == \"flac\":\n return self.check_flac\n\n def check_item(self, item):\n # First, check whether the path exists. If not, the user\n # should probably run `beet update` to cleanup your library.\n dpath = displayable_path(item.path)\n self._log.debug(u\"checking path: {}\", dpath)\n if not os.path.exists(item.path):\n ui.print_(u\"{}: file does not exist\".format(\n ui.colorize('text_error', dpath)))\n\n # Run the checker against the file if one is found\n ext = os.path.splitext(item.path)[1][1:].decode('utf8', 'ignore')\n checker = self.get_checker(ext)\n if not checker:\n self._log.error(u\"no checker specified in the config for {}\",\n ext)\n return\n path = item.path\n if not isinstance(path, six.text_type):\n path = item.path.decode(sys.getfilesystemencoding())\n try:\n status, errors, output = checker(path)\n except CheckerCommandException as e:\n if e.errno == errno.ENOENT:\n self._log.error(\n u\"command not found: {} when validating file: {}\",\n e.checker,\n e.path\n )\n else:\n self._log.error(u\"error invoking {}: {}\", e.checker, e.msg)\n return\n if status > 0:\n ui.print_(u\"{}: checker exited with status {}\"\n .format(ui.colorize('text_error', dpath), status))\n for line in output:\n ui.print_(u\" {}\".format(displayable_path(line)))\n elif errors > 0:\n ui.print_(u\"{}: checker found {} errors or warnings\"\n .format(ui.colorize('text_warning', dpath), errors))\n for line in output:\n ui.print_(u\" {}\".format(displayable_path(line)))\n elif self.verbose:\n ui.print_(u\"{}: ok\".format(ui.colorize('text_success', dpath)))\n\n def command(self, lib, opts, args):\n # Get items from arguments\n items = lib.items(ui.decargs(args))\n self.verbose = opts.verbose\n par_map(self.check_item, items)\n\n def commands(self):\n bad_command = Subcommand('bad',\n help=u'check for corrupt or missing files')\n bad_command.parser.add_option(\n u'-v', u'--verbose',\n action='store_true', default=False, dest='verbose',\n help=u'view results for both the bad and uncorrupted files'\n )\n bad_command.func = self.command\n return [bad_command]\n", "path": "beetsplug/badfiles.py"}]}
2,501
92
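The beets crash above is the classic overridden-`__init__`-without-`super()` bug: `BeetsPlugin.__init__` is what creates `early_import_stages` (and the plugin's `_log`), so an override that skips it leaves the instance missing attributes the plugin loader reads at import time, and the one-line golden diff simply restores the `super()` call. A self-contained reproduction of the failure mode, using placeholder classes rather than the real beets ones:

```python
class BasePlugin:
    """Placeholder for beets' BeetsPlugin; attribute names simplified."""

    def __init__(self):
        # Attributes the plugin loader expects every plugin to carry.
        self.early_import_stages = []
        self.import_stages = []


class BrokenPlugin(BasePlugin):
    def __init__(self):  # forgets to call super().__init__()
        self.verbose = False


class FixedPlugin(BasePlugin):
    def __init__(self):
        super().__init__()  # the one-line fix from the golden diff
        self.verbose = False


print(FixedPlugin().early_import_stages)  # ok: []
try:
    BrokenPlugin().early_import_stages
except AttributeError as exc:
    print(exc)  # mirrors the AttributeError in the issue's traceback
```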
gh_patches_debug_30055
rasdani/github-patches
git_diff
pytorch__torchdynamo-193
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Make torchdynamo not import third party package in `skipfiles.py` @xuzhao9 in https://github.com/facebookresearch/torchdynamo/issues/107#issuecomment-1095681515 found that the following line makes alexnet 18% slower: https://github.com/jansel/torchdynamo/blob/bf90b8cdbacf35944fa8c12185b1823dc5cb90bb/torchdynamo/skipfiles.py#L123 It seems importing: "networkx", "omegaconf", "onnx", "pandas", and "sklearn" cause performance issues. TorchDynamo is only importing these modules to find the filename, which is also a bit wasteful. We should rewrite `skipfiles.py` to use [find_spec](https://docs.python.org/3/library/importlib.html#importlib.abc.PathEntryFinder.find_spec) instead, so we don't need to import unused packages. Also, I think we can cut down the list of modules in skipfiles dramatically. Most of those were added when TorchDynamo didn't automatically skip backends and supported much less of python, so likely many (most?) can be removed. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `torchdynamo/skipfiles.py` Content: ``` 1 import abc 2 import collections 3 import contextlib 4 import copy 5 import copyreg 6 import dataclasses 7 import enum 8 import functools 9 import importlib 10 import inspect 11 import linecache 12 import logging 13 import multiprocessing 14 import operator 15 import os 16 import posixpath 17 import random 18 import re 19 import selectors 20 import signal 21 import tempfile 22 import threading 23 import tokenize 24 import traceback 25 import types 26 import typing 27 import unittest 28 import weakref 29 30 import _collections_abc 31 import _weakrefset 32 import torch 33 34 35 def _module_dir(m: types.ModuleType): 36 return re.sub(r"__init__.py$", "", m.__file__) 37 38 39 SKIP_DIRS = [ 40 # torch.* 41 _module_dir(torch), 42 # torchdynamo.* 43 os.path.dirname(__file__) + "/", 44 "<frozen importlib", 45 "<__array_function__ internals>", 46 ] + [ 47 # skip some standard libs 48 _module_dir(m) 49 for m in ( 50 abc, 51 collections, 52 contextlib, 53 copy, 54 copyreg, 55 dataclasses, 56 enum, 57 functools, 58 importlib, 59 inspect, 60 linecache, 61 logging, 62 multiprocessing, 63 operator, 64 os, 65 posixpath, 66 random, 67 re, 68 selectors, 69 signal, 70 tempfile, 71 threading, 72 tokenize, 73 traceback, 74 types, 75 typing, 76 unittest, 77 weakref, 78 _collections_abc, 79 _weakrefset, 80 ) 81 ] 82 SKIP_DIRS_RE = None # set in add() below 83 FILENAME_ALLOWLIST = { 84 torch.nn.Sequential.__init__.__code__.co_filename, 85 } 86 87 88 def add(module: types.ModuleType): 89 assert isinstance(module, types.ModuleType) 90 global SKIP_DIRS_RE 91 name = module.__file__ 92 if name is None: 93 return 94 SKIP_DIRS.append(_module_dir(module)) 95 SKIP_DIRS_RE = re.compile(f"^({'|'.join(map(re.escape, SKIP_DIRS))})") 96 97 98 def check(filename, allow_torch=False): 99 """Should skip this file?""" 100 if filename is None: 101 return True 102 if filename in FILENAME_ALLOWLIST: 103 return False 104 if allow_torch and is_torch(filename): 105 return False 106 return bool(SKIP_DIRS_RE.match(filename)) 107 108 109 # skip common third party libs 110 for _name in ( 111 "functorch", 112 "intel_extension_for_pytorch", 113 "networkx", 114 "numpy", 115 "omegaconf", 116 "onnx", 117 "onnxruntime", 118 "onnx_tf", 119 "pandas", 120 "sklearn", 121 "tabulate", 122 "tensorflow", 123 "tensorrt", 124 
"torch2trt", 125 "tqdm", 126 "tree", 127 "tvm", 128 "fx2trt_oss", 129 ): 130 try: 131 add(importlib.import_module(_name)) 132 except (ImportError, TypeError): 133 pass 134 135 136 def is_torch_inline_allowed(filename): 137 return filename.startswith(_module_dir(torch.nn)) or filename.startswith( 138 _module_dir(torch.distributions) 139 ) 140 141 142 def is_torch(filename): 143 return filename.startswith(_module_dir(torch)) 144 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/torchdynamo/skipfiles.py b/torchdynamo/skipfiles.py --- a/torchdynamo/skipfiles.py +++ b/torchdynamo/skipfiles.py @@ -32,8 +32,12 @@ import torch +def _strip_init_py(s): + return re.sub(r"__init__.py$", "", s) + + def _module_dir(m: types.ModuleType): - return re.sub(r"__init__.py$", "", m.__file__) + return _strip_init_py(m.__file__) SKIP_DIRS = [ @@ -79,22 +83,32 @@ _weakrefset, ) ] -SKIP_DIRS_RE = None # set in add() below FILENAME_ALLOWLIST = { torch.nn.Sequential.__init__.__code__.co_filename, } +SKIP_DIRS_RE = None -def add(module: types.ModuleType): - assert isinstance(module, types.ModuleType) +def _recompile_re(): global SKIP_DIRS_RE - name = module.__file__ - if name is None: - return - SKIP_DIRS.append(_module_dir(module)) SKIP_DIRS_RE = re.compile(f"^({'|'.join(map(re.escape, SKIP_DIRS))})") +def add(import_name: str): + if isinstance(import_name, types.ModuleType): + return add(import_name.__name__) + assert isinstance(import_name, str) + module_spec = importlib.util.find_spec(import_name) + if not module_spec: + return + origin = module_spec.origin + if origin is None: + return + global SKIP_DIRS_RE + SKIP_DIRS.append(_strip_init_py(origin)) + _recompile_re() + + def check(filename, allow_torch=False): """Should skip this file?""" if filename is None: @@ -127,10 +141,9 @@ "tvm", "fx2trt_oss", ): - try: - add(importlib.import_module(_name)) - except (ImportError, TypeError): - pass + add(_name) + +_recompile_re() def is_torch_inline_allowed(filename):
{"golden_diff": "diff --git a/torchdynamo/skipfiles.py b/torchdynamo/skipfiles.py\n--- a/torchdynamo/skipfiles.py\n+++ b/torchdynamo/skipfiles.py\n@@ -32,8 +32,12 @@\n import torch\n \n \n+def _strip_init_py(s):\n+ return re.sub(r\"__init__.py$\", \"\", s)\n+\n+\n def _module_dir(m: types.ModuleType):\n- return re.sub(r\"__init__.py$\", \"\", m.__file__)\n+ return _strip_init_py(m.__file__)\n \n \n SKIP_DIRS = [\n@@ -79,22 +83,32 @@\n _weakrefset,\n )\n ]\n-SKIP_DIRS_RE = None # set in add() below\n FILENAME_ALLOWLIST = {\n torch.nn.Sequential.__init__.__code__.co_filename,\n }\n+SKIP_DIRS_RE = None\n \n \n-def add(module: types.ModuleType):\n- assert isinstance(module, types.ModuleType)\n+def _recompile_re():\n global SKIP_DIRS_RE\n- name = module.__file__\n- if name is None:\n- return\n- SKIP_DIRS.append(_module_dir(module))\n SKIP_DIRS_RE = re.compile(f\"^({'|'.join(map(re.escape, SKIP_DIRS))})\")\n \n \n+def add(import_name: str):\n+ if isinstance(import_name, types.ModuleType):\n+ return add(import_name.__name__)\n+ assert isinstance(import_name, str)\n+ module_spec = importlib.util.find_spec(import_name)\n+ if not module_spec:\n+ return\n+ origin = module_spec.origin\n+ if origin is None:\n+ return\n+ global SKIP_DIRS_RE\n+ SKIP_DIRS.append(_strip_init_py(origin))\n+ _recompile_re()\n+\n+\n def check(filename, allow_torch=False):\n \"\"\"Should skip this file?\"\"\"\n if filename is None:\n@@ -127,10 +141,9 @@\n \"tvm\",\n \"fx2trt_oss\",\n ):\n- try:\n- add(importlib.import_module(_name))\n- except (ImportError, TypeError):\n- pass\n+ add(_name)\n+\n+_recompile_re()\n \n \n def is_torch_inline_allowed(filename):\n", "issue": "Make torchdynamo not import third party package in `skipfiles.py`\n@xuzhao9 in https://github.com/facebookresearch/torchdynamo/issues/107#issuecomment-1095681515 found that the following line makes alexnet 18% slower: \r\n\r\nhttps://github.com/jansel/torchdynamo/blob/bf90b8cdbacf35944fa8c12185b1823dc5cb90bb/torchdynamo/skipfiles.py#L123\r\n\r\nIt seems importing: \"networkx\", \"omegaconf\", \"onnx\", \"pandas\", and \"sklearn\" cause performance issues.\r\n\r\nTorchDynamo is only importing these modules to find the filename, which is also a bit wasteful. We should rewrite `skipfiles.py` to use [find_spec](https://docs.python.org/3/library/importlib.html#importlib.abc.PathEntryFinder.find_spec) instead, so we don't need to import unused packages.\r\n\r\nAlso, I think we can cut down the list of modules in skipfiles dramatically. Most of those were added when TorchDynamo didn't automatically skip backends and supported much less of python, so likely many (most?) 
can be removed.\r\n\n", "before_files": [{"content": "import abc\nimport collections\nimport contextlib\nimport copy\nimport copyreg\nimport dataclasses\nimport enum\nimport functools\nimport importlib\nimport inspect\nimport linecache\nimport logging\nimport multiprocessing\nimport operator\nimport os\nimport posixpath\nimport random\nimport re\nimport selectors\nimport signal\nimport tempfile\nimport threading\nimport tokenize\nimport traceback\nimport types\nimport typing\nimport unittest\nimport weakref\n\nimport _collections_abc\nimport _weakrefset\nimport torch\n\n\ndef _module_dir(m: types.ModuleType):\n return re.sub(r\"__init__.py$\", \"\", m.__file__)\n\n\nSKIP_DIRS = [\n # torch.*\n _module_dir(torch),\n # torchdynamo.*\n os.path.dirname(__file__) + \"/\",\n \"<frozen importlib\",\n \"<__array_function__ internals>\",\n] + [\n # skip some standard libs\n _module_dir(m)\n for m in (\n abc,\n collections,\n contextlib,\n copy,\n copyreg,\n dataclasses,\n enum,\n functools,\n importlib,\n inspect,\n linecache,\n logging,\n multiprocessing,\n operator,\n os,\n posixpath,\n random,\n re,\n selectors,\n signal,\n tempfile,\n threading,\n tokenize,\n traceback,\n types,\n typing,\n unittest,\n weakref,\n _collections_abc,\n _weakrefset,\n )\n]\nSKIP_DIRS_RE = None # set in add() below\nFILENAME_ALLOWLIST = {\n torch.nn.Sequential.__init__.__code__.co_filename,\n}\n\n\ndef add(module: types.ModuleType):\n assert isinstance(module, types.ModuleType)\n global SKIP_DIRS_RE\n name = module.__file__\n if name is None:\n return\n SKIP_DIRS.append(_module_dir(module))\n SKIP_DIRS_RE = re.compile(f\"^({'|'.join(map(re.escape, SKIP_DIRS))})\")\n\n\ndef check(filename, allow_torch=False):\n \"\"\"Should skip this file?\"\"\"\n if filename is None:\n return True\n if filename in FILENAME_ALLOWLIST:\n return False\n if allow_torch and is_torch(filename):\n return False\n return bool(SKIP_DIRS_RE.match(filename))\n\n\n# skip common third party libs\nfor _name in (\n \"functorch\",\n \"intel_extension_for_pytorch\",\n \"networkx\",\n \"numpy\",\n \"omegaconf\",\n \"onnx\",\n \"onnxruntime\",\n \"onnx_tf\",\n \"pandas\",\n \"sklearn\",\n \"tabulate\",\n \"tensorflow\",\n \"tensorrt\",\n \"torch2trt\",\n \"tqdm\",\n \"tree\",\n \"tvm\",\n \"fx2trt_oss\",\n):\n try:\n add(importlib.import_module(_name))\n except (ImportError, TypeError):\n pass\n\n\ndef is_torch_inline_allowed(filename):\n return filename.startswith(_module_dir(torch.nn)) or filename.startswith(\n _module_dir(torch.distributions)\n )\n\n\ndef is_torch(filename):\n return filename.startswith(_module_dir(torch))\n", "path": "torchdynamo/skipfiles.py"}], "after_files": [{"content": "import abc\nimport collections\nimport contextlib\nimport copy\nimport copyreg\nimport dataclasses\nimport enum\nimport functools\nimport importlib\nimport inspect\nimport linecache\nimport logging\nimport multiprocessing\nimport operator\nimport os\nimport posixpath\nimport random\nimport re\nimport selectors\nimport signal\nimport tempfile\nimport threading\nimport tokenize\nimport traceback\nimport types\nimport typing\nimport unittest\nimport weakref\n\nimport _collections_abc\nimport _weakrefset\nimport torch\n\n\ndef _strip_init_py(s):\n return re.sub(r\"__init__.py$\", \"\", s)\n\n\ndef _module_dir(m: types.ModuleType):\n return _strip_init_py(m.__file__)\n\n\nSKIP_DIRS = [\n # torch.*\n _module_dir(torch),\n # torchdynamo.*\n os.path.dirname(__file__) + \"/\",\n \"<frozen importlib\",\n \"<__array_function__ internals>\",\n] + [\n # skip some standard 
libs\n _module_dir(m)\n for m in (\n abc,\n collections,\n contextlib,\n copy,\n copyreg,\n dataclasses,\n enum,\n functools,\n importlib,\n inspect,\n linecache,\n logging,\n multiprocessing,\n operator,\n os,\n posixpath,\n random,\n re,\n selectors,\n signal,\n tempfile,\n threading,\n tokenize,\n traceback,\n types,\n typing,\n unittest,\n weakref,\n _collections_abc,\n _weakrefset,\n )\n]\nFILENAME_ALLOWLIST = {\n torch.nn.Sequential.__init__.__code__.co_filename,\n}\nSKIP_DIRS_RE = None\n\n\ndef _recompile_re():\n global SKIP_DIRS_RE\n SKIP_DIRS_RE = re.compile(f\"^({'|'.join(map(re.escape, SKIP_DIRS))})\")\n\n\ndef add(import_name: str):\n if isinstance(import_name, types.ModuleType):\n return add(import_name.__name__)\n assert isinstance(import_name, str)\n module_spec = importlib.util.find_spec(import_name)\n if not module_spec:\n return\n origin = module_spec.origin\n if origin is None:\n return\n global SKIP_DIRS_RE\n SKIP_DIRS.append(_strip_init_py(origin))\n _recompile_re()\n\n\ndef check(filename, allow_torch=False):\n \"\"\"Should skip this file?\"\"\"\n if filename is None:\n return True\n if filename in FILENAME_ALLOWLIST:\n return False\n if allow_torch and is_torch(filename):\n return False\n return bool(SKIP_DIRS_RE.match(filename))\n\n\n# skip common third party libs\nfor _name in (\n \"functorch\",\n \"intel_extension_for_pytorch\",\n \"networkx\",\n \"numpy\",\n \"omegaconf\",\n \"onnx\",\n \"onnxruntime\",\n \"onnx_tf\",\n \"pandas\",\n \"sklearn\",\n \"tabulate\",\n \"tensorflow\",\n \"tensorrt\",\n \"torch2trt\",\n \"tqdm\",\n \"tree\",\n \"tvm\",\n \"fx2trt_oss\",\n):\n add(_name)\n\n_recompile_re()\n\n\ndef is_torch_inline_allowed(filename):\n return filename.startswith(_module_dir(torch.nn)) or filename.startswith(\n _module_dir(torch.distributions)\n )\n\n\ndef is_torch(filename):\n return filename.startswith(_module_dir(torch))\n", "path": "torchdynamo/skipfiles.py"}]}
1,531
485
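The torchdynamo patch replaces eager `importlib.import_module` calls, which execute each third-party package just to read its `__file__`, with `importlib.util.find_spec`, which resolves the on-disk location without running any package code; importing packages like pandas or sklearn merely to learn their paths is what caused the reported slowdown. A minimal sketch of the spec-based lookup (the example module names are arbitrary):

```python
import importlib.util


def module_origin(import_name: str):
    """Locate a module's file path without importing (executing) it."""
    spec = importlib.util.find_spec(import_name)
    if spec is None:  # package not installed
        return None
    return spec.origin  # e.g. '.../collections/__init__.py'


# Unlike importlib.import_module("pandas"), this pays no import-time cost.
print(module_origin("collections"))
print(module_origin("surely_not_installed_anywhere"))
```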
gh_patches_debug_12000
rasdani/github-patches
git_diff
strawberry-graphql__strawberry-3099
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Unable to use Unions with Generics ## Describe the Bug Not sure if it is a bug or something non supported but when passing a union to a generic type strawberry is unable to initialize the schema. The following example would work using a single strawberry type with Connection, but it fails when using an union ```python from typing import Generic, TypeVar, Union import strawberry T = TypeVar("T") @strawberry.type class Edge(Generic[T]): cursor: str node: T @strawberry.type class Connection(Generic[T]): edges: list["Edge[T]"] @strawberry.type class Entity1: id: int @strawberry.type class Entity2: id: int Entities = Union[Entity1, Entity2] @strawberry.type class Query: @strawberry.field def entities(self) -> Connection[Entities]: return Connection( edges=[ Edge( cursor="1", node=Entity1(id=1), ), Edge( cursor="2", node=Entity2(id=2), ), ], ) schema = strawberry.Schema(Query) print(schema.execute_sync("{ entities { __typename } }")) ``` error ``` raise cls(f"{self.name} fields cannot be resolved. {error}") from error TypeError: Query fields cannot be resolved. ``` ## System Information - Operating system: - Strawberry version (if applicable): 0.208.1 <!-- POLAR PLEDGE BADGE START --> ## Upvote & Fund - We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue. - We receive the funding once the issue is completed & confirmed by you. - Thank you in advance for helping prioritize & fund our backlog. <a href="https://polar.sh/strawberry-graphql/strawberry/issues/3098"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/3098/pledge.svg?darkmode=1"> <img alt="Fund with Polar" src="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/3098/pledge.svg"> </picture> </a> <!-- POLAR PLEDGE BADGE END --> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `strawberry/schema/name_converter.py` Content: ``` 1 from __future__ import annotations 2 3 from typing import TYPE_CHECKING, List, Optional, Union, cast 4 from typing_extensions import Protocol 5 6 from strawberry.custom_scalar import ScalarDefinition 7 from strawberry.directive import StrawberryDirective 8 from strawberry.enum import EnumDefinition, EnumValue 9 from strawberry.lazy_type import LazyType 10 from strawberry.schema_directive import StrawberrySchemaDirective 11 from strawberry.type import ( 12 StrawberryList, 13 StrawberryOptional, 14 has_object_definition, 15 ) 16 from strawberry.types.types import StrawberryObjectDefinition 17 from strawberry.union import StrawberryUnion 18 from strawberry.utils.str_converters import capitalize_first, to_camel_case 19 from strawberry.utils.typing import eval_type 20 21 if TYPE_CHECKING: 22 from strawberry.arguments import StrawberryArgument 23 from strawberry.field import StrawberryField 24 from strawberry.type import StrawberryType 25 26 27 class HasGraphQLName(Protocol): 28 python_name: str 29 graphql_name: Optional[str] 30 31 32 class NameConverter: 33 def __init__(self, auto_camel_case: bool = True) -> None: 34 self.auto_camel_case = auto_camel_case 35 36 def apply_naming_config(self, name: str) -> str: 37 if self.auto_camel_case: 38 name = to_camel_case(name) 39 40 return name 41 42 def from_type( 43 self, 44 type_: Union[StrawberryType, StrawberryDirective], 45 ) -> str: 46 if isinstance(type_, (StrawberryDirective, StrawberrySchemaDirective)): 47 return self.from_directive(type_) 48 if isinstance(type_, EnumDefinition): # TODO: Replace with StrawberryEnum 49 return self.from_enum(type_) 50 elif isinstance(type_, StrawberryObjectDefinition): 51 if type_.is_input: 52 return self.from_input_object(type_) 53 if type_.is_interface: 54 return self.from_interface(type_) 55 return self.from_object(type_) 56 elif isinstance(type_, StrawberryUnion): 57 return self.from_union(type_) 58 elif isinstance(type_, ScalarDefinition): # TODO: Replace with StrawberryScalar 59 return self.from_scalar(type_) 60 else: 61 return str(type_) 62 63 def from_argument(self, argument: StrawberryArgument) -> str: 64 return self.get_graphql_name(argument) 65 66 def from_object(self, object_type: StrawberryObjectDefinition) -> str: 67 # if concrete_of is not generic, than this is a subclass of an already 68 # especialized type. 
69 if object_type.concrete_of and object_type.concrete_of.is_generic: 70 return self.from_generic( 71 object_type, list(object_type.type_var_map.values()) 72 ) 73 74 return object_type.name 75 76 def from_input_object(self, input_type: StrawberryObjectDefinition) -> str: 77 return self.from_object(input_type) 78 79 def from_interface(self, interface: StrawberryObjectDefinition) -> str: 80 return self.from_object(interface) 81 82 def from_enum(self, enum: EnumDefinition) -> str: 83 return enum.name 84 85 def from_enum_value(self, enum: EnumDefinition, enum_value: EnumValue) -> str: 86 return enum_value.name 87 88 def from_directive( 89 self, directive: Union[StrawberryDirective, StrawberrySchemaDirective] 90 ) -> str: 91 name = self.get_graphql_name(directive) 92 93 if self.auto_camel_case: 94 # we don't want the first letter to be uppercase for directives 95 return name[0].lower() + name[1:] 96 97 return name 98 99 def from_scalar(self, scalar: ScalarDefinition) -> str: 100 return scalar.name 101 102 def from_field(self, field: StrawberryField) -> str: 103 return self.get_graphql_name(field) 104 105 def from_union(self, union: StrawberryUnion) -> str: 106 if union.graphql_name is not None: 107 return union.graphql_name 108 109 name = "" 110 111 for type_ in union.types: 112 if isinstance(type_, LazyType): 113 type_ = cast("StrawberryType", type_.resolve_type()) # noqa: PLW2901 114 115 if has_object_definition(type_): 116 type_name = self.from_type(type_.__strawberry_definition__) 117 else: 118 # This should only be hit when generating names for type-related 119 # exceptions 120 type_name = self.from_type(type_) 121 122 name += type_name 123 124 return name 125 126 def from_generic( 127 self, 128 generic_type: StrawberryObjectDefinition, 129 types: List[Union[StrawberryType, type]], 130 ) -> str: 131 generic_type_name = generic_type.name 132 133 names: List[str] = [] 134 135 for type_ in types: 136 name = self.get_from_type(type_) 137 names.append(name) 138 139 return "".join(names) + generic_type_name 140 141 def get_from_type(self, type_: Union[StrawberryType, type]) -> str: 142 type_ = eval_type(type_) 143 144 if isinstance(type_, LazyType): 145 name = type_.type_name 146 elif isinstance(type_, EnumDefinition): 147 name = type_.name 148 elif isinstance(type_, StrawberryUnion): 149 # TODO: test Generics with unnamed unions 150 assert type_.graphql_name 151 152 name = type_.graphql_name 153 elif isinstance(type_, StrawberryList): 154 name = self.get_from_type(type_.of_type) + "List" 155 elif isinstance(type_, StrawberryOptional): 156 name = self.get_from_type(type_.of_type) + "Optional" 157 elif hasattr(type_, "_scalar_definition"): 158 strawberry_type = type_._scalar_definition 159 160 name = strawberry_type.name 161 elif has_object_definition(type_): 162 strawberry_type = type_.__strawberry_definition__ 163 164 if ( 165 strawberry_type.is_generic 166 and not strawberry_type.is_specialized_generic 167 ): 168 types = type_.__args__ # type: ignore 169 name = self.from_generic(strawberry_type, types) 170 elif ( 171 strawberry_type.concrete_of 172 and not strawberry_type.is_specialized_generic 173 ): 174 types = list(strawberry_type.type_var_map.values()) 175 name = self.from_generic(strawberry_type, types) 176 else: 177 name = strawberry_type.name 178 else: 179 name = type_.__name__ 180 181 return capitalize_first(name) 182 183 def get_graphql_name(self, obj: HasGraphQLName) -> str: 184 if obj.graphql_name is not None: 185 return obj.graphql_name 186 187 assert obj.python_name 188 189 return 
self.apply_naming_config(obj.python_name) 190 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/strawberry/schema/name_converter.py b/strawberry/schema/name_converter.py --- a/strawberry/schema/name_converter.py +++ b/strawberry/schema/name_converter.py @@ -146,10 +146,7 @@ elif isinstance(type_, EnumDefinition): name = type_.name elif isinstance(type_, StrawberryUnion): - # TODO: test Generics with unnamed unions - assert type_.graphql_name - - name = type_.graphql_name + name = type_.graphql_name if type_.graphql_name else self.from_union(type_) elif isinstance(type_, StrawberryList): name = self.get_from_type(type_.of_type) + "List" elif isinstance(type_, StrawberryOptional):
{"golden_diff": "diff --git a/strawberry/schema/name_converter.py b/strawberry/schema/name_converter.py\n--- a/strawberry/schema/name_converter.py\n+++ b/strawberry/schema/name_converter.py\n@@ -146,10 +146,7 @@\n elif isinstance(type_, EnumDefinition):\n name = type_.name\n elif isinstance(type_, StrawberryUnion):\n- # TODO: test Generics with unnamed unions\n- assert type_.graphql_name\n-\n- name = type_.graphql_name\n+ name = type_.graphql_name if type_.graphql_name else self.from_union(type_)\n elif isinstance(type_, StrawberryList):\n name = self.get_from_type(type_.of_type) + \"List\"\n elif isinstance(type_, StrawberryOptional):\n", "issue": "Unable to use Unions with Generics\n## Describe the Bug\r\n\r\nNot sure if it is a bug or something non supported but when passing a union to a generic type strawberry is unable to initialize the schema.\r\n\r\nThe following example would work using a single strawberry type with Connection, but it fails when using an union \r\n\r\n\r\n```python\r\nfrom typing import Generic, TypeVar, Union\r\nimport strawberry\r\n\r\nT = TypeVar(\"T\")\r\n\r\n\r\[email protected]\r\nclass Edge(Generic[T]):\r\n cursor: str\r\n node: T\r\n\r\n\r\[email protected]\r\nclass Connection(Generic[T]):\r\n edges: list[\"Edge[T]\"]\r\n\r\n\r\[email protected]\r\nclass Entity1:\r\n id: int\r\n\r\n\r\[email protected]\r\nclass Entity2:\r\n id: int\r\n\r\n\r\nEntities = Union[Entity1, Entity2]\r\n\r\n\r\[email protected]\r\nclass Query:\r\n @strawberry.field\r\n def entities(self) -> Connection[Entities]:\r\n return Connection(\r\n edges=[\r\n Edge(\r\n cursor=\"1\",\r\n node=Entity1(id=1),\r\n ),\r\n Edge(\r\n cursor=\"2\",\r\n node=Entity2(id=2),\r\n ),\r\n ],\r\n )\r\n\r\n\r\nschema = strawberry.Schema(Query)\r\nprint(schema.execute_sync(\"{ entities { __typename } }\"))\r\n\r\n```\r\nerror\r\n```\r\nraise cls(f\"{self.name} fields cannot be resolved. 
{error}\") from error\r\nTypeError: Query fields cannot be resolved.\r\n```\r\n\r\n\r\n## System Information\r\n\r\n - Operating system:\r\n - Strawberry version (if applicable): 0.208.1\r\n\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/3098\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/3098/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/3098/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, List, Optional, Union, cast\nfrom typing_extensions import Protocol\n\nfrom strawberry.custom_scalar import ScalarDefinition\nfrom strawberry.directive import StrawberryDirective\nfrom strawberry.enum import EnumDefinition, EnumValue\nfrom strawberry.lazy_type import LazyType\nfrom strawberry.schema_directive import StrawberrySchemaDirective\nfrom strawberry.type import (\n StrawberryList,\n StrawberryOptional,\n has_object_definition,\n)\nfrom strawberry.types.types import StrawberryObjectDefinition\nfrom strawberry.union import StrawberryUnion\nfrom strawberry.utils.str_converters import capitalize_first, to_camel_case\nfrom strawberry.utils.typing import eval_type\n\nif TYPE_CHECKING:\n from strawberry.arguments import StrawberryArgument\n from strawberry.field import StrawberryField\n from strawberry.type import StrawberryType\n\n\nclass HasGraphQLName(Protocol):\n python_name: str\n graphql_name: Optional[str]\n\n\nclass NameConverter:\n def __init__(self, auto_camel_case: bool = True) -> None:\n self.auto_camel_case = auto_camel_case\n\n def apply_naming_config(self, name: str) -> str:\n if self.auto_camel_case:\n name = to_camel_case(name)\n\n return name\n\n def from_type(\n self,\n type_: Union[StrawberryType, StrawberryDirective],\n ) -> str:\n if isinstance(type_, (StrawberryDirective, StrawberrySchemaDirective)):\n return self.from_directive(type_)\n if isinstance(type_, EnumDefinition): # TODO: Replace with StrawberryEnum\n return self.from_enum(type_)\n elif isinstance(type_, StrawberryObjectDefinition):\n if type_.is_input:\n return self.from_input_object(type_)\n if type_.is_interface:\n return self.from_interface(type_)\n return self.from_object(type_)\n elif isinstance(type_, StrawberryUnion):\n return self.from_union(type_)\n elif isinstance(type_, ScalarDefinition): # TODO: Replace with StrawberryScalar\n return self.from_scalar(type_)\n else:\n return str(type_)\n\n def from_argument(self, argument: StrawberryArgument) -> str:\n return self.get_graphql_name(argument)\n\n def from_object(self, object_type: StrawberryObjectDefinition) -> str:\n # if concrete_of is not generic, than this is a subclass of an already\n # especialized type.\n if object_type.concrete_of and object_type.concrete_of.is_generic:\n return self.from_generic(\n object_type, list(object_type.type_var_map.values())\n )\n\n return object_type.name\n\n def from_input_object(self, input_type: StrawberryObjectDefinition) -> str:\n return self.from_object(input_type)\n\n def from_interface(self, interface: 
StrawberryObjectDefinition) -> str:\n return self.from_object(interface)\n\n def from_enum(self, enum: EnumDefinition) -> str:\n return enum.name\n\n def from_enum_value(self, enum: EnumDefinition, enum_value: EnumValue) -> str:\n return enum_value.name\n\n def from_directive(\n self, directive: Union[StrawberryDirective, StrawberrySchemaDirective]\n ) -> str:\n name = self.get_graphql_name(directive)\n\n if self.auto_camel_case:\n # we don't want the first letter to be uppercase for directives\n return name[0].lower() + name[1:]\n\n return name\n\n def from_scalar(self, scalar: ScalarDefinition) -> str:\n return scalar.name\n\n def from_field(self, field: StrawberryField) -> str:\n return self.get_graphql_name(field)\n\n def from_union(self, union: StrawberryUnion) -> str:\n if union.graphql_name is not None:\n return union.graphql_name\n\n name = \"\"\n\n for type_ in union.types:\n if isinstance(type_, LazyType):\n type_ = cast(\"StrawberryType\", type_.resolve_type()) # noqa: PLW2901\n\n if has_object_definition(type_):\n type_name = self.from_type(type_.__strawberry_definition__)\n else:\n # This should only be hit when generating names for type-related\n # exceptions\n type_name = self.from_type(type_)\n\n name += type_name\n\n return name\n\n def from_generic(\n self,\n generic_type: StrawberryObjectDefinition,\n types: List[Union[StrawberryType, type]],\n ) -> str:\n generic_type_name = generic_type.name\n\n names: List[str] = []\n\n for type_ in types:\n name = self.get_from_type(type_)\n names.append(name)\n\n return \"\".join(names) + generic_type_name\n\n def get_from_type(self, type_: Union[StrawberryType, type]) -> str:\n type_ = eval_type(type_)\n\n if isinstance(type_, LazyType):\n name = type_.type_name\n elif isinstance(type_, EnumDefinition):\n name = type_.name\n elif isinstance(type_, StrawberryUnion):\n # TODO: test Generics with unnamed unions\n assert type_.graphql_name\n\n name = type_.graphql_name\n elif isinstance(type_, StrawberryList):\n name = self.get_from_type(type_.of_type) + \"List\"\n elif isinstance(type_, StrawberryOptional):\n name = self.get_from_type(type_.of_type) + \"Optional\"\n elif hasattr(type_, \"_scalar_definition\"):\n strawberry_type = type_._scalar_definition\n\n name = strawberry_type.name\n elif has_object_definition(type_):\n strawberry_type = type_.__strawberry_definition__\n\n if (\n strawberry_type.is_generic\n and not strawberry_type.is_specialized_generic\n ):\n types = type_.__args__ # type: ignore\n name = self.from_generic(strawberry_type, types)\n elif (\n strawberry_type.concrete_of\n and not strawberry_type.is_specialized_generic\n ):\n types = list(strawberry_type.type_var_map.values())\n name = self.from_generic(strawberry_type, types)\n else:\n name = strawberry_type.name\n else:\n name = type_.__name__\n\n return capitalize_first(name)\n\n def get_graphql_name(self, obj: HasGraphQLName) -> str:\n if obj.graphql_name is not None:\n return obj.graphql_name\n\n assert obj.python_name\n\n return self.apply_naming_config(obj.python_name)\n", "path": "strawberry/schema/name_converter.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, List, Optional, Union, cast\nfrom typing_extensions import Protocol\n\nfrom strawberry.custom_scalar import ScalarDefinition\nfrom strawberry.directive import StrawberryDirective\nfrom strawberry.enum import EnumDefinition, EnumValue\nfrom strawberry.lazy_type import LazyType\nfrom strawberry.schema_directive import 
StrawberrySchemaDirective\nfrom strawberry.type import (\n StrawberryList,\n StrawberryOptional,\n has_object_definition,\n)\nfrom strawberry.types.types import StrawberryObjectDefinition\nfrom strawberry.union import StrawberryUnion\nfrom strawberry.utils.str_converters import capitalize_first, to_camel_case\nfrom strawberry.utils.typing import eval_type\n\nif TYPE_CHECKING:\n from strawberry.arguments import StrawberryArgument\n from strawberry.field import StrawberryField\n from strawberry.type import StrawberryType\n\n\nclass HasGraphQLName(Protocol):\n python_name: str\n graphql_name: Optional[str]\n\n\nclass NameConverter:\n def __init__(self, auto_camel_case: bool = True) -> None:\n self.auto_camel_case = auto_camel_case\n\n def apply_naming_config(self, name: str) -> str:\n if self.auto_camel_case:\n name = to_camel_case(name)\n\n return name\n\n def from_type(\n self,\n type_: Union[StrawberryType, StrawberryDirective],\n ) -> str:\n if isinstance(type_, (StrawberryDirective, StrawberrySchemaDirective)):\n return self.from_directive(type_)\n if isinstance(type_, EnumDefinition): # TODO: Replace with StrawberryEnum\n return self.from_enum(type_)\n elif isinstance(type_, StrawberryObjectDefinition):\n if type_.is_input:\n return self.from_input_object(type_)\n if type_.is_interface:\n return self.from_interface(type_)\n return self.from_object(type_)\n elif isinstance(type_, StrawberryUnion):\n return self.from_union(type_)\n elif isinstance(type_, ScalarDefinition): # TODO: Replace with StrawberryScalar\n return self.from_scalar(type_)\n else:\n return str(type_)\n\n def from_argument(self, argument: StrawberryArgument) -> str:\n return self.get_graphql_name(argument)\n\n def from_object(self, object_type: StrawberryObjectDefinition) -> str:\n # if concrete_of is not generic, than this is a subclass of an already\n # especialized type.\n if object_type.concrete_of and object_type.concrete_of.is_generic:\n return self.from_generic(\n object_type, list(object_type.type_var_map.values())\n )\n\n return object_type.name\n\n def from_input_object(self, input_type: StrawberryObjectDefinition) -> str:\n return self.from_object(input_type)\n\n def from_interface(self, interface: StrawberryObjectDefinition) -> str:\n return self.from_object(interface)\n\n def from_enum(self, enum: EnumDefinition) -> str:\n return enum.name\n\n def from_enum_value(self, enum: EnumDefinition, enum_value: EnumValue) -> str:\n return enum_value.name\n\n def from_directive(\n self, directive: Union[StrawberryDirective, StrawberrySchemaDirective]\n ) -> str:\n name = self.get_graphql_name(directive)\n\n if self.auto_camel_case:\n # we don't want the first letter to be uppercase for directives\n return name[0].lower() + name[1:]\n\n return name\n\n def from_scalar(self, scalar: ScalarDefinition) -> str:\n return scalar.name\n\n def from_field(self, field: StrawberryField) -> str:\n return self.get_graphql_name(field)\n\n def from_union(self, union: StrawberryUnion) -> str:\n if union.graphql_name is not None:\n return union.graphql_name\n\n name = \"\"\n\n for type_ in union.types:\n if isinstance(type_, LazyType):\n type_ = cast(\"StrawberryType\", type_.resolve_type()) # noqa: PLW2901\n\n if has_object_definition(type_):\n type_name = self.from_type(type_.__strawberry_definition__)\n else:\n # This should only be hit when generating names for type-related\n # exceptions\n type_name = self.from_type(type_)\n\n name += type_name\n\n return name\n\n def from_generic(\n self,\n generic_type: 
StrawberryObjectDefinition,\n types: List[Union[StrawberryType, type]],\n ) -> str:\n generic_type_name = generic_type.name\n\n names: List[str] = []\n\n for type_ in types:\n name = self.get_from_type(type_)\n names.append(name)\n\n return \"\".join(names) + generic_type_name\n\n def get_from_type(self, type_: Union[StrawberryType, type]) -> str:\n type_ = eval_type(type_)\n\n if isinstance(type_, LazyType):\n name = type_.type_name\n elif isinstance(type_, EnumDefinition):\n name = type_.name\n elif isinstance(type_, StrawberryUnion):\n name = type_.graphql_name if type_.graphql_name else self.from_union(type_)\n elif isinstance(type_, StrawberryList):\n name = self.get_from_type(type_.of_type) + \"List\"\n elif isinstance(type_, StrawberryOptional):\n name = self.get_from_type(type_.of_type) + \"Optional\"\n elif hasattr(type_, \"_scalar_definition\"):\n strawberry_type = type_._scalar_definition\n\n name = strawberry_type.name\n elif has_object_definition(type_):\n strawberry_type = type_.__strawberry_definition__\n\n if (\n strawberry_type.is_generic\n and not strawberry_type.is_specialized_generic\n ):\n types = type_.__args__ # type: ignore\n name = self.from_generic(strawberry_type, types)\n elif (\n strawberry_type.concrete_of\n and not strawberry_type.is_specialized_generic\n ):\n types = list(strawberry_type.type_var_map.values())\n name = self.from_generic(strawberry_type, types)\n else:\n name = strawberry_type.name\n else:\n name = type_.__name__\n\n return capitalize_first(name)\n\n def get_graphql_name(self, obj: HasGraphQLName) -> str:\n if obj.graphql_name is not None:\n return obj.graphql_name\n\n assert obj.python_name\n\n return self.apply_naming_config(obj.python_name)\n", "path": "strawberry/schema/name_converter.py"}]}
2,621
163
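The strawberry golden_diff above swaps a hard `assert type_.graphql_name` for a fallback to `self.from_union(type_)`, so a union declared without an explicit GraphQL name gets a derived name (the concatenation of its member type names) instead of failing schema construction with "Query fields cannot be resolved." A minimal sketch of that fallback pattern — the classes here are simplified stand-ins, not the real strawberry internals:

```
# Name fallback for unnamed unions: prefer the explicit graphql_name,
# otherwise derive one from the member types. Illustrative only; not
# strawberry's actual NameConverter.

class FakeUnion:
    def __init__(self, graphql_name, types):
        self.graphql_name = graphql_name  # None for annotation-only unions
        self.types = types                # member classes

def union_name(union):
    if union.graphql_name is not None:
        return union.graphql_name
    # Mirrors NameConverter.from_union: join the member type names.
    return "".join(t.__name__ for t in union.types)

class Example: ...
class Error: ...

assert union_name(FakeUnion(None, [Example, Error])) == "ExampleError"
assert union_name(FakeUnion("Result", [Example, Error])) == "Result"
```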
gh_patches_debug_16313
rasdani/github-patches
git_diff
searxng__searxng-1380
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- move donation page to docs.searxng.org and link to it from instances <!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG --> **Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG** ``` 3a75d3c1ccbf979a483b4e5b209a59db9876ba33; searxng master ``` **How did you install SearXNG?** with docker **What happened?** There is a donation page on the instance. **Expected behavior** The donation page should be in the official documentation and linked to by instances instead of having it on the instance. Also: There should be more information about who is receiving the money and what it is used for. **Screenshots & Logs** <!-- If applicable, add screenshots, logs to help explain your problem. --> **Additional context** Suggestion in matrix, since donation page on instances could mean legal trouble. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `searx/infopage/__init__.py` Content: ``` 1 # SPDX-License-Identifier: AGPL-3.0-or-later 2 # lint: pylint 3 # pyright: basic 4 """Render SearXNG instance documentation. 5 6 Usage in a Flask app route: 7 8 .. code:: python 9 10 from searx import infopage 11 12 _INFO_PAGES = infopage.InfoPageSet(infopage.MistletoePage) 13 14 @app.route('/info/<pagename>', methods=['GET']) 15 def info(pagename): 16 17 locale = request.preferences.get_value('locale') 18 page = _INFO_PAGES.get_page(pagename, locale) 19 20 """ 21 22 __all__ = ['InfoPage', 'InfoPageSet'] 23 24 import os 25 import os.path 26 import logging 27 import typing 28 29 import urllib.parse 30 import jinja2 31 from flask.helpers import url_for 32 from markdown_it import MarkdownIt 33 34 from .. import get_setting 35 from ..compat import cached_property 36 from ..version import GIT_URL 37 from ..locales import LOCALE_NAMES 38 39 40 logger = logging.getLogger('searx.infopage') 41 _INFO_FOLDER = os.path.abspath(os.path.dirname(__file__)) 42 43 44 class InfoPage: 45 """A page of the :py:obj:`online documentation <InfoPageSet>`.""" 46 47 def __init__(self, fname): 48 self.fname = fname 49 50 @cached_property 51 def raw_content(self): 52 """Raw content of the page (without any jinja rendering)""" 53 with open(self.fname, 'r', encoding='utf-8') as f: 54 return f.read() 55 56 @cached_property 57 def content(self): 58 """Content of the page (rendered in a Jinja conntext)""" 59 ctx = self.get_ctx() 60 template = jinja2.Environment().from_string(self.raw_content) 61 return template.render(**ctx) 62 63 @cached_property 64 def title(self): 65 """Title of the content (without any markup)""" 66 t = "" 67 for l in self.raw_content.split('\n'): 68 if l.startswith('# '): 69 t = l.strip('# ') 70 return t 71 72 @cached_property 73 def html(self): 74 """Render Markdown (CommonMark_) to HTML by using markdown-it-py_. 75 76 .. _CommonMark: https://commonmark.org/ 77 .. 
_markdown-it-py: https://github.com/executablebooks/markdown-it-py 78 79 """ 80 return ( 81 MarkdownIt("commonmark", {"typographer": True}).enable(["replacements", "smartquotes"]).render(self.content) 82 ) 83 84 def get_ctx(self): 85 """Jinja context to render :py:obj:`InfoPage.content`""" 86 87 def _md_link(name, url): 88 url = url_for(url, _external=True) 89 return "[%s](%s)" % (name, url) 90 91 def _md_search(query): 92 url = '%s?q=%s' % (url_for('search', _external=True), urllib.parse.quote(query)) 93 return '[%s](%s)' % (query, url) 94 95 ctx = {} 96 ctx['GIT_URL'] = GIT_URL 97 ctx['get_setting'] = get_setting 98 ctx['link'] = _md_link 99 ctx['search'] = _md_search 100 101 return ctx 102 103 def __repr__(self): 104 return f'<{self.__class__.__name__} fname={self.fname!r}>' 105 106 107 class InfoPageSet: # pylint: disable=too-few-public-methods 108 """Cached rendering of the online documentation a SearXNG instance has. 109 110 :param page_class: render online documentation by :py:obj:`InfoPage` parser. 111 :type page_class: :py:obj:`InfoPage` 112 113 :param info_folder: information directory 114 :type info_folder: str 115 """ 116 117 def __init__( 118 self, page_class: typing.Optional[typing.Type[InfoPage]] = None, info_folder: typing.Optional[str] = None 119 ): 120 self.page_class = page_class or InfoPage 121 self.folder: str = info_folder or _INFO_FOLDER 122 """location of the Markdwon files""" 123 124 self.CACHE: typing.Dict[tuple, typing.Optional[InfoPage]] = {} 125 126 self.locale_default: str = 'en' 127 """default language""" 128 129 self.locales: typing.List[str] = [ 130 locale.replace('_', '-') for locale in os.listdir(_INFO_FOLDER) if locale.replace('_', '-') in LOCALE_NAMES 131 ] 132 """list of supported languages (aka locales)""" 133 134 self.toc: typing.List[str] = [ 135 'search-syntax', 136 'about', 137 'donate', 138 ] 139 """list of articles in the online documentation""" 140 141 def get_page(self, pagename: str, locale: typing.Optional[str] = None): 142 """Return ``pagename`` instance of :py:obj:`InfoPage` 143 144 :param pagename: name of the page, a value from :py:obj:`InfoPageSet.toc` 145 :type pagename: str 146 147 :param locale: language of the page, e.g. 
``en``, ``zh_Hans_CN`` 148 (default: :py:obj:`InfoPageSet.i18n_origin`) 149 :type locale: str 150 151 """ 152 locale = locale or self.locale_default 153 154 if pagename not in self.toc: 155 return None 156 if locale not in self.locales: 157 return None 158 159 cache_key = (pagename, locale) 160 page = self.CACHE.get(cache_key) 161 162 if page is not None: 163 return page 164 165 # not yet instantiated 166 167 fname = os.path.join(self.folder, locale.replace('-', '_'), pagename) + '.md' 168 if not os.path.exists(fname): 169 logger.info('file %s does not exists', fname) 170 self.CACHE[cache_key] = None 171 return None 172 173 page = self.page_class(fname) 174 self.CACHE[cache_key] = page 175 return page 176 177 def iter_pages(self, locale: typing.Optional[str] = None, fallback_to_default=False): 178 """Iterate over all pages of the TOC""" 179 locale = locale or self.locale_default 180 for page_name in self.toc: 181 page_locale = locale 182 page = self.get_page(page_name, locale) 183 if fallback_to_default and page is None: 184 page_locale = self.locale_default 185 page = self.get_page(page_name, self.locale_default) 186 yield page_name, page_locale, page 187 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/searx/infopage/__init__.py b/searx/infopage/__init__.py --- a/searx/infopage/__init__.py +++ b/searx/infopage/__init__.py @@ -157,10 +157,9 @@ return None cache_key = (pagename, locale) - page = self.CACHE.get(cache_key) - if page is not None: - return page + if cache_key in self.CACHE: + return self.CACHE[cache_key] # not yet instantiated @@ -183,4 +182,6 @@ if fallback_to_default and page is None: page_locale = self.locale_default page = self.get_page(page_name, self.locale_default) - yield page_name, page_locale, page + if page is not None: + # page is None if the page was deleted by the administrator + yield page_name, page_locale, page
{"golden_diff": "diff --git a/searx/infopage/__init__.py b/searx/infopage/__init__.py\n--- a/searx/infopage/__init__.py\n+++ b/searx/infopage/__init__.py\n@@ -157,10 +157,9 @@\n return None\n \n cache_key = (pagename, locale)\n- page = self.CACHE.get(cache_key)\n \n- if page is not None:\n- return page\n+ if cache_key in self.CACHE:\n+ return self.CACHE[cache_key]\n \n # not yet instantiated\n \n@@ -183,4 +182,6 @@\n if fallback_to_default and page is None:\n page_locale = self.locale_default\n page = self.get_page(page_name, self.locale_default)\n- yield page_name, page_locale, page\n+ if page is not None:\n+ # page is None if the page was deleted by the administrator\n+ yield page_name, page_locale, page\n", "issue": "move donation page to docs.searxng.org and link to it from instances\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n```\r\n3a75d3c1ccbf979a483b4e5b209a59db9876ba33; searxng master\r\n``` \r\n\r\n**How did you install SearXNG?**\r\n\r\nwith docker\r\n\r\n**What happened?**\r\nThere is a donation page on the instance.\r\n\r\n**Expected behavior**\r\nThe donation page should be in the official documentation and linked to by instances instead of having it on the instance. Also: There should be more information about who is receiving the money and what it is used for.\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n**Additional context**\r\nSuggestion in matrix, since donation page on instances could mean legal trouble.\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n# pyright: basic\n\"\"\"Render SearXNG instance documentation.\n\nUsage in a Flask app route:\n\n.. code:: python\n\n from searx import infopage\n\n _INFO_PAGES = infopage.InfoPageSet(infopage.MistletoePage)\n\n @app.route('/info/<pagename>', methods=['GET'])\n def info(pagename):\n\n locale = request.preferences.get_value('locale')\n page = _INFO_PAGES.get_page(pagename, locale)\n\n\"\"\"\n\n__all__ = ['InfoPage', 'InfoPageSet']\n\nimport os\nimport os.path\nimport logging\nimport typing\n\nimport urllib.parse\nimport jinja2\nfrom flask.helpers import url_for\nfrom markdown_it import MarkdownIt\n\nfrom .. import get_setting\nfrom ..compat import cached_property\nfrom ..version import GIT_URL\nfrom ..locales import LOCALE_NAMES\n\n\nlogger = logging.getLogger('searx.infopage')\n_INFO_FOLDER = os.path.abspath(os.path.dirname(__file__))\n\n\nclass InfoPage:\n \"\"\"A page of the :py:obj:`online documentation <InfoPageSet>`.\"\"\"\n\n def __init__(self, fname):\n self.fname = fname\n\n @cached_property\n def raw_content(self):\n \"\"\"Raw content of the page (without any jinja rendering)\"\"\"\n with open(self.fname, 'r', encoding='utf-8') as f:\n return f.read()\n\n @cached_property\n def content(self):\n \"\"\"Content of the page (rendered in a Jinja conntext)\"\"\"\n ctx = self.get_ctx()\n template = jinja2.Environment().from_string(self.raw_content)\n return template.render(**ctx)\n\n @cached_property\n def title(self):\n \"\"\"Title of the content (without any markup)\"\"\"\n t = \"\"\n for l in self.raw_content.split('\\n'):\n if l.startswith('# '):\n t = l.strip('# ')\n return t\n\n @cached_property\n def html(self):\n \"\"\"Render Markdown (CommonMark_) to HTML by using markdown-it-py_.\n\n .. _CommonMark: https://commonmark.org/\n .. 
_markdown-it-py: https://github.com/executablebooks/markdown-it-py\n\n \"\"\"\n return (\n MarkdownIt(\"commonmark\", {\"typographer\": True}).enable([\"replacements\", \"smartquotes\"]).render(self.content)\n )\n\n def get_ctx(self):\n \"\"\"Jinja context to render :py:obj:`InfoPage.content`\"\"\"\n\n def _md_link(name, url):\n url = url_for(url, _external=True)\n return \"[%s](%s)\" % (name, url)\n\n def _md_search(query):\n url = '%s?q=%s' % (url_for('search', _external=True), urllib.parse.quote(query))\n return '[%s](%s)' % (query, url)\n\n ctx = {}\n ctx['GIT_URL'] = GIT_URL\n ctx['get_setting'] = get_setting\n ctx['link'] = _md_link\n ctx['search'] = _md_search\n\n return ctx\n\n def __repr__(self):\n return f'<{self.__class__.__name__} fname={self.fname!r}>'\n\n\nclass InfoPageSet: # pylint: disable=too-few-public-methods\n \"\"\"Cached rendering of the online documentation a SearXNG instance has.\n\n :param page_class: render online documentation by :py:obj:`InfoPage` parser.\n :type page_class: :py:obj:`InfoPage`\n\n :param info_folder: information directory\n :type info_folder: str\n \"\"\"\n\n def __init__(\n self, page_class: typing.Optional[typing.Type[InfoPage]] = None, info_folder: typing.Optional[str] = None\n ):\n self.page_class = page_class or InfoPage\n self.folder: str = info_folder or _INFO_FOLDER\n \"\"\"location of the Markdwon files\"\"\"\n\n self.CACHE: typing.Dict[tuple, typing.Optional[InfoPage]] = {}\n\n self.locale_default: str = 'en'\n \"\"\"default language\"\"\"\n\n self.locales: typing.List[str] = [\n locale.replace('_', '-') for locale in os.listdir(_INFO_FOLDER) if locale.replace('_', '-') in LOCALE_NAMES\n ]\n \"\"\"list of supported languages (aka locales)\"\"\"\n\n self.toc: typing.List[str] = [\n 'search-syntax',\n 'about',\n 'donate',\n ]\n \"\"\"list of articles in the online documentation\"\"\"\n\n def get_page(self, pagename: str, locale: typing.Optional[str] = None):\n \"\"\"Return ``pagename`` instance of :py:obj:`InfoPage`\n\n :param pagename: name of the page, a value from :py:obj:`InfoPageSet.toc`\n :type pagename: str\n\n :param locale: language of the page, e.g. ``en``, ``zh_Hans_CN``\n (default: :py:obj:`InfoPageSet.i18n_origin`)\n :type locale: str\n\n \"\"\"\n locale = locale or self.locale_default\n\n if pagename not in self.toc:\n return None\n if locale not in self.locales:\n return None\n\n cache_key = (pagename, locale)\n page = self.CACHE.get(cache_key)\n\n if page is not None:\n return page\n\n # not yet instantiated\n\n fname = os.path.join(self.folder, locale.replace('-', '_'), pagename) + '.md'\n if not os.path.exists(fname):\n logger.info('file %s does not exists', fname)\n self.CACHE[cache_key] = None\n return None\n\n page = self.page_class(fname)\n self.CACHE[cache_key] = page\n return page\n\n def iter_pages(self, locale: typing.Optional[str] = None, fallback_to_default=False):\n \"\"\"Iterate over all pages of the TOC\"\"\"\n locale = locale or self.locale_default\n for page_name in self.toc:\n page_locale = locale\n page = self.get_page(page_name, locale)\n if fallback_to_default and page is None:\n page_locale = self.locale_default\n page = self.get_page(page_name, self.locale_default)\n yield page_name, page_locale, page\n", "path": "searx/infopage/__init__.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n# pyright: basic\n\"\"\"Render SearXNG instance documentation.\n\nUsage in a Flask app route:\n\n.. 
code:: python\n\n from searx import infopage\n\n _INFO_PAGES = infopage.InfoPageSet(infopage.MistletoePage)\n\n @app.route('/info/<pagename>', methods=['GET'])\n def info(pagename):\n\n locale = request.preferences.get_value('locale')\n page = _INFO_PAGES.get_page(pagename, locale)\n\n\"\"\"\n\n__all__ = ['InfoPage', 'InfoPageSet']\n\nimport os\nimport os.path\nimport logging\nimport typing\n\nimport urllib.parse\nimport jinja2\nfrom flask.helpers import url_for\nfrom markdown_it import MarkdownIt\n\nfrom .. import get_setting\nfrom ..compat import cached_property\nfrom ..version import GIT_URL\nfrom ..locales import LOCALE_NAMES\n\n\nlogger = logging.getLogger('searx.infopage')\n_INFO_FOLDER = os.path.abspath(os.path.dirname(__file__))\n\n\nclass InfoPage:\n \"\"\"A page of the :py:obj:`online documentation <InfoPageSet>`.\"\"\"\n\n def __init__(self, fname):\n self.fname = fname\n\n @cached_property\n def raw_content(self):\n \"\"\"Raw content of the page (without any jinja rendering)\"\"\"\n with open(self.fname, 'r', encoding='utf-8') as f:\n return f.read()\n\n @cached_property\n def content(self):\n \"\"\"Content of the page (rendered in a Jinja conntext)\"\"\"\n ctx = self.get_ctx()\n template = jinja2.Environment().from_string(self.raw_content)\n return template.render(**ctx)\n\n @cached_property\n def title(self):\n \"\"\"Title of the content (without any markup)\"\"\"\n t = \"\"\n for l in self.raw_content.split('\\n'):\n if l.startswith('# '):\n t = l.strip('# ')\n return t\n\n @cached_property\n def html(self):\n \"\"\"Render Markdown (CommonMark_) to HTML by using markdown-it-py_.\n\n .. _CommonMark: https://commonmark.org/\n .. _markdown-it-py: https://github.com/executablebooks/markdown-it-py\n\n \"\"\"\n return (\n MarkdownIt(\"commonmark\", {\"typographer\": True}).enable([\"replacements\", \"smartquotes\"]).render(self.content)\n )\n\n def get_ctx(self):\n \"\"\"Jinja context to render :py:obj:`InfoPage.content`\"\"\"\n\n def _md_link(name, url):\n url = url_for(url, _external=True)\n return \"[%s](%s)\" % (name, url)\n\n def _md_search(query):\n url = '%s?q=%s' % (url_for('search', _external=True), urllib.parse.quote(query))\n return '[%s](%s)' % (query, url)\n\n ctx = {}\n ctx['GIT_URL'] = GIT_URL\n ctx['get_setting'] = get_setting\n ctx['link'] = _md_link\n ctx['search'] = _md_search\n\n return ctx\n\n def __repr__(self):\n return f'<{self.__class__.__name__} fname={self.fname!r}>'\n\n\nclass InfoPageSet: # pylint: disable=too-few-public-methods\n \"\"\"Cached rendering of the online documentation a SearXNG instance has.\n\n :param page_class: render online documentation by :py:obj:`InfoPage` parser.\n :type page_class: :py:obj:`InfoPage`\n\n :param info_folder: information directory\n :type info_folder: str\n \"\"\"\n\n def __init__(\n self, page_class: typing.Optional[typing.Type[InfoPage]] = None, info_folder: typing.Optional[str] = None\n ):\n self.page_class = page_class or InfoPage\n self.folder: str = info_folder or _INFO_FOLDER\n \"\"\"location of the Markdwon files\"\"\"\n\n self.CACHE: typing.Dict[tuple, typing.Optional[InfoPage]] = {}\n\n self.locale_default: str = 'en'\n \"\"\"default language\"\"\"\n\n self.locales: typing.List[str] = [\n locale.replace('_', '-') for locale in os.listdir(_INFO_FOLDER) if locale.replace('_', '-') in LOCALE_NAMES\n ]\n \"\"\"list of supported languages (aka locales)\"\"\"\n\n self.toc: typing.List[str] = [\n 'search-syntax',\n 'about',\n 'donate',\n ]\n \"\"\"list of articles in the online documentation\"\"\"\n\n def 
get_page(self, pagename: str, locale: typing.Optional[str] = None):\n \"\"\"Return ``pagename`` instance of :py:obj:`InfoPage`\n\n :param pagename: name of the page, a value from :py:obj:`InfoPageSet.toc`\n :type pagename: str\n\n :param locale: language of the page, e.g. ``en``, ``zh_Hans_CN``\n (default: :py:obj:`InfoPageSet.i18n_origin`)\n :type locale: str\n\n \"\"\"\n locale = locale or self.locale_default\n\n if pagename not in self.toc:\n return None\n if locale not in self.locales:\n return None\n\n cache_key = (pagename, locale)\n\n if cache_key in self.CACHE:\n return self.CACHE[cache_key]\n\n # not yet instantiated\n\n fname = os.path.join(self.folder, locale.replace('-', '_'), pagename) + '.md'\n if not os.path.exists(fname):\n logger.info('file %s does not exists', fname)\n self.CACHE[cache_key] = None\n return None\n\n page = self.page_class(fname)\n self.CACHE[cache_key] = page\n return page\n\n def iter_pages(self, locale: typing.Optional[str] = None, fallback_to_default=False):\n \"\"\"Iterate over all pages of the TOC\"\"\"\n locale = locale or self.locale_default\n for page_name in self.toc:\n page_locale = locale\n page = self.get_page(page_name, locale)\n if fallback_to_default and page is None:\n page_locale = self.locale_default\n page = self.get_page(page_name, self.locale_default)\n if page is not None:\n # page is None if the page was deleted by the administrator\n yield page_name, page_locale, page\n", "path": "searx/infopage/__init__.py"}]}
2,350
227
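The searxng golden_diff fixes a negative-caching pitfall: `self.CACHE.get(cache_key)` returns None both for "never looked up" and for "looked up, page deleted by the administrator", so a known-missing page was re-checked on disk for every request, and `iter_pages` then yielded the None entry. A reduced sketch of the corrected membership-test pattern — function and variable names are illustrative, not the searx API:

```
import os.path

CACHE = {}  # page name -> content, or None for a known-missing page

def get_page(folder, name):
    # Membership test instead of .get(): a cached None is still a hit,
    # so known-missing pages never touch the filesystem again.
    if name in CACHE:
        return CACHE[name]
    fname = os.path.join(folder, name + ".md")
    if not os.path.exists(fname):
        CACHE[name] = None  # remember the miss
        return None
    with open(fname, encoding="utf-8") as f:
        CACHE[name] = f.read()
    return CACHE[name]
```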
gh_patches_debug_34820
rasdani/github-patches
git_diff
falconry__falcon-57
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- test: Add Unicode chars to logging test --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `falcon/request.py` Content: ``` 1 """Defines the Request class. 2 3 Copyright 2013 by Rackspace Hosting, Inc. 4 5 Licensed under the Apache License, Version 2.0 (the "License"); 6 you may not use this file except in compliance with the License. 7 You may obtain a copy of the License at 8 9 http://www.apache.org/licenses/LICENSE-2.0 10 11 Unless required by applicable law or agreed to in writing, software 12 distributed under the License is distributed on an "AS IS" BASIS, 13 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 See the License for the specific language governing permissions and 15 limitations under the License. 16 17 """ 18 19 import sys 20 from datetime import datetime 21 22 from falcon.request_helpers import * 23 from falcon.exceptions import * 24 import six 25 26 27 class Request(object): 28 """Represents a client's HTTP request""" 29 30 __slots__ = ( 31 'app', 32 'body', 33 '_headers', 34 'method', 35 '_params', 36 'path', 37 'protocol', 38 'query_string', 39 '_wsgierrors' 40 ) 41 42 def __init__(self, env): 43 """Initialize attributes based on a WSGI environment dict 44 45 Note: Request is not meant to be instantiated directory by responders. 46 47 Args: 48 env: A WSGI environment dict passed in from the server. See also 49 the PEP-333 spec. 50 51 """ 52 53 self.app = env['SCRIPT_NAME'] 54 self.body = env['wsgi.input'] 55 self.method = env['REQUEST_METHOD'] 56 self.path = env['PATH_INFO'] or '/' 57 self.protocol = env['wsgi.url_scheme'] 58 self.query_string = query_string = env['QUERY_STRING'] 59 self._params = parse_query_string(query_string) 60 self._headers = parse_headers(env) 61 self._wsgierrors = env['wsgi.errors'] 62 63 def log_error(self, message): 64 """Log an error to wsgi.error 65 66 Prepends timestamp and request info to message, and writes the result 67 out to the WSGI server's error stream (wsgi.error). 68 69 Args: 70 message: A string describing the problem. If a byte-string and 71 running under Python 2, the string is assumed to be encoded 72 as UTF-8. 73 74 """ 75 u = six.text_type 76 log_line = ( 77 u('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR] {1} {2}?{3} => {4}\n'). 78 format(datetime.now(), self.method, self.path, self.query_string, 79 message) 80 ) 81 82 self._wsgierrors.write(log_line) 83 84 def client_accepts_json(self): 85 """Return True if the Accept header indicates JSON support""" 86 87 accept = self.get_header('Accept') 88 if accept is not None: 89 return ('application/json' in accept) or ('*/*' in accept) 90 91 return False 92 93 def get_header(self, name, default=None, required=False): 94 """Return a header value as a string 95 96 Args: 97 name: Header name, case-insensitive (e.g., 'Content-Type') 98 default: Value to return in case the header is not 99 found (default None) 100 required: Set to True to raise HttpBadRequest instead 101 of returning gracefully when the header is not found 102 (default False) 103 104 """ 105 106 # Use try..except to optimize for the header existing in most cases 107 try: 108 # Don't take the time to cache beforehand, using HTTP naming. 109 # This will be faster, assuming that most headers are looked 110 # up only once, and not all headers will be requested. 
111 return self._headers[name.upper().replace('-', '_')] 112 except KeyError: 113 if not required: 114 return default 115 116 raise HTTPBadRequest('Missing header', 117 'The "' + name + '" header is required.') 118 119 def get_param(self, name, default=None, required=False): 120 """Return the value of a query string parameter as a string 121 122 Args: 123 name: Parameter name, case-sensitive (e.g., 'sort') 124 default: Value to return in case the parameter is not found in the 125 query string (default None) 126 required: Set to True to raise HTTPBadRequest instead of returning 127 gracefully when the parameter is not found (default False) 128 129 Returns: 130 The value of the param as a byte string, or the default value if 131 param is not found and is not required. 132 133 Raises 134 HTTPBadRequest: The param was not found in the request, but was 135 required. 136 137 """ 138 139 # PERF: Use if..in since it is a good all-around performer; we don't 140 # know how likely params are to be specified by clients. 141 if name in self._params: 142 return self._params[name] 143 144 if not required: 145 return default 146 147 raise HTTPBadRequest('Missing query parameter', 148 'The "' + name + '" query parameter is required.') 149 150 def get_param_as_int(self, name, default=None, required=False): 151 """Return the value of a query string parameter as an int 152 153 Args: 154 name: Parameter name, case-sensitive (e.g., 'limit') 155 default: Value to return in case the parameter is not found in the 156 query string, or it is not an integer (default None) 157 required: Set to True to raise HTTPBadRequest instead of returning 158 gracefully when the parameter is not found or is not an 159 integer (default False) 160 161 Returns: 162 The value of the param if it is found and can be converted to an 163 integer. Otherwise, returns the default value unless required is 164 True. 165 166 Raises 167 HTTPBadRequest: The param was not found in the request, but was 168 required. 169 170 """ 171 172 # PERF: Use if..in since it is a good all-around performer; we don't 173 # know how likely params are to be specified by clients. 174 if name in self._params: 175 val = self._params[name] 176 try: 177 return int(val) 178 except ValueError: 179 pass 180 181 if not required: 182 return default 183 184 raise HTTPBadRequest('Missing query parameter', 185 'The "' + name + '" query parameter is required.') 186 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/falcon/request.py b/falcon/request.py --- a/falcon/request.py +++ b/falcon/request.py @@ -16,12 +16,15 @@ """ -import sys from datetime import datetime +import six + from falcon.request_helpers import * from falcon.exceptions import * -import six + +DEFAULT_ERROR_LOG_FORMAT = ('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]' + ' {1} {2}?{3} => {4}\n') class Request(object): @@ -50,21 +53,23 @@ """ - self.app = env['SCRIPT_NAME'] + self._wsgierrors = env['wsgi.errors'] self.body = env['wsgi.input'] + + self.protocol = env['wsgi.url_scheme'] + self.app = env['SCRIPT_NAME'] self.method = env['REQUEST_METHOD'] self.path = env['PATH_INFO'] or '/' - self.protocol = env['wsgi.url_scheme'] self.query_string = query_string = env['QUERY_STRING'] + self._params = parse_query_string(query_string) self._headers = parse_headers(env) - self._wsgierrors = env['wsgi.errors'] def log_error(self, message): """Log an error to wsgi.error - Prepends timestamp and request info to message, and writes the result - out to the WSGI server's error stream (wsgi.error). + Prepends timestamp and request info to message, and writes the + result out to the WSGI server's error stream (wsgi.error). Args: message: A string describing the problem. If a byte-string and @@ -72,11 +77,13 @@ as UTF-8. """ - u = six.text_type + if not six.PY3 and isinstance(message, unicode): + message = message.encode('utf-8') + log_line = ( - u('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR] {1} {2}?{3} => {4}\n'). - format(datetime.now(), self.method, self.path, self.query_string, - message) + DEFAULT_ERROR_LOG_FORMAT. + format(datetime.now(), self.method, self.path, + self.query_string, message) ) self._wsgierrors.write(log_line)
{"golden_diff": "diff --git a/falcon/request.py b/falcon/request.py\n--- a/falcon/request.py\n+++ b/falcon/request.py\n@@ -16,12 +16,15 @@\n \n \"\"\"\n \n-import sys\n from datetime import datetime\n \n+import six\n+\n from falcon.request_helpers import *\n from falcon.exceptions import *\n-import six\n+\n+DEFAULT_ERROR_LOG_FORMAT = ('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]'\n+ ' {1} {2}?{3} => {4}\\n')\n \n \n class Request(object):\n@@ -50,21 +53,23 @@\n \n \"\"\"\n \n- self.app = env['SCRIPT_NAME']\n+ self._wsgierrors = env['wsgi.errors']\n self.body = env['wsgi.input']\n+\n+ self.protocol = env['wsgi.url_scheme']\n+ self.app = env['SCRIPT_NAME']\n self.method = env['REQUEST_METHOD']\n self.path = env['PATH_INFO'] or '/'\n- self.protocol = env['wsgi.url_scheme']\n self.query_string = query_string = env['QUERY_STRING']\n+\n self._params = parse_query_string(query_string)\n self._headers = parse_headers(env)\n- self._wsgierrors = env['wsgi.errors']\n \n def log_error(self, message):\n \"\"\"Log an error to wsgi.error\n \n- Prepends timestamp and request info to message, and writes the result\n- out to the WSGI server's error stream (wsgi.error).\n+ Prepends timestamp and request info to message, and writes the\n+ result out to the WSGI server's error stream (wsgi.error).\n \n Args:\n message: A string describing the problem. If a byte-string and\n@@ -72,11 +77,13 @@\n as UTF-8.\n \n \"\"\"\n- u = six.text_type\n+ if not six.PY3 and isinstance(message, unicode):\n+ message = message.encode('utf-8')\n+\n log_line = (\n- u('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR] {1} {2}?{3} => {4}\\n').\n- format(datetime.now(), self.method, self.path, self.query_string,\n- message)\n+ DEFAULT_ERROR_LOG_FORMAT.\n+ format(datetime.now(), self.method, self.path,\n+ self.query_string, message)\n )\n \n self._wsgierrors.write(log_line)\n", "issue": "test: Add Unicode chars to logging test\n\n", "before_files": [{"content": "\"\"\"Defines the Request class.\n\nCopyright 2013 by Rackspace Hosting, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\"\"\"\n\nimport sys\nfrom datetime import datetime\n\nfrom falcon.request_helpers import *\nfrom falcon.exceptions import *\nimport six\n\n\nclass Request(object):\n \"\"\"Represents a client's HTTP request\"\"\"\n\n __slots__ = (\n 'app',\n 'body',\n '_headers',\n 'method',\n '_params',\n 'path',\n 'protocol',\n 'query_string',\n '_wsgierrors'\n )\n\n def __init__(self, env):\n \"\"\"Initialize attributes based on a WSGI environment dict\n\n Note: Request is not meant to be instantiated directory by responders.\n\n Args:\n env: A WSGI environment dict passed in from the server. 
See also\n the PEP-333 spec.\n\n \"\"\"\n\n self.app = env['SCRIPT_NAME']\n self.body = env['wsgi.input']\n self.method = env['REQUEST_METHOD']\n self.path = env['PATH_INFO'] or '/'\n self.protocol = env['wsgi.url_scheme']\n self.query_string = query_string = env['QUERY_STRING']\n self._params = parse_query_string(query_string)\n self._headers = parse_headers(env)\n self._wsgierrors = env['wsgi.errors']\n\n def log_error(self, message):\n \"\"\"Log an error to wsgi.error\n\n Prepends timestamp and request info to message, and writes the result\n out to the WSGI server's error stream (wsgi.error).\n\n Args:\n message: A string describing the problem. If a byte-string and\n running under Python 2, the string is assumed to be encoded\n as UTF-8.\n\n \"\"\"\n u = six.text_type\n log_line = (\n u('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR] {1} {2}?{3} => {4}\\n').\n format(datetime.now(), self.method, self.path, self.query_string,\n message)\n )\n\n self._wsgierrors.write(log_line)\n\n def client_accepts_json(self):\n \"\"\"Return True if the Accept header indicates JSON support\"\"\"\n\n accept = self.get_header('Accept')\n if accept is not None:\n return ('application/json' in accept) or ('*/*' in accept)\n\n return False\n\n def get_header(self, name, default=None, required=False):\n \"\"\"Return a header value as a string\n\n Args:\n name: Header name, case-insensitive (e.g., 'Content-Type')\n default: Value to return in case the header is not\n found (default None)\n required: Set to True to raise HttpBadRequest instead\n of returning gracefully when the header is not found\n (default False)\n\n \"\"\"\n\n # Use try..except to optimize for the header existing in most cases\n try:\n # Don't take the time to cache beforehand, using HTTP naming.\n # This will be faster, assuming that most headers are looked\n # up only once, and not all headers will be requested.\n return self._headers[name.upper().replace('-', '_')]\n except KeyError:\n if not required:\n return default\n\n raise HTTPBadRequest('Missing header',\n 'The \"' + name + '\" header is required.')\n\n def get_param(self, name, default=None, required=False):\n \"\"\"Return the value of a query string parameter as a string\n\n Args:\n name: Parameter name, case-sensitive (e.g., 'sort')\n default: Value to return in case the parameter is not found in the\n query string (default None)\n required: Set to True to raise HTTPBadRequest instead of returning\n gracefully when the parameter is not found (default False)\n\n Returns:\n The value of the param as a byte string, or the default value if\n param is not found and is not required.\n\n Raises\n HTTPBadRequest: The param was not found in the request, but was\n required.\n\n \"\"\"\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in self._params:\n return self._params[name]\n\n if not required:\n return default\n\n raise HTTPBadRequest('Missing query parameter',\n 'The \"' + name + '\" query parameter is required.')\n\n def get_param_as_int(self, name, default=None, required=False):\n \"\"\"Return the value of a query string parameter as an int\n\n Args:\n name: Parameter name, case-sensitive (e.g., 'limit')\n default: Value to return in case the parameter is not found in the\n query string, or it is not an integer (default None)\n required: Set to True to raise HTTPBadRequest instead of returning\n gracefully when the parameter is not found or is not an\n integer (default False)\n\n Returns:\n The 
value of the param if it is found and can be converted to an\n integer. Otherwise, returns the default value unless required is\n True.\n\n Raises\n HTTPBadRequest: The param was not found in the request, but was\n required.\n\n \"\"\"\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in self._params:\n val = self._params[name]\n try:\n return int(val)\n except ValueError:\n pass\n\n if not required:\n return default\n\n raise HTTPBadRequest('Missing query parameter',\n 'The \"' + name + '\" query parameter is required.')\n", "path": "falcon/request.py"}], "after_files": [{"content": "\"\"\"Defines the Request class.\n\nCopyright 2013 by Rackspace Hosting, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\"\"\"\n\nfrom datetime import datetime\n\nimport six\n\nfrom falcon.request_helpers import *\nfrom falcon.exceptions import *\n\nDEFAULT_ERROR_LOG_FORMAT = ('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]'\n ' {1} {2}?{3} => {4}\\n')\n\n\nclass Request(object):\n \"\"\"Represents a client's HTTP request\"\"\"\n\n __slots__ = (\n 'app',\n 'body',\n '_headers',\n 'method',\n '_params',\n 'path',\n 'protocol',\n 'query_string',\n '_wsgierrors'\n )\n\n def __init__(self, env):\n \"\"\"Initialize attributes based on a WSGI environment dict\n\n Note: Request is not meant to be instantiated directory by responders.\n\n Args:\n env: A WSGI environment dict passed in from the server. See also\n the PEP-333 spec.\n\n \"\"\"\n\n self._wsgierrors = env['wsgi.errors']\n self.body = env['wsgi.input']\n\n self.protocol = env['wsgi.url_scheme']\n self.app = env['SCRIPT_NAME']\n self.method = env['REQUEST_METHOD']\n self.path = env['PATH_INFO'] or '/'\n self.query_string = query_string = env['QUERY_STRING']\n\n self._params = parse_query_string(query_string)\n self._headers = parse_headers(env)\n\n def log_error(self, message):\n \"\"\"Log an error to wsgi.error\n\n Prepends timestamp and request info to message, and writes the\n result out to the WSGI server's error stream (wsgi.error).\n\n Args:\n message: A string describing the problem. 
If a byte-string and\n running under Python 2, the string is assumed to be encoded\n as UTF-8.\n\n \"\"\"\n if not six.PY3 and isinstance(message, unicode):\n message = message.encode('utf-8')\n\n log_line = (\n DEFAULT_ERROR_LOG_FORMAT.\n format(datetime.now(), self.method, self.path,\n self.query_string, message)\n )\n\n self._wsgierrors.write(log_line)\n\n def client_accepts_json(self):\n \"\"\"Return True if the Accept header indicates JSON support\"\"\"\n\n accept = self.get_header('Accept')\n if accept is not None:\n return ('application/json' in accept) or ('*/*' in accept)\n\n return False\n\n def get_header(self, name, default=None, required=False):\n \"\"\"Return a header value as a string\n\n Args:\n name: Header name, case-insensitive (e.g., 'Content-Type')\n default: Value to return in case the header is not\n found (default None)\n required: Set to True to raise HttpBadRequest instead\n of returning gracefully when the header is not found\n (default False)\n\n \"\"\"\n\n # Use try..except to optimize for the header existing in most cases\n try:\n # Don't take the time to cache beforehand, using HTTP naming.\n # This will be faster, assuming that most headers are looked\n # up only once, and not all headers will be requested.\n return self._headers[name.upper().replace('-', '_')]\n except KeyError:\n if not required:\n return default\n\n raise HTTPBadRequest('Missing header',\n 'The \"' + name + '\" header is required.')\n\n def get_param(self, name, default=None, required=False):\n \"\"\"Return the value of a query string parameter as a string\n\n Args:\n name: Parameter name, case-sensitive (e.g., 'sort')\n default: Value to return in case the parameter is not found in the\n query string (default None)\n required: Set to True to raise HTTPBadRequest instead of returning\n gracefully when the parameter is not found (default False)\n\n Returns:\n The value of the param as a byte string, or the default value if\n param is not found and is not required.\n\n Raises\n HTTPBadRequest: The param was not found in the request, but was\n required.\n\n \"\"\"\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in self._params:\n return self._params[name]\n\n if not required:\n return default\n\n raise HTTPBadRequest('Missing query parameter',\n 'The \"' + name + '\" query parameter is required.')\n\n def get_param_as_int(self, name, default=None, required=False):\n \"\"\"Return the value of a query string parameter as an int\n\n Args:\n name: Parameter name, case-sensitive (e.g., 'limit')\n default: Value to return in case the parameter is not found in the\n query string, or it is not an integer (default None)\n required: Set to True to raise HTTPBadRequest instead of returning\n gracefully when the parameter is not found or is not an\n integer (default False)\n\n Returns:\n The value of the param if it is found and can be converted to an\n integer. 
Otherwise, returns the default value unless required is\n True.\n\n Raises\n HTTPBadRequest: The param was not found in the request, but was\n required.\n\n \"\"\"\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in self._params:\n val = self._params[name]\n try:\n return int(val)\n except ValueError:\n pass\n\n if not required:\n return default\n\n raise HTTPBadRequest('Missing query parameter',\n 'The \"' + name + '\" query parameter is required.')\n", "path": "falcon/request.py"}]}
2,067
550
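The falcon golden_diff moves the timestamped format string into a module constant and, under Python 2, encodes a unicode message to UTF-8 bytes before interpolating it — exactly the path a Unicode-bearing logging test would exercise, since mixing a byte-string template with non-ASCII unicode (or writing unicode to the byte-oriented wsgi.errors stream) can raise UnicodeEncodeError on Python 2. A hedged sketch of that guard, reusing the `six` dependency the module already imports; `format_log_line` and the shortened format are illustrative helpers, not falcon's public API:

```
import six

LOG_FORMAT = '[FALCON] [ERROR] {0}\n'

def format_log_line(message):
    # Python 2 only: normalize unicode to UTF-8 bytes so byte-string
    # formatting cannot choke on non-ASCII characters. On Python 3 the
    # first operand short-circuits and `unicode` is never evaluated.
    if not six.PY3 and isinstance(message, unicode):  # noqa: F821
        message = message.encode('utf-8')
    return LOG_FORMAT.format(message)

print(format_log_line(u'r\u00e9sum\u00e9'))  # same call runs on 2 and 3
```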
gh_patches_debug_1202
rasdani/github-patches
git_diff
openvinotoolkit__datumaro-125
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- infer result passed from openvino launcher to interpreter is not appropriate. I tried model run using openvino's mobileenet-v2-pytorch model. (using mobilenet-v2-pytorch.xml, mobilenet-v2-pytorch.bin) `datum model run -p proj -m model-0` However, only the name of the layer (ex. 'prob' string) comes into the input parameters(outputs) of the interpreter. Please check the return result of OpenvinoLauncher.infer `results = self._net.infer(inputs)` line 178, openvino_launcher.py Debugging results are normal up to the code above, but it seems that only the name of the result layer is returned when returning and passing to interpreter. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `datumaro/plugins/openvino_launcher.py` Content: ``` 1 2 # Copyright (C) 2019-2020 Intel Corporation 3 # 4 # SPDX-License-Identifier: MIT 5 6 # pylint: disable=exec-used 7 8 import cv2 9 import logging as log 10 import numpy as np 11 import os.path as osp 12 import shutil 13 14 from openvino.inference_engine import IECore 15 16 from datumaro.components.cli_plugin import CliPlugin 17 from datumaro.components.launcher import Launcher 18 19 20 class _OpenvinoImporter(CliPlugin): 21 @staticmethod 22 def _parse_output_layers(s): 23 return [s.strip() for s in s.split(',')] 24 25 @classmethod 26 def build_cmdline_parser(cls, **kwargs): 27 parser = super().build_cmdline_parser(**kwargs) 28 parser.add_argument('-d', '--description', required=True, 29 help="Path to the model description file (.xml)") 30 parser.add_argument('-w', '--weights', required=True, 31 help="Path to the model weights file (.bin)") 32 parser.add_argument('-i', '--interpreter', required=True, 33 help="Path to the network output interprter script (.py)") 34 parser.add_argument('--device', default='CPU', 35 help="Target device (default: %(default)s)") 36 parser.add_argument('--output-layers', type=cls._parse_output_layers, 37 help="A comma-separated list of extra output layers") 38 return parser 39 40 @staticmethod 41 def copy_model(model_dir, model): 42 shutil.copy(model['description'], 43 osp.join(model_dir, osp.basename(model['description']))) 44 model['description'] = osp.basename(model['description']) 45 46 shutil.copy(model['weights'], 47 osp.join(model_dir, osp.basename(model['weights']))) 48 model['weights'] = osp.basename(model['weights']) 49 50 shutil.copy(model['interpreter'], 51 osp.join(model_dir, osp.basename(model['interpreter']))) 52 model['interpreter'] = osp.basename(model['interpreter']) 53 54 55 class InterpreterScript: 56 def __init__(self, path): 57 with open(path, 'r') as f: 58 script = f.read() 59 60 context = {} 61 exec(script, context, context) 62 63 process_outputs = context.get('process_outputs') 64 if not callable(process_outputs): 65 raise Exception("Can't find 'process_outputs' function in " 66 "the interpreter script") 67 self.__dict__['process_outputs'] = process_outputs 68 69 get_categories = context.get('get_categories') 70 assert get_categories is None or callable(get_categories) 71 if get_categories: 72 self.__dict__['get_categories'] = get_categories 73 74 @staticmethod 75 def get_categories(): 76 return None 77 78 @staticmethod 79 def process_outputs(inputs, outputs): 80 raise NotImplementedError( 81 "Function should be implemented in the interpreter script") 82 83 84 class OpenvinoLauncher(Launcher): 85 
cli_plugin = _OpenvinoImporter 86 87 def __init__(self, description, weights, interpreter, 88 device=None, model_dir=None, output_layers=None): 89 if not model_dir: 90 model_dir = '' 91 if not osp.isfile(description): 92 description = osp.join(model_dir, description) 93 if not osp.isfile(description): 94 raise Exception('Failed to open model description file "%s"' % \ 95 (description)) 96 97 if not osp.isfile(weights): 98 weights = osp.join(model_dir, weights) 99 if not osp.isfile(weights): 100 raise Exception('Failed to open model weights file "%s"' % \ 101 (weights)) 102 103 if not osp.isfile(interpreter): 104 interpreter = osp.join(model_dir, interpreter) 105 if not osp.isfile(interpreter): 106 raise Exception('Failed to open model interpreter script file "%s"' % \ 107 (interpreter)) 108 109 self._interpreter = InterpreterScript(interpreter) 110 111 self._device = device or 'CPU' 112 self._output_blobs = output_layers 113 114 self._ie = IECore() 115 self._network = self._ie.read_network(description, weights) 116 self._check_model_support(self._network, self._device) 117 self._load_executable_net() 118 119 def _check_model_support(self, net, device): 120 not_supported_layers = set(name 121 for name, dev in self._ie.query_network(net, device).items() 122 if not dev) 123 if len(not_supported_layers) != 0: 124 log.error("The following layers are not supported " \ 125 "by the plugin for device '%s': %s." % \ 126 (device, ', '.join(not_supported_layers))) 127 raise NotImplementedError( 128 "Some layers are not supported on the device") 129 130 def _load_executable_net(self, batch_size=1): 131 network = self._network 132 133 if self._output_blobs: 134 network.add_outputs(self._output_blobs) 135 136 iter_inputs = iter(network.input_info) 137 self._input_blob = next(iter_inputs) 138 139 # NOTE: handling for the inclusion of `image_info` in OpenVino2019 140 self._require_image_info = 'image_info' in network.input_info 141 if self._input_blob == 'image_info': 142 self._input_blob = next(iter_inputs) 143 144 self._input_layout = network.input_info[self._input_blob].input_data.shape 145 self._input_layout[0] = batch_size 146 network.reshape({self._input_blob: self._input_layout}) 147 self._batch_size = batch_size 148 149 self._net = self._ie.load_network(network=network, num_requests=1, 150 device_name=self._device) 151 152 def infer(self, inputs): 153 assert len(inputs.shape) == 4, \ 154 "Expected an input image in (N, H, W, C) format, got %s" % \ 155 (inputs.shape, ) 156 157 if inputs.shape[3] == 1: # A batch of single-channel images 158 inputs = np.repeat(inputs, 3, axis=3) 159 160 assert inputs.shape[3] == 3, \ 161 "Expected BGR input, got %s" % (inputs.shape, ) 162 163 n, c, h, w = self._input_layout 164 if inputs.shape[1:3] != (h, w): 165 resized_inputs = np.empty((n, h, w, c), dtype=inputs.dtype) 166 for inp, resized_input in zip(inputs, resized_inputs): 167 cv2.resize(inp, (w, h), resized_input) 168 inputs = resized_inputs 169 inputs = inputs.transpose((0, 3, 1, 2)) # NHWC to NCHW 170 inputs = {self._input_blob: inputs} 171 if self._require_image_info: 172 info = np.zeros([1, 3]) 173 info[0, 0] = h 174 info[0, 1] = w 175 info[0, 2] = 1.0 # scale 176 inputs['image_info'] = info 177 178 results = self._net.infer(inputs) 179 if len(results) == 1: 180 return next(iter(results)) 181 else: 182 return results 183 184 def launch(self, inputs): 185 batch_size = len(inputs) 186 if self._batch_size < batch_size: 187 self._load_executable_net(batch_size) 188 189 outputs = self.infer(inputs) 190 results 
= self.process_outputs(inputs, outputs) 191 return results 192 193 def categories(self): 194 return self._interpreter.get_categories() 195 196 def process_outputs(self, inputs, outputs): 197 return self._interpreter.process_outputs(inputs, outputs) 198 199 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/datumaro/plugins/openvino_launcher.py b/datumaro/plugins/openvino_launcher.py --- a/datumaro/plugins/openvino_launcher.py +++ b/datumaro/plugins/openvino_launcher.py @@ -177,7 +177,7 @@ results = self._net.infer(inputs) if len(results) == 1: - return next(iter(results)) + return next(iter(results.values())) else: return results
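The one-line datumaro fix above is the dict-iteration gotcha the issue describes: iterating `results` yields its *keys*, so `next(iter(results))` handed the interpreter the output layer name (the string 'prob') rather than the tensor, while `results.values()` yields the arrays. A standalone illustration with a dummy result dict — no OpenVINO required, and the shape is a made-up stand-in for mobilenet-v2's classification output:

```
import numpy as np

# IECore.load_network(...).infer() returns {layer_name: ndarray};
# this dummy mimics that structure.
results = {'prob': np.zeros((1, 1000), dtype=np.float32)}

wrong = next(iter(results))            # iterates keys -> the string 'prob'
right = next(iter(results.values()))   # -> the actual output tensor

assert wrong == 'prob'
assert isinstance(right, np.ndarray)
```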
{"golden_diff": "diff --git a/datumaro/plugins/openvino_launcher.py b/datumaro/plugins/openvino_launcher.py\n--- a/datumaro/plugins/openvino_launcher.py\n+++ b/datumaro/plugins/openvino_launcher.py\n@@ -177,7 +177,7 @@\n \n results = self._net.infer(inputs)\n if len(results) == 1:\n- return next(iter(results))\n+ return next(iter(results.values()))\n else:\n return results\n", "issue": "infer result passed from openvino launcher to interpreter is not appropriate.\nI tried model run using openvino's mobileenet-v2-pytorch model.\r\n(using mobilenet-v2-pytorch.xml, mobilenet-v2-pytorch.bin)\r\n\r\n`datum model run -p proj -m model-0`\r\n\r\nHowever, only the name of the layer (ex. 'prob' string) comes into the input parameters(outputs) of the interpreter. Please check the return result of OpenvinoLauncher.infer\r\n\r\n`results = self._net.infer(inputs)` line 178, openvino_launcher.py\r\nDebugging results are normal up to the code above, but it seems that only the name of the result layer is returned when returning and passing to interpreter.\n", "before_files": [{"content": "\n# Copyright (C) 2019-2020 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\n# pylint: disable=exec-used\n\nimport cv2\nimport logging as log\nimport numpy as np\nimport os.path as osp\nimport shutil\n\nfrom openvino.inference_engine import IECore\n\nfrom datumaro.components.cli_plugin import CliPlugin\nfrom datumaro.components.launcher import Launcher\n\n\nclass _OpenvinoImporter(CliPlugin):\n @staticmethod\n def _parse_output_layers(s):\n return [s.strip() for s in s.split(',')]\n\n @classmethod\n def build_cmdline_parser(cls, **kwargs):\n parser = super().build_cmdline_parser(**kwargs)\n parser.add_argument('-d', '--description', required=True,\n help=\"Path to the model description file (.xml)\")\n parser.add_argument('-w', '--weights', required=True,\n help=\"Path to the model weights file (.bin)\")\n parser.add_argument('-i', '--interpreter', required=True,\n help=\"Path to the network output interprter script (.py)\")\n parser.add_argument('--device', default='CPU',\n help=\"Target device (default: %(default)s)\")\n parser.add_argument('--output-layers', type=cls._parse_output_layers,\n help=\"A comma-separated list of extra output layers\")\n return parser\n\n @staticmethod\n def copy_model(model_dir, model):\n shutil.copy(model['description'],\n osp.join(model_dir, osp.basename(model['description'])))\n model['description'] = osp.basename(model['description'])\n\n shutil.copy(model['weights'],\n osp.join(model_dir, osp.basename(model['weights'])))\n model['weights'] = osp.basename(model['weights'])\n\n shutil.copy(model['interpreter'],\n osp.join(model_dir, osp.basename(model['interpreter'])))\n model['interpreter'] = osp.basename(model['interpreter'])\n\n\nclass InterpreterScript:\n def __init__(self, path):\n with open(path, 'r') as f:\n script = f.read()\n\n context = {}\n exec(script, context, context)\n\n process_outputs = context.get('process_outputs')\n if not callable(process_outputs):\n raise Exception(\"Can't find 'process_outputs' function in \"\n \"the interpreter script\")\n self.__dict__['process_outputs'] = process_outputs\n\n get_categories = context.get('get_categories')\n assert get_categories is None or callable(get_categories)\n if get_categories:\n self.__dict__['get_categories'] = get_categories\n\n @staticmethod\n def get_categories():\n return None\n\n @staticmethod\n def process_outputs(inputs, outputs):\n raise NotImplementedError(\n \"Function should be implemented in the 
interpreter script\")\n\n\nclass OpenvinoLauncher(Launcher):\n cli_plugin = _OpenvinoImporter\n\n def __init__(self, description, weights, interpreter,\n device=None, model_dir=None, output_layers=None):\n if not model_dir:\n model_dir = ''\n if not osp.isfile(description):\n description = osp.join(model_dir, description)\n if not osp.isfile(description):\n raise Exception('Failed to open model description file \"%s\"' % \\\n (description))\n\n if not osp.isfile(weights):\n weights = osp.join(model_dir, weights)\n if not osp.isfile(weights):\n raise Exception('Failed to open model weights file \"%s\"' % \\\n (weights))\n\n if not osp.isfile(interpreter):\n interpreter = osp.join(model_dir, interpreter)\n if not osp.isfile(interpreter):\n raise Exception('Failed to open model interpreter script file \"%s\"' % \\\n (interpreter))\n\n self._interpreter = InterpreterScript(interpreter)\n\n self._device = device or 'CPU'\n self._output_blobs = output_layers\n\n self._ie = IECore()\n self._network = self._ie.read_network(description, weights)\n self._check_model_support(self._network, self._device)\n self._load_executable_net()\n\n def _check_model_support(self, net, device):\n not_supported_layers = set(name\n for name, dev in self._ie.query_network(net, device).items()\n if not dev)\n if len(not_supported_layers) != 0:\n log.error(\"The following layers are not supported \" \\\n \"by the plugin for device '%s': %s.\" % \\\n (device, ', '.join(not_supported_layers)))\n raise NotImplementedError(\n \"Some layers are not supported on the device\")\n\n def _load_executable_net(self, batch_size=1):\n network = self._network\n\n if self._output_blobs:\n network.add_outputs(self._output_blobs)\n\n iter_inputs = iter(network.input_info)\n self._input_blob = next(iter_inputs)\n\n # NOTE: handling for the inclusion of `image_info` in OpenVino2019\n self._require_image_info = 'image_info' in network.input_info\n if self._input_blob == 'image_info':\n self._input_blob = next(iter_inputs)\n\n self._input_layout = network.input_info[self._input_blob].input_data.shape\n self._input_layout[0] = batch_size\n network.reshape({self._input_blob: self._input_layout})\n self._batch_size = batch_size\n\n self._net = self._ie.load_network(network=network, num_requests=1,\n device_name=self._device)\n\n def infer(self, inputs):\n assert len(inputs.shape) == 4, \\\n \"Expected an input image in (N, H, W, C) format, got %s\" % \\\n (inputs.shape, )\n\n if inputs.shape[3] == 1: # A batch of single-channel images\n inputs = np.repeat(inputs, 3, axis=3)\n\n assert inputs.shape[3] == 3, \\\n \"Expected BGR input, got %s\" % (inputs.shape, )\n\n n, c, h, w = self._input_layout\n if inputs.shape[1:3] != (h, w):\n resized_inputs = np.empty((n, h, w, c), dtype=inputs.dtype)\n for inp, resized_input in zip(inputs, resized_inputs):\n cv2.resize(inp, (w, h), resized_input)\n inputs = resized_inputs\n inputs = inputs.transpose((0, 3, 1, 2)) # NHWC to NCHW\n inputs = {self._input_blob: inputs}\n if self._require_image_info:\n info = np.zeros([1, 3])\n info[0, 0] = h\n info[0, 1] = w\n info[0, 2] = 1.0 # scale\n inputs['image_info'] = info\n\n results = self._net.infer(inputs)\n if len(results) == 1:\n return next(iter(results))\n else:\n return results\n\n def launch(self, inputs):\n batch_size = len(inputs)\n if self._batch_size < batch_size:\n self._load_executable_net(batch_size)\n\n outputs = self.infer(inputs)\n results = self.process_outputs(inputs, outputs)\n return results\n\n def categories(self):\n return 
self._interpreter.get_categories()\n\n def process_outputs(self, inputs, outputs):\n return self._interpreter.process_outputs(inputs, outputs)\n\n", "path": "datumaro/plugins/openvino_launcher.py"}], "after_files": [{"content": "\n# Copyright (C) 2019-2020 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\n# pylint: disable=exec-used\n\nimport cv2\nimport logging as log\nimport numpy as np\nimport os.path as osp\nimport shutil\n\nfrom openvino.inference_engine import IECore\n\nfrom datumaro.components.cli_plugin import CliPlugin\nfrom datumaro.components.launcher import Launcher\n\n\nclass _OpenvinoImporter(CliPlugin):\n @staticmethod\n def _parse_output_layers(s):\n return [s.strip() for s in s.split(',')]\n\n @classmethod\n def build_cmdline_parser(cls, **kwargs):\n parser = super().build_cmdline_parser(**kwargs)\n parser.add_argument('-d', '--description', required=True,\n help=\"Path to the model description file (.xml)\")\n parser.add_argument('-w', '--weights', required=True,\n help=\"Path to the model weights file (.bin)\")\n parser.add_argument('-i', '--interpreter', required=True,\n help=\"Path to the network output interprter script (.py)\")\n parser.add_argument('--device', default='CPU',\n help=\"Target device (default: %(default)s)\")\n parser.add_argument('--output-layers', type=cls._parse_output_layers,\n help=\"A comma-separated list of extra output layers\")\n return parser\n\n @staticmethod\n def copy_model(model_dir, model):\n shutil.copy(model['description'],\n osp.join(model_dir, osp.basename(model['description'])))\n model['description'] = osp.basename(model['description'])\n\n shutil.copy(model['weights'],\n osp.join(model_dir, osp.basename(model['weights'])))\n model['weights'] = osp.basename(model['weights'])\n\n shutil.copy(model['interpreter'],\n osp.join(model_dir, osp.basename(model['interpreter'])))\n model['interpreter'] = osp.basename(model['interpreter'])\n\n\nclass InterpreterScript:\n def __init__(self, path):\n with open(path, 'r') as f:\n script = f.read()\n\n context = {}\n exec(script, context, context)\n\n process_outputs = context.get('process_outputs')\n if not callable(process_outputs):\n raise Exception(\"Can't find 'process_outputs' function in \"\n \"the interpreter script\")\n self.__dict__['process_outputs'] = process_outputs\n\n get_categories = context.get('get_categories')\n assert get_categories is None or callable(get_categories)\n if get_categories:\n self.__dict__['get_categories'] = get_categories\n\n @staticmethod\n def get_categories():\n return None\n\n @staticmethod\n def process_outputs(inputs, outputs):\n raise NotImplementedError(\n \"Function should be implemented in the interpreter script\")\n\n\nclass OpenvinoLauncher(Launcher):\n cli_plugin = _OpenvinoImporter\n\n def __init__(self, description, weights, interpreter,\n device=None, model_dir=None, output_layers=None):\n if not model_dir:\n model_dir = ''\n if not osp.isfile(description):\n description = osp.join(model_dir, description)\n if not osp.isfile(description):\n raise Exception('Failed to open model description file \"%s\"' % \\\n (description))\n\n if not osp.isfile(weights):\n weights = osp.join(model_dir, weights)\n if not osp.isfile(weights):\n raise Exception('Failed to open model weights file \"%s\"' % \\\n (weights))\n\n if not osp.isfile(interpreter):\n interpreter = osp.join(model_dir, interpreter)\n if not osp.isfile(interpreter):\n raise Exception('Failed to open model interpreter script file \"%s\"' % \\\n (interpreter))\n\n self._interpreter = 
InterpreterScript(interpreter)\n\n self._device = device or 'CPU'\n self._output_blobs = output_layers\n\n self._ie = IECore()\n self._network = self._ie.read_network(description, weights)\n self._check_model_support(self._network, self._device)\n self._load_executable_net()\n\n def _check_model_support(self, net, device):\n not_supported_layers = set(name\n for name, dev in self._ie.query_network(net, device).items()\n if not dev)\n if len(not_supported_layers) != 0:\n log.error(\"The following layers are not supported \" \\\n \"by the plugin for device '%s': %s.\" % \\\n (device, ', '.join(not_supported_layers)))\n raise NotImplementedError(\n \"Some layers are not supported on the device\")\n\n def _load_executable_net(self, batch_size=1):\n network = self._network\n\n if self._output_blobs:\n network.add_outputs(self._output_blobs)\n\n iter_inputs = iter(network.input_info)\n self._input_blob = next(iter_inputs)\n\n # NOTE: handling for the inclusion of `image_info` in OpenVino2019\n self._require_image_info = 'image_info' in network.input_info\n if self._input_blob == 'image_info':\n self._input_blob = next(iter_inputs)\n\n self._input_layout = network.input_info[self._input_blob].input_data.shape\n self._input_layout[0] = batch_size\n network.reshape({self._input_blob: self._input_layout})\n self._batch_size = batch_size\n\n self._net = self._ie.load_network(network=network, num_requests=1,\n device_name=self._device)\n\n def infer(self, inputs):\n assert len(inputs.shape) == 4, \\\n \"Expected an input image in (N, H, W, C) format, got %s\" % \\\n (inputs.shape, )\n\n if inputs.shape[3] == 1: # A batch of single-channel images\n inputs = np.repeat(inputs, 3, axis=3)\n\n assert inputs.shape[3] == 3, \\\n \"Expected BGR input, got %s\" % (inputs.shape, )\n\n n, c, h, w = self._input_layout\n if inputs.shape[1:3] != (h, w):\n resized_inputs = np.empty((n, h, w, c), dtype=inputs.dtype)\n for inp, resized_input in zip(inputs, resized_inputs):\n cv2.resize(inp, (w, h), resized_input)\n inputs = resized_inputs\n inputs = inputs.transpose((0, 3, 1, 2)) # NHWC to NCHW\n inputs = {self._input_blob: inputs}\n if self._require_image_info:\n info = np.zeros([1, 3])\n info[0, 0] = h\n info[0, 1] = w\n info[0, 2] = 1.0 # scale\n inputs['image_info'] = info\n\n results = self._net.infer(inputs)\n if len(results) == 1:\n return next(iter(results.values()))\n else:\n return results\n\n def launch(self, inputs):\n batch_size = len(inputs)\n if self._batch_size < batch_size:\n self._load_executable_net(batch_size)\n\n outputs = self.infer(inputs)\n results = self.process_outputs(inputs, outputs)\n return results\n\n def categories(self):\n return self._interpreter.get_categories()\n\n def process_outputs(self, inputs, outputs):\n return self._interpreter.process_outputs(inputs, outputs)\n\n", "path": "datumaro/plugins/openvino_launcher.py"}]}
2496
104
gh_patches_debug_19923
rasdani/github-patches
git_diff
dotkom__onlineweb4-3010
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- "Vis påmeldte" shows list of both attending and waiting list **Describe the bug** A clear and concise description of what the bug is. "Vis påmeldte" on arrangements shows both attending people and people on the waiting list. This can lead to misunderstandings because people on the waiting list might think they're attending the arrangement. **To Reproduce** Steps to reproduce the behavior: 1. Go to any arrangement 2. Click on 'Vis påmeldte' 3. Scroll down to bottom of list 4. See error **Expected behavior** A clear and concise description of what you expected to happen. I expect the "vis påmeldte" modal to show only attending people. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `apps/events/api/views.py` Content: ``` 1 from django.shortcuts import get_object_or_404 2 from guardian.shortcuts import get_objects_for_user 3 from rest_framework import mixins, permissions, status, viewsets 4 from rest_framework.decorators import action 5 from rest_framework.exceptions import NotFound 6 from rest_framework.response import Response 7 8 from apps.payment.serializers import PaymentReadOnlySerializer 9 from apps.profiles.models import Privacy 10 11 from ..constants import AttendStatus 12 from ..filters import ( 13 EventFilter, 14 ExtrasFilter, 15 FieldOfStudyRuleFilter, 16 GradeRuleFilter, 17 RuleBundleFilter, 18 UserGroupRuleFilter, 19 ) 20 from ..models import ( 21 AttendanceEvent, 22 Attendee, 23 Event, 24 Extras, 25 FieldOfStudyRule, 26 GradeRule, 27 RuleBundle, 28 UserGroupRule, 29 ) 30 from ..utils import handle_attend_event_payment 31 from .permissions import ( 32 ChangeAttendeePermission, 33 RegisterPermission, 34 UnregisterPermission, 35 ) 36 from .register_attendance_serializer import RegisterAttendanceSerializer 37 from .serializers import ( 38 AttendanceEventSerializer, 39 AttendeeAdministrateSerializer, 40 AttendeeSerializer, 41 AttendeeUpdateSerializer, 42 EventSerializer, 43 ExtrasSerializer, 44 FieldOfStudyRuleSerializer, 45 GradeRuleSerializer, 46 PublicAttendeeSerializer, 47 RegisterSerializer, 48 RuleBundleSerializer, 49 UserGroupRuleSerializer, 50 ) 51 52 53 class EventViewSet(viewsets.ModelViewSet): 54 serializer_class = EventSerializer 55 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,) 56 filterset_class = EventFilter 57 ordering_fields = ( 58 "event_start", 59 "event_end", 60 "id", 61 "closest", 62 "has_passed", 63 ) 64 ordering = ("has_passed", "closest", "id") 65 66 def get_queryset(self): 67 user = self.request.user 68 return Event.by_nearest_active_event.get_queryset_for_user(user) 69 70 71 class AttendanceEventViewSet(viewsets.ModelViewSet): 72 serializer_class = AttendanceEventSerializer 73 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,) 74 queryset = AttendanceEvent.objects.all() 75 76 def get_queryset(self): 77 user = self.request.user 78 events = Event.by_registration.get_queryset_for_user(user) 79 return super().get_queryset().filter(event__in=events) 80 81 @action( 82 detail=True, 83 methods=["POST"], 84 permission_classes=(permissions.IsAuthenticated, RegisterPermission), 85 serializer_class=RegisterSerializer, 86 ) 87 def register(self, request, pk=None): 88 user = request.user 89 privacy: Privacy = user.privacy 90 attendance_event: AttendanceEvent = self.get_object() 91 # Check if the recaptcha and other 
request data is valid 92 register_serializer = self.get_serializer(data=request.data) 93 register_serializer.is_valid(raise_exception=True) 94 data = register_serializer.validated_data 95 # Set the values to the users default settings if sent data is empty 96 # intentionally uses that bool(None) == False 97 attending_visibility = ( 98 specific 99 if (specific := data.get("show_as_attending_event")) is not None 100 else bool(privacy.visible_as_attending_events) 101 ) 102 allow_pictures = ( 103 specific 104 if (specific := data.get("allow_pictures")) is not None 105 else bool(privacy.allow_pictures) 106 ) 107 108 attendee = Attendee.objects.create( 109 event=attendance_event, 110 user=user, 111 show_as_attending_event=attending_visibility, 112 allow_pictures=allow_pictures, 113 note=data.get("note"), 114 ) 115 116 if attendance_event.payment(): 117 handle_attend_event_payment(attendance_event.event, user) 118 119 attendee_serializer = AttendeeSerializer(attendee) 120 return Response(data=attendee_serializer.data, status=status.HTTP_201_CREATED) 121 122 @action( 123 detail=True, 124 methods=["DELETE"], 125 permission_classes=(permissions.IsAuthenticated, UnregisterPermission), 126 ) 127 def unregister(self, request, pk=None): 128 user = request.user 129 attendance_event: AttendanceEvent = self.get_object() 130 attendee = Attendee.objects.get(event=attendance_event, user=user) 131 # Attendees un-attend with themselves as the admin user. 132 attendee.unattend(user) 133 134 return Response(status=status.HTTP_204_NO_CONTENT) 135 136 @action( 137 detail=True, 138 methods=["GET"], 139 permission_classes=(permissions.IsAuthenticated,), 140 serializer_class=PublicAttendeeSerializer, 141 url_path="public-attendees", 142 ) 143 def public_attendees(self, request, pk=None): 144 attendance_event: AttendanceEvent = self.get_object() 145 attendees = ( 146 attendance_event.attending_attendees_qs | attendance_event.waitlist_qs 147 ) 148 attendees = attendees.order_by("-show_as_attending_event", "timestamp") 149 serializer = self.get_serializer(attendees, many=True) 150 151 return Response(data=serializer.data, status=status.HTTP_200_OK) 152 153 @action( 154 detail=True, 155 methods=["GET"], 156 permission_classes=(permissions.IsAuthenticated,), 157 serializer_class=AttendeeSerializer, 158 ) 159 def attendee(self, request, pk=None): 160 user = request.user 161 attendance_event: AttendanceEvent = self.get_object() 162 attendee = get_object_or_404(Attendee, event=attendance_event, user=user) 163 serializer = self.get_serializer(attendee) 164 return Response(data=serializer.data, status=status.HTTP_200_OK) 165 166 @action( 167 detail=True, 168 methods=["GET"], 169 permission_classes=(permissions.IsAuthenticated,), 170 serializer_class=ExtrasSerializer, 171 ) 172 def extras(self, request, pk=None): 173 attendance_event: AttendanceEvent = self.get_object() 174 serializer = self.get_serializer(attendance_event.extras, many=True) 175 return Response(data=serializer.data, status=status.HTTP_200_OK) 176 177 @action( 178 detail=True, 179 methods=["GET"], 180 permission_classes=(permissions.IsAuthenticated,), 181 serializer_class=PaymentReadOnlySerializer, 182 ) 183 def payment(self, request, pk=None): 184 attendance_event: AttendanceEvent = self.get_object() 185 payment = attendance_event.get_payment() 186 if not payment: 187 raise NotFound 188 serializer = self.get_serializer(payment) 189 return Response(data=serializer.data, status=status.HTTP_200_OK) 190 191 192 class AttendeeViewSet( 193 viewsets.GenericViewSet, 
mixins.ListModelMixin, mixins.RetrieveModelMixin 194 ): 195 serializer_class = AttendeeSerializer 196 filterset_fields = ( 197 "event", 198 "attended", 199 "user", 200 "show_as_attending_event", 201 "allow_pictures", 202 "extras", 203 ) 204 205 @staticmethod 206 def _get_allowed_attendees(user): 207 """ 208 A user is allowed to see attendees for their own user, and for events they are organizing. 209 """ 210 if user.is_anonymous: 211 return Attendee.objects.none() 212 213 attendees = get_objects_for_user( 214 user, "events.change_attendee", accept_global_perms=False 215 ) 216 attendees |= Attendee.objects.filter(user=user) 217 return attendees.distinct() 218 219 def get_queryset(self): 220 return self._get_allowed_attendees(self.request.user) 221 222 @action( 223 detail=True, 224 methods=["PATCH", "PUT"], 225 permission_classes=(ChangeAttendeePermission,), 226 serializer_class=AttendeeUpdateSerializer, 227 ) 228 def change(self, request, pk=None): 229 attendee: Attendee = self.get_object() 230 partial = request.method == "PATCH" 231 serializer = self.get_serializer(attendee, data=request.data, partial=partial) 232 serializer.is_valid(raise_exception=True) 233 serializer.save() 234 return Response(data=serializer.data, status=status.HTTP_200_OK) 235 236 @action( 237 detail=True, 238 methods=["PATCH", "PUT"], 239 permission_classes=(permissions.DjangoObjectPermissions,), 240 serializer_class=AttendeeAdministrateSerializer, 241 ) 242 def administrate(self, request, pk=None): 243 attendee: Attendee = self.get_object() 244 partial = request.method == "PATCH" 245 serializer = self.get_serializer(attendee, data=request.data, partial=partial) 246 serializer.is_valid(raise_exception=True) 247 serializer.save() 248 return Response(data=serializer.data, status=status.HTTP_200_OK) 249 250 @action( 251 detail=False, 252 methods=["POST"], 253 serializer_class=RegisterAttendanceSerializer, 254 url_path="register-attendance", 255 ) 256 def register_attendance(self, request, pk=None): 257 """ 258 Register that a user has physically attended an event. 259 """ 260 serializer = self.get_serializer(data=request.data) 261 262 serializer.is_valid(raise_exception=True) 263 264 attendee = serializer.get_attendee(request.data) 265 attendee.attended = True 266 attendee.save() 267 268 return Response( 269 { 270 "detail": { 271 "message": f"{attendee.user} er registrert som deltaker. 
Velkommen!", 272 "attend_status": AttendStatus.REGISTER_SUCCESS, 273 "attendee": attendee.id, 274 } 275 }, 276 status=status.HTTP_200_OK, 277 ) 278 279 280 class ExtrasViewSet(viewsets.ModelViewSet): 281 serializer_class = ExtrasSerializer 282 queryset = Extras.objects.all() 283 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,) 284 filterset_class = ExtrasFilter 285 286 287 class RuleBundleViewSet(viewsets.ModelViewSet): 288 serializer_class = RuleBundleSerializer 289 queryset = RuleBundle.objects.all() 290 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,) 291 filterset_class = RuleBundleFilter 292 293 294 class FieldOfStudyRuleViewSet(viewsets.ModelViewSet): 295 serializer_class = FieldOfStudyRuleSerializer 296 queryset = FieldOfStudyRule.objects.all() 297 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,) 298 filterset_class = FieldOfStudyRuleFilter 299 300 301 class GradeRuleViewSet(viewsets.ModelViewSet): 302 serializer_class = GradeRuleSerializer 303 queryset = GradeRule.objects.all() 304 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,) 305 filterset_class = GradeRuleFilter 306 307 308 class UserGroupRuleViewSet(viewsets.ModelViewSet): 309 serializer_class = UserGroupRuleSerializer 310 queryset = UserGroupRule.objects.all() 311 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,) 312 filterset_class = UserGroupRuleFilter 313 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/apps/events/api/views.py b/apps/events/api/views.py
--- a/apps/events/api/views.py
+++ b/apps/events/api/views.py
@@ -142,10 +142,21 @@
     )
     def public_attendees(self, request, pk=None):
         attendance_event: AttendanceEvent = self.get_object()
-        attendees = (
-            attendance_event.attending_attendees_qs | attendance_event.waitlist_qs
-        )
-        attendees = attendees.order_by("-show_as_attending_event", "timestamp")
+        attendees = attendance_event.attending_attendees_qs
+        serializer = self.get_serializer(attendees, many=True)
+
+        return Response(data=serializer.data, status=status.HTTP_200_OK)
+
+    @action(
+        detail=True,
+        methods=["GET"],
+        permission_classes=(permissions.IsAuthenticated,),
+        serializer_class=PublicAttendeeSerializer,
+        url_path="public-on-waitlist",
+    )
+    def public_on_waitlist(self, request, pk=None):
+        attendance_event: AttendanceEvent = self.get_object()
+        attendees = attendance_event.waitlist_qs
         serializer = self.get_serializer(attendees, many=True)
 
         return Response(data=serializer.data, status=status.HTTP_200_OK)
{"golden_diff": "diff --git a/apps/events/api/views.py b/apps/events/api/views.py\n--- a/apps/events/api/views.py\n+++ b/apps/events/api/views.py\n@@ -142,10 +142,21 @@\n )\n def public_attendees(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n- attendees = (\n- attendance_event.attending_attendees_qs | attendance_event.waitlist_qs\n- )\n- attendees = attendees.order_by(\"-show_as_attending_event\", \"timestamp\")\n+ attendees = attendance_event.attending_attendees_qs\n+ serializer = self.get_serializer(attendees, many=True)\n+\n+ return Response(data=serializer.data, status=status.HTTP_200_OK)\n+\n+ @action(\n+ detail=True,\n+ methods=[\"GET\"],\n+ permission_classes=(permissions.IsAuthenticated,),\n+ serializer_class=PublicAttendeeSerializer,\n+ url_path=\"public-on-waitlist\",\n+ )\n+ def public_on_waitlist(self, request, pk=None):\n+ attendance_event: AttendanceEvent = self.get_object()\n+ attendees = attendance_event.waitlist_qs\n serializer = self.get_serializer(attendees, many=True)\n \n return Response(data=serializer.data, status=status.HTTP_200_OK)\n", "issue": "\"Vis p\u00e5meldte\" shows list of both attending and waiting list\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n\"Vis p\u00e5meldte\" on arrangements shows both attending people and people on the waiting list. This can lead to misunderstandings because people on the waiting list might think they're attending the arrangement.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to any arrangement\r\n2. Click on 'Vis p\u00e5meldte'\r\n3. Scroll down to bottom of list\r\n4. See error\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\nI expect the \"vis p\u00e5meldte\" modal to show only attending people.\n", "before_files": [{"content": "from django.shortcuts import get_object_or_404\nfrom guardian.shortcuts import get_objects_for_user\nfrom rest_framework import mixins, permissions, status, viewsets\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.response import Response\n\nfrom apps.payment.serializers import PaymentReadOnlySerializer\nfrom apps.profiles.models import Privacy\n\nfrom ..constants import AttendStatus\nfrom ..filters import (\n EventFilter,\n ExtrasFilter,\n FieldOfStudyRuleFilter,\n GradeRuleFilter,\n RuleBundleFilter,\n UserGroupRuleFilter,\n)\nfrom ..models import (\n AttendanceEvent,\n Attendee,\n Event,\n Extras,\n FieldOfStudyRule,\n GradeRule,\n RuleBundle,\n UserGroupRule,\n)\nfrom ..utils import handle_attend_event_payment\nfrom .permissions import (\n ChangeAttendeePermission,\n RegisterPermission,\n UnregisterPermission,\n)\nfrom .register_attendance_serializer import RegisterAttendanceSerializer\nfrom .serializers import (\n AttendanceEventSerializer,\n AttendeeAdministrateSerializer,\n AttendeeSerializer,\n AttendeeUpdateSerializer,\n EventSerializer,\n ExtrasSerializer,\n FieldOfStudyRuleSerializer,\n GradeRuleSerializer,\n PublicAttendeeSerializer,\n RegisterSerializer,\n RuleBundleSerializer,\n UserGroupRuleSerializer,\n)\n\n\nclass EventViewSet(viewsets.ModelViewSet):\n serializer_class = EventSerializer\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = EventFilter\n ordering_fields = (\n \"event_start\",\n \"event_end\",\n \"id\",\n \"closest\",\n \"has_passed\",\n )\n ordering = (\"has_passed\", \"closest\", \"id\")\n\n def 
get_queryset(self):\n user = self.request.user\n return Event.by_nearest_active_event.get_queryset_for_user(user)\n\n\nclass AttendanceEventViewSet(viewsets.ModelViewSet):\n serializer_class = AttendanceEventSerializer\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n queryset = AttendanceEvent.objects.all()\n\n def get_queryset(self):\n user = self.request.user\n events = Event.by_registration.get_queryset_for_user(user)\n return super().get_queryset().filter(event__in=events)\n\n @action(\n detail=True,\n methods=[\"POST\"],\n permission_classes=(permissions.IsAuthenticated, RegisterPermission),\n serializer_class=RegisterSerializer,\n )\n def register(self, request, pk=None):\n user = request.user\n privacy: Privacy = user.privacy\n attendance_event: AttendanceEvent = self.get_object()\n # Check if the recaptcha and other request data is valid\n register_serializer = self.get_serializer(data=request.data)\n register_serializer.is_valid(raise_exception=True)\n data = register_serializer.validated_data\n # Set the values to the users default settings if sent data is empty\n # intentionally uses that bool(None) == False\n attending_visibility = (\n specific\n if (specific := data.get(\"show_as_attending_event\")) is not None\n else bool(privacy.visible_as_attending_events)\n )\n allow_pictures = (\n specific\n if (specific := data.get(\"allow_pictures\")) is not None\n else bool(privacy.allow_pictures)\n )\n\n attendee = Attendee.objects.create(\n event=attendance_event,\n user=user,\n show_as_attending_event=attending_visibility,\n allow_pictures=allow_pictures,\n note=data.get(\"note\"),\n )\n\n if attendance_event.payment():\n handle_attend_event_payment(attendance_event.event, user)\n\n attendee_serializer = AttendeeSerializer(attendee)\n return Response(data=attendee_serializer.data, status=status.HTTP_201_CREATED)\n\n @action(\n detail=True,\n methods=[\"DELETE\"],\n permission_classes=(permissions.IsAuthenticated, UnregisterPermission),\n )\n def unregister(self, request, pk=None):\n user = request.user\n attendance_event: AttendanceEvent = self.get_object()\n attendee = Attendee.objects.get(event=attendance_event, user=user)\n # Attendees un-attend with themselves as the admin user.\n attendee.unattend(user)\n\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=PublicAttendeeSerializer,\n url_path=\"public-attendees\",\n )\n def public_attendees(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n attendees = (\n attendance_event.attending_attendees_qs | attendance_event.waitlist_qs\n )\n attendees = attendees.order_by(\"-show_as_attending_event\", \"timestamp\")\n serializer = self.get_serializer(attendees, many=True)\n\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=AttendeeSerializer,\n )\n def attendee(self, request, pk=None):\n user = request.user\n attendance_event: AttendanceEvent = self.get_object()\n attendee = get_object_or_404(Attendee, event=attendance_event, user=user)\n serializer = self.get_serializer(attendee)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=ExtrasSerializer,\n )\n def extras(self, request, 
pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n serializer = self.get_serializer(attendance_event.extras, many=True)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=PaymentReadOnlySerializer,\n )\n def payment(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n payment = attendance_event.get_payment()\n if not payment:\n raise NotFound\n serializer = self.get_serializer(payment)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n\nclass AttendeeViewSet(\n viewsets.GenericViewSet, mixins.ListModelMixin, mixins.RetrieveModelMixin\n):\n serializer_class = AttendeeSerializer\n filterset_fields = (\n \"event\",\n \"attended\",\n \"user\",\n \"show_as_attending_event\",\n \"allow_pictures\",\n \"extras\",\n )\n\n @staticmethod\n def _get_allowed_attendees(user):\n \"\"\"\n A user is allowed to see attendees for their own user, and for events they are organizing.\n \"\"\"\n if user.is_anonymous:\n return Attendee.objects.none()\n\n attendees = get_objects_for_user(\n user, \"events.change_attendee\", accept_global_perms=False\n )\n attendees |= Attendee.objects.filter(user=user)\n return attendees.distinct()\n\n def get_queryset(self):\n return self._get_allowed_attendees(self.request.user)\n\n @action(\n detail=True,\n methods=[\"PATCH\", \"PUT\"],\n permission_classes=(ChangeAttendeePermission,),\n serializer_class=AttendeeUpdateSerializer,\n )\n def change(self, request, pk=None):\n attendee: Attendee = self.get_object()\n partial = request.method == \"PATCH\"\n serializer = self.get_serializer(attendee, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"PATCH\", \"PUT\"],\n permission_classes=(permissions.DjangoObjectPermissions,),\n serializer_class=AttendeeAdministrateSerializer,\n )\n def administrate(self, request, pk=None):\n attendee: Attendee = self.get_object()\n partial = request.method == \"PATCH\"\n serializer = self.get_serializer(attendee, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=False,\n methods=[\"POST\"],\n serializer_class=RegisterAttendanceSerializer,\n url_path=\"register-attendance\",\n )\n def register_attendance(self, request, pk=None):\n \"\"\"\n Register that a user has physically attended an event.\n \"\"\"\n serializer = self.get_serializer(data=request.data)\n\n serializer.is_valid(raise_exception=True)\n\n attendee = serializer.get_attendee(request.data)\n attendee.attended = True\n attendee.save()\n\n return Response(\n {\n \"detail\": {\n \"message\": f\"{attendee.user} er registrert som deltaker. 
Velkommen!\",\n \"attend_status\": AttendStatus.REGISTER_SUCCESS,\n \"attendee\": attendee.id,\n }\n },\n status=status.HTTP_200_OK,\n )\n\n\nclass ExtrasViewSet(viewsets.ModelViewSet):\n serializer_class = ExtrasSerializer\n queryset = Extras.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = ExtrasFilter\n\n\nclass RuleBundleViewSet(viewsets.ModelViewSet):\n serializer_class = RuleBundleSerializer\n queryset = RuleBundle.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = RuleBundleFilter\n\n\nclass FieldOfStudyRuleViewSet(viewsets.ModelViewSet):\n serializer_class = FieldOfStudyRuleSerializer\n queryset = FieldOfStudyRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = FieldOfStudyRuleFilter\n\n\nclass GradeRuleViewSet(viewsets.ModelViewSet):\n serializer_class = GradeRuleSerializer\n queryset = GradeRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = GradeRuleFilter\n\n\nclass UserGroupRuleViewSet(viewsets.ModelViewSet):\n serializer_class = UserGroupRuleSerializer\n queryset = UserGroupRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = UserGroupRuleFilter\n", "path": "apps/events/api/views.py"}], "after_files": [{"content": "from django.shortcuts import get_object_or_404\nfrom guardian.shortcuts import get_objects_for_user\nfrom rest_framework import mixins, permissions, status, viewsets\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.response import Response\n\nfrom apps.payment.serializers import PaymentReadOnlySerializer\nfrom apps.profiles.models import Privacy\n\nfrom ..constants import AttendStatus\nfrom ..filters import (\n EventFilter,\n ExtrasFilter,\n FieldOfStudyRuleFilter,\n GradeRuleFilter,\n RuleBundleFilter,\n UserGroupRuleFilter,\n)\nfrom ..models import (\n AttendanceEvent,\n Attendee,\n Event,\n Extras,\n FieldOfStudyRule,\n GradeRule,\n RuleBundle,\n UserGroupRule,\n)\nfrom ..utils import handle_attend_event_payment\nfrom .permissions import (\n ChangeAttendeePermission,\n RegisterPermission,\n UnregisterPermission,\n)\nfrom .register_attendance_serializer import RegisterAttendanceSerializer\nfrom .serializers import (\n AttendanceEventSerializer,\n AttendeeAdministrateSerializer,\n AttendeeSerializer,\n AttendeeUpdateSerializer,\n EventSerializer,\n ExtrasSerializer,\n FieldOfStudyRuleSerializer,\n GradeRuleSerializer,\n PublicAttendeeSerializer,\n RegisterSerializer,\n RuleBundleSerializer,\n UserGroupRuleSerializer,\n)\n\n\nclass EventViewSet(viewsets.ModelViewSet):\n serializer_class = EventSerializer\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = EventFilter\n ordering_fields = (\n \"event_start\",\n \"event_end\",\n \"id\",\n \"closest\",\n \"has_passed\",\n )\n ordering = (\"has_passed\", \"closest\", \"id\")\n\n def get_queryset(self):\n user = self.request.user\n return Event.by_nearest_active_event.get_queryset_for_user(user)\n\n\nclass AttendanceEventViewSet(viewsets.ModelViewSet):\n serializer_class = AttendanceEventSerializer\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n queryset = AttendanceEvent.objects.all()\n\n def get_queryset(self):\n user = self.request.user\n events = 
Event.by_registration.get_queryset_for_user(user)\n return super().get_queryset().filter(event__in=events)\n\n @action(\n detail=True,\n methods=[\"POST\"],\n permission_classes=(permissions.IsAuthenticated, RegisterPermission),\n serializer_class=RegisterSerializer,\n )\n def register(self, request, pk=None):\n user = request.user\n privacy: Privacy = user.privacy\n attendance_event: AttendanceEvent = self.get_object()\n # Check if the recaptcha and other request data is valid\n register_serializer = self.get_serializer(data=request.data)\n register_serializer.is_valid(raise_exception=True)\n data = register_serializer.validated_data\n # Set the values to the users default settings if sent data is empty\n # intentionally uses that bool(None) == False\n attending_visibility = (\n specific\n if (specific := data.get(\"show_as_attending_event\")) is not None\n else bool(privacy.visible_as_attending_events)\n )\n allow_pictures = (\n specific\n if (specific := data.get(\"allow_pictures\")) is not None\n else bool(privacy.allow_pictures)\n )\n\n attendee = Attendee.objects.create(\n event=attendance_event,\n user=user,\n show_as_attending_event=attending_visibility,\n allow_pictures=allow_pictures,\n note=data.get(\"note\"),\n )\n\n if attendance_event.payment():\n handle_attend_event_payment(attendance_event.event, user)\n\n attendee_serializer = AttendeeSerializer(attendee)\n return Response(data=attendee_serializer.data, status=status.HTTP_201_CREATED)\n\n @action(\n detail=True,\n methods=[\"DELETE\"],\n permission_classes=(permissions.IsAuthenticated, UnregisterPermission),\n )\n def unregister(self, request, pk=None):\n user = request.user\n attendance_event: AttendanceEvent = self.get_object()\n attendee = Attendee.objects.get(event=attendance_event, user=user)\n # Attendees un-attend with themselves as the admin user.\n attendee.unattend(user)\n\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=PublicAttendeeSerializer,\n url_path=\"public-attendees\",\n )\n def public_attendees(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n attendees = attendance_event.attending_attendees_qs\n serializer = self.get_serializer(attendees, many=True)\n\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=PublicAttendeeSerializer,\n url_path=\"public-on-waitlist\",\n )\n def public_on_waitlist(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n attendees = attendance_event.waitlist_qs\n serializer = self.get_serializer(attendees, many=True)\n\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=AttendeeSerializer,\n )\n def attendee(self, request, pk=None):\n user = request.user\n attendance_event: AttendanceEvent = self.get_object()\n attendee = get_object_or_404(Attendee, event=attendance_event, user=user)\n serializer = self.get_serializer(attendee)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=ExtrasSerializer,\n )\n def extras(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n 
serializer = self.get_serializer(attendance_event.extras, many=True)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=PaymentReadOnlySerializer,\n )\n def payment(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n payment = attendance_event.get_payment()\n if not payment:\n raise NotFound\n serializer = self.get_serializer(payment)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n\nclass AttendeeViewSet(\n viewsets.GenericViewSet, mixins.ListModelMixin, mixins.RetrieveModelMixin\n):\n serializer_class = AttendeeSerializer\n filterset_fields = (\n \"event\",\n \"attended\",\n \"user\",\n \"show_as_attending_event\",\n \"allow_pictures\",\n \"extras\",\n )\n\n @staticmethod\n def _get_allowed_attendees(user):\n \"\"\"\n A user is allowed to see attendees for their own user, and for events they are organizing.\n \"\"\"\n if user.is_anonymous:\n return Attendee.objects.none()\n\n attendees = get_objects_for_user(\n user, \"events.change_attendee\", accept_global_perms=False\n )\n attendees |= Attendee.objects.filter(user=user)\n return attendees.distinct()\n\n def get_queryset(self):\n return self._get_allowed_attendees(self.request.user)\n\n @action(\n detail=True,\n methods=[\"PATCH\", \"PUT\"],\n permission_classes=(ChangeAttendeePermission,),\n serializer_class=AttendeeUpdateSerializer,\n )\n def change(self, request, pk=None):\n attendee: Attendee = self.get_object()\n partial = request.method == \"PATCH\"\n serializer = self.get_serializer(attendee, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"PATCH\", \"PUT\"],\n permission_classes=(permissions.DjangoObjectPermissions,),\n serializer_class=AttendeeAdministrateSerializer,\n )\n def administrate(self, request, pk=None):\n attendee: Attendee = self.get_object()\n partial = request.method == \"PATCH\"\n serializer = self.get_serializer(attendee, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=False,\n methods=[\"POST\"],\n serializer_class=RegisterAttendanceSerializer,\n url_path=\"register-attendance\",\n )\n def register_attendance(self, request, pk=None):\n \"\"\"\n Register that a user has physically attended an event.\n \"\"\"\n serializer = self.get_serializer(data=request.data)\n\n serializer.is_valid(raise_exception=True)\n\n attendee = serializer.get_attendee(request.data)\n attendee.attended = True\n attendee.save()\n\n return Response(\n {\n \"detail\": {\n \"message\": f\"{attendee.user} er registrert som deltaker. 
Velkommen!\",\n \"attend_status\": AttendStatus.REGISTER_SUCCESS,\n \"attendee\": attendee.id,\n }\n },\n status=status.HTTP_200_OK,\n )\n\n\nclass ExtrasViewSet(viewsets.ModelViewSet):\n serializer_class = ExtrasSerializer\n queryset = Extras.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = ExtrasFilter\n\n\nclass RuleBundleViewSet(viewsets.ModelViewSet):\n serializer_class = RuleBundleSerializer\n queryset = RuleBundle.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = RuleBundleFilter\n\n\nclass FieldOfStudyRuleViewSet(viewsets.ModelViewSet):\n serializer_class = FieldOfStudyRuleSerializer\n queryset = FieldOfStudyRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = FieldOfStudyRuleFilter\n\n\nclass GradeRuleViewSet(viewsets.ModelViewSet):\n serializer_class = GradeRuleSerializer\n queryset = GradeRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = GradeRuleFilter\n\n\nclass UserGroupRuleViewSet(viewsets.ModelViewSet):\n serializer_class = UserGroupRuleSerializer\n queryset = UserGroupRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = UserGroupRuleFilter\n", "path": "apps/events/api/views.py"}]}
3388
279
gh_patches_debug_30785
rasdani/github-patches
git_diff
Parsl__parsl-1119
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Implement time limits for python apps Requested by @lgray. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `parsl/app/app.py` Content: ``` 1 """Definitions for the @App decorator and the App classes. 2 3 The App class encapsulates a generic leaf task that can be executed asynchronously. 4 """ 5 import logging 6 from abc import ABCMeta, abstractmethod 7 from inspect import getsource 8 from hashlib import md5 9 from inspect import signature 10 11 from parsl.app.errors import InvalidAppTypeError 12 13 logger = logging.getLogger(__name__) 14 15 16 class AppBase(metaclass=ABCMeta): 17 """This is the base class that defines the two external facing functions that an App must define. 18 19 The __init__ () which is called when the interpreter sees the definition of the decorated 20 function, and the __call__ () which is invoked when a decorated function is called by the user. 21 22 """ 23 24 def __init__(self, func, data_flow_kernel=None, walltime=60, executors='all', cache=False): 25 """Construct the App object. 26 27 Args: 28 - func (function): Takes the function to be made into an App 29 30 Kwargs: 31 - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for 32 managing this app. This can be omitted only 33 after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. 34 - walltime (int) : Walltime in seconds for the app execution. 35 - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'. 36 - cache (Bool) : Enable caching of this app ? 37 38 Returns: 39 - App object. 40 41 """ 42 self.__name__ = func.__name__ 43 self.func = func 44 self.data_flow_kernel = data_flow_kernel 45 self.status = 'created' 46 self.executors = executors 47 self.cache = cache 48 if not (isinstance(executors, list) or isinstance(executors, str)): 49 logger.error("App {} specifies invalid executor option, expects string or list".format( 50 func.__name__)) 51 52 if cache is True: 53 try: 54 self.fn_source = getsource(func) 55 except OSError: 56 logger.debug("Unable to get source code for AppCaching. Recommend creating module") 57 self.fn_source = func.__name__ 58 59 self.func_hash = md5(self.fn_source.encode('utf-8')).hexdigest() 60 else: 61 self.func_hash = func.__name__ 62 63 params = signature(func).parameters 64 65 self.kwargs = {} 66 if 'stdout' in params: 67 self.kwargs['stdout'] = params['stdout'].default 68 if 'stderr' in params: 69 self.kwargs['stderr'] = params['stderr'].default 70 self.outputs = params['outputs'].default if 'outputs' in params else [] 71 self.inputs = params['inputs'].default if 'inputs' in params else [] 72 73 @abstractmethod 74 def __call__(self, *args, **kwargs): 75 pass 76 77 78 def App(apptype, data_flow_kernel=None, walltime=60, cache=False, executors='all'): 79 """The App decorator function. 80 81 Args: 82 - apptype (string) : Apptype can be bash|python 83 84 Kwargs: 85 - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for 86 managing this app. This can be omitted only 87 after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. 88 - walltime (int) : Walltime for app in seconds, 89 default=60 90 - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'. 
91 - cache (Bool) : Enable caching of the app call 92 default=False 93 94 Returns: 95 A PythonApp or BashApp object, which when called runs the apps through the executor. 96 """ 97 98 from parsl.app.python import PythonApp 99 from parsl.app.bash import BashApp 100 101 logger.warning("The 'App' decorator will be deprecated in Parsl 0.8. Please use 'python_app' or 'bash_app' instead.") 102 103 if apptype == 'python': 104 app_class = PythonApp 105 elif apptype == 'bash': 106 app_class = BashApp 107 else: 108 raise InvalidAppTypeError("Invalid apptype requested {}; must be 'python' or 'bash'".format(apptype)) 109 110 def wrapper(f): 111 return app_class(f, 112 data_flow_kernel=data_flow_kernel, 113 walltime=walltime, 114 cache=cache, 115 executors=executors) 116 return wrapper 117 118 119 def python_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'): 120 """Decorator function for making python apps. 121 122 Parameters 123 ---------- 124 function : function 125 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis, 126 for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the 127 decorator is used alone, function will be the actual function being decorated, whereas if it 128 is called with arguments, function will be None. Default is None. 129 data_flow_kernel : DataFlowKernel 130 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can 131 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None. 132 walltime : int 133 Walltime for app in seconds. Default is 60. 134 executors : string or list 135 Labels of the executors that this app can execute over. Default is 'all'. 136 cache : bool 137 Enable caching of the app call. Default is False. 138 """ 139 from parsl.app.python import PythonApp 140 141 def decorator(func): 142 def wrapper(f): 143 return PythonApp(f, 144 data_flow_kernel=data_flow_kernel, 145 walltime=walltime, 146 cache=cache, 147 executors=executors) 148 return wrapper(func) 149 if function is not None: 150 return decorator(function) 151 return decorator 152 153 154 def bash_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'): 155 """Decorator function for making bash apps. 156 157 Parameters 158 ---------- 159 function : function 160 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis, 161 for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the 162 decorator is used alone, function will be the actual function being decorated, whereas if it 163 is called with arguments, function will be None. Default is None. 164 data_flow_kernel : DataFlowKernel 165 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can 166 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None. 167 walltime : int 168 Walltime for app in seconds. Default is 60. 169 executors : string or list 170 Labels of the executors that this app can execute over. Default is 'all'. 171 cache : bool 172 Enable caching of the app call. Default is False. 
173 """ 174 from parsl.app.bash import BashApp 175 176 def decorator(func): 177 def wrapper(f): 178 return BashApp(f, 179 data_flow_kernel=data_flow_kernel, 180 walltime=walltime, 181 cache=cache, 182 executors=executors) 183 return wrapper(func) 184 if function is not None: 185 return decorator(function) 186 return decorator 187 ``` Path: `parsl/app/python.py` Content: ``` 1 import logging 2 3 import tblib.pickling_support 4 tblib.pickling_support.install() 5 6 from parsl.app.futures import DataFuture 7 from parsl.app.app import AppBase 8 from parsl.app.errors import wrap_error 9 from parsl.dataflow.dflow import DataFlowKernelLoader 10 11 12 logger = logging.getLogger(__name__) 13 14 15 class PythonApp(AppBase): 16 """Extends AppBase to cover the Python App.""" 17 18 def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'): 19 super().__init__( 20 wrap_error(func), 21 data_flow_kernel=data_flow_kernel, 22 walltime=walltime, 23 executors=executors, 24 cache=cache 25 ) 26 27 def __call__(self, *args, **kwargs): 28 """This is where the call to a python app is handled. 29 30 Args: 31 - Arbitrary 32 Kwargs: 33 - Arbitrary 34 35 Returns: 36 If outputs=[...] was a kwarg then: 37 App_fut, [Data_Futures...] 38 else: 39 App_fut 40 41 """ 42 43 if self.data_flow_kernel is None: 44 dfk = DataFlowKernelLoader.dfk() 45 else: 46 dfk = self.data_flow_kernel 47 48 app_fut = dfk.submit(self.func, *args, 49 executors=self.executors, 50 fn_hash=self.func_hash, 51 cache=self.cache, 52 **kwargs) 53 54 # logger.debug("App[{}] assigned Task[{}]".format(self.func.__name__, 55 # app_fut.tid)) 56 out_futs = [DataFuture(app_fut, o, tid=app_fut.tid) 57 for o in kwargs.get('outputs', [])] 58 app_fut._outputs = out_futs 59 60 return app_fut 61 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/parsl/app/app.py b/parsl/app/app.py
--- a/parsl/app/app.py
+++ b/parsl/app/app.py
@@ -67,6 +67,8 @@
             self.kwargs['stdout'] = params['stdout'].default
         if 'stderr' in params:
             self.kwargs['stderr'] = params['stderr'].default
+        if 'walltime' in params:
+            self.kwargs['walltime'] = params['walltime'].default
         self.outputs = params['outputs'].default if 'outputs' in params else []
         self.inputs = params['inputs'].default if 'inputs' in params else []
 
diff --git a/parsl/app/python.py b/parsl/app/python.py
--- a/parsl/app/python.py
+++ b/parsl/app/python.py
@@ -12,6 +12,27 @@
 logger = logging.getLogger(__name__)
 
 
+def timeout(f, seconds):
+    def wrapper(*args, **kwargs):
+        import threading
+        import ctypes
+        import parsl.app.errors
+
+        def inject_exception(thread):
+            ctypes.pythonapi.PyThreadState_SetAsyncExc(
+                ctypes.c_long(thread),
+                ctypes.py_object(parsl.app.errors.AppTimeout)
+            )
+
+        thread = threading.current_thread().ident
+        timer = threading.Timer(seconds, inject_exception, args=[thread])
+        timer.start()
+        result = f(*args, **kwargs)
+        timer.cancel()
+        return result
+    return wrapper
+
+
 class PythonApp(AppBase):
     """Extends AppBase to cover the Python App."""
 
@@ -45,6 +66,9 @@
         else:
             dfk = self.data_flow_kernel
 
+        walltime = self.kwargs.get('walltime')
+        if walltime is not None:
+            self.func = timeout(self.func, walltime)
         app_fut = dfk.submit(self.func, *args,
                              executors=self.executors,
                              fn_hash=self.func_hash,
{"golden_diff": "diff --git a/parsl/app/app.py b/parsl/app/app.py\n--- a/parsl/app/app.py\n+++ b/parsl/app/app.py\n@@ -67,6 +67,8 @@\n self.kwargs['stdout'] = params['stdout'].default\n if 'stderr' in params:\n self.kwargs['stderr'] = params['stderr'].default\n+ if 'walltime' in params:\n+ self.kwargs['walltime'] = params['walltime'].default\n self.outputs = params['outputs'].default if 'outputs' in params else []\n self.inputs = params['inputs'].default if 'inputs' in params else []\n \ndiff --git a/parsl/app/python.py b/parsl/app/python.py\n--- a/parsl/app/python.py\n+++ b/parsl/app/python.py\n@@ -12,6 +12,27 @@\n logger = logging.getLogger(__name__)\n \n \n+def timeout(f, seconds):\n+ def wrapper(*args, **kwargs):\n+ import threading\n+ import ctypes\n+ import parsl.app.errors\n+\n+ def inject_exception(thread):\n+ ctypes.pythonapi.PyThreadState_SetAsyncExc(\n+ ctypes.c_long(thread),\n+ ctypes.py_object(parsl.app.errors.AppTimeout)\n+ )\n+\n+ thread = threading.current_thread().ident\n+ timer = threading.Timer(seconds, inject_exception, args=[thread])\n+ timer.start()\n+ result = f(*args, **kwargs)\n+ timer.cancel()\n+ return result\n+ return wrapper\n+\n+\n class PythonApp(AppBase):\n \"\"\"Extends AppBase to cover the Python App.\"\"\"\n \n@@ -45,6 +66,9 @@\n else:\n dfk = self.data_flow_kernel\n \n+ walltime = self.kwargs.get('walltime')\n+ if walltime is not None:\n+ self.func = timeout(self.func, walltime)\n app_fut = dfk.submit(self.func, *args,\n executors=self.executors,\n fn_hash=self.func_hash,\n", "issue": "Implement time limits for python apps\nRequested by @lgray.\n", "before_files": [{"content": "\"\"\"Definitions for the @App decorator and the App classes.\n\nThe App class encapsulates a generic leaf task that can be executed asynchronously.\n\"\"\"\nimport logging\nfrom abc import ABCMeta, abstractmethod\nfrom inspect import getsource\nfrom hashlib import md5\nfrom inspect import signature\n\nfrom parsl.app.errors import InvalidAppTypeError\n\nlogger = logging.getLogger(__name__)\n\n\nclass AppBase(metaclass=ABCMeta):\n \"\"\"This is the base class that defines the two external facing functions that an App must define.\n\n The __init__ () which is called when the interpreter sees the definition of the decorated\n function, and the __call__ () which is invoked when a decorated function is called by the user.\n\n \"\"\"\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, executors='all', cache=False):\n \"\"\"Construct the App object.\n\n Args:\n - func (function): Takes the function to be made into an App\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime in seconds for the app execution.\n - executors (str|list) : Labels of the executors that this app can execute over. 
Default is 'all'.\n - cache (Bool) : Enable caching of this app ?\n\n Returns:\n - App object.\n\n \"\"\"\n self.__name__ = func.__name__\n self.func = func\n self.data_flow_kernel = data_flow_kernel\n self.status = 'created'\n self.executors = executors\n self.cache = cache\n if not (isinstance(executors, list) or isinstance(executors, str)):\n logger.error(\"App {} specifies invalid executor option, expects string or list\".format(\n func.__name__))\n\n if cache is True:\n try:\n self.fn_source = getsource(func)\n except OSError:\n logger.debug(\"Unable to get source code for AppCaching. Recommend creating module\")\n self.fn_source = func.__name__\n\n self.func_hash = md5(self.fn_source.encode('utf-8')).hexdigest()\n else:\n self.func_hash = func.__name__\n\n params = signature(func).parameters\n\n self.kwargs = {}\n if 'stdout' in params:\n self.kwargs['stdout'] = params['stdout'].default\n if 'stderr' in params:\n self.kwargs['stderr'] = params['stderr'].default\n self.outputs = params['outputs'].default if 'outputs' in params else []\n self.inputs = params['inputs'].default if 'inputs' in params else []\n\n @abstractmethod\n def __call__(self, *args, **kwargs):\n pass\n\n\ndef App(apptype, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"The App decorator function.\n\n Args:\n - apptype (string) : Apptype can be bash|python\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime for app in seconds,\n default=60\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of the app call\n default=False\n\n Returns:\n A PythonApp or BashApp object, which when called runs the apps through the executor.\n \"\"\"\n\n from parsl.app.python import PythonApp\n from parsl.app.bash import BashApp\n\n logger.warning(\"The 'App' decorator will be deprecated in Parsl 0.8. Please use 'python_app' or 'bash_app' instead.\")\n\n if apptype == 'python':\n app_class = PythonApp\n elif apptype == 'bash':\n app_class = BashApp\n else:\n raise InvalidAppTypeError(\"Invalid apptype requested {}; must be 'python' or 'bash'\".format(apptype))\n\n def wrapper(f):\n return app_class(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper\n\n\ndef python_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"Decorator function for making python apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. 
Default is False.\n \"\"\"\n from parsl.app.python import PythonApp\n\n def decorator(func):\n def wrapper(f):\n return PythonApp(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n\n\ndef bash_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"Decorator function for making bash apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n \"\"\"\n from parsl.app.bash import BashApp\n\n def decorator(func):\n def wrapper(f):\n return BashApp(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n", "path": "parsl/app/app.py"}, {"content": "import logging\n\nimport tblib.pickling_support\ntblib.pickling_support.install()\n\nfrom parsl.app.futures import DataFuture\nfrom parsl.app.app import AppBase\nfrom parsl.app.errors import wrap_error\nfrom parsl.dataflow.dflow import DataFlowKernelLoader\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PythonApp(AppBase):\n \"\"\"Extends AppBase to cover the Python App.\"\"\"\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n super().__init__(\n wrap_error(func),\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n executors=executors,\n cache=cache\n )\n\n def __call__(self, *args, **kwargs):\n \"\"\"This is where the call to a python app is handled.\n\n Args:\n - Arbitrary\n Kwargs:\n - Arbitrary\n\n Returns:\n If outputs=[...] 
was a kwarg then:\n App_fut, [Data_Futures...]\n else:\n App_fut\n\n \"\"\"\n\n if self.data_flow_kernel is None:\n dfk = DataFlowKernelLoader.dfk()\n else:\n dfk = self.data_flow_kernel\n\n app_fut = dfk.submit(self.func, *args,\n executors=self.executors,\n fn_hash=self.func_hash,\n cache=self.cache,\n **kwargs)\n\n # logger.debug(\"App[{}] assigned Task[{}]\".format(self.func.__name__,\n # app_fut.tid))\n out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)\n for o in kwargs.get('outputs', [])]\n app_fut._outputs = out_futs\n\n return app_fut\n", "path": "parsl/app/python.py"}], "after_files": [{"content": "\"\"\"Definitions for the @App decorator and the App classes.\n\nThe App class encapsulates a generic leaf task that can be executed asynchronously.\n\"\"\"\nimport logging\nfrom abc import ABCMeta, abstractmethod\nfrom inspect import getsource\nfrom hashlib import md5\nfrom inspect import signature\n\nfrom parsl.app.errors import InvalidAppTypeError\n\nlogger = logging.getLogger(__name__)\n\n\nclass AppBase(metaclass=ABCMeta):\n \"\"\"This is the base class that defines the two external facing functions that an App must define.\n\n The __init__ () which is called when the interpreter sees the definition of the decorated\n function, and the __call__ () which is invoked when a decorated function is called by the user.\n\n \"\"\"\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, executors='all', cache=False):\n \"\"\"Construct the App object.\n\n Args:\n - func (function): Takes the function to be made into an App\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime in seconds for the app execution.\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of this app ?\n\n Returns:\n - App object.\n\n \"\"\"\n self.__name__ = func.__name__\n self.func = func\n self.data_flow_kernel = data_flow_kernel\n self.status = 'created'\n self.executors = executors\n self.cache = cache\n if not (isinstance(executors, list) or isinstance(executors, str)):\n logger.error(\"App {} specifies invalid executor option, expects string or list\".format(\n func.__name__))\n\n if cache is True:\n try:\n self.fn_source = getsource(func)\n except OSError:\n logger.debug(\"Unable to get source code for AppCaching. Recommend creating module\")\n self.fn_source = func.__name__\n\n self.func_hash = md5(self.fn_source.encode('utf-8')).hexdigest()\n else:\n self.func_hash = func.__name__\n\n params = signature(func).parameters\n\n self.kwargs = {}\n if 'stdout' in params:\n self.kwargs['stdout'] = params['stdout'].default\n if 'stderr' in params:\n self.kwargs['stderr'] = params['stderr'].default\n if 'walltime' in params:\n self.kwargs['walltime'] = params['walltime'].default\n self.outputs = params['outputs'].default if 'outputs' in params else []\n self.inputs = params['inputs'].default if 'inputs' in params else []\n\n @abstractmethod\n def __call__(self, *args, **kwargs):\n pass\n\n\ndef App(apptype, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"The App decorator function.\n\n Args:\n - apptype (string) : Apptype can be bash|python\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. 
This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime for app in seconds,\n default=60\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of the app call\n default=False\n\n Returns:\n A PythonApp or BashApp object, which when called runs the apps through the executor.\n \"\"\"\n\n from parsl.app.python import PythonApp\n from parsl.app.bash import BashApp\n\n logger.warning(\"The 'App' decorator will be deprecated in Parsl 0.8. Please use 'python_app' or 'bash_app' instead.\")\n\n if apptype == 'python':\n app_class = PythonApp\n elif apptype == 'bash':\n app_class = BashApp\n else:\n raise InvalidAppTypeError(\"Invalid apptype requested {}; must be 'python' or 'bash'\".format(apptype))\n\n def wrapper(f):\n return app_class(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper\n\n\ndef python_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"Decorator function for making python apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n \"\"\"\n from parsl.app.python import PythonApp\n\n def decorator(func):\n def wrapper(f):\n return PythonApp(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n\n\ndef bash_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"Decorator function for making bash apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. 
Default is False.\n \"\"\"\n from parsl.app.bash import BashApp\n\n def decorator(func):\n def wrapper(f):\n return BashApp(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n", "path": "parsl/app/app.py"}, {"content": "import logging\n\nimport tblib.pickling_support\ntblib.pickling_support.install()\n\nfrom parsl.app.futures import DataFuture\nfrom parsl.app.app import AppBase\nfrom parsl.app.errors import wrap_error\nfrom parsl.dataflow.dflow import DataFlowKernelLoader\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef timeout(f, seconds):\n def wrapper(*args, **kwargs):\n import threading\n import ctypes\n import parsl.app.errors\n\n def inject_exception(thread):\n ctypes.pythonapi.PyThreadState_SetAsyncExc(\n ctypes.c_long(thread),\n ctypes.py_object(parsl.app.errors.AppTimeout)\n )\n\n thread = threading.current_thread().ident\n timer = threading.Timer(seconds, inject_exception, args=[thread])\n timer.start()\n result = f(*args, **kwargs)\n timer.cancel()\n return result\n return wrapper\n\n\nclass PythonApp(AppBase):\n \"\"\"Extends AppBase to cover the Python App.\"\"\"\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n super().__init__(\n wrap_error(func),\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n executors=executors,\n cache=cache\n )\n\n def __call__(self, *args, **kwargs):\n \"\"\"This is where the call to a python app is handled.\n\n Args:\n - Arbitrary\n Kwargs:\n - Arbitrary\n\n Returns:\n If outputs=[...] was a kwarg then:\n App_fut, [Data_Futures...]\n else:\n App_fut\n\n \"\"\"\n\n if self.data_flow_kernel is None:\n dfk = DataFlowKernelLoader.dfk()\n else:\n dfk = self.data_flow_kernel\n\n walltime = self.kwargs.get('walltime')\n if walltime is not None:\n self.func = timeout(self.func, walltime)\n app_fut = dfk.submit(self.func, *args,\n executors=self.executors,\n fn_hash=self.func_hash,\n cache=self.cache,\n **kwargs)\n\n # logger.debug(\"App[{}] assigned Task[{}]\".format(self.func.__name__,\n # app_fut.tid))\n out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)\n for o in kwargs.get('outputs', [])]\n app_fut._outputs = out_futs\n\n return app_fut\n", "path": "parsl/app/python.py"}]}
2,888
442
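The core of the patch above is the `timeout` wrapper: a `threading.Timer` injects an asynchronous exception into the thread running the app once the walltime elapses. A minimal standalone sketch of that technique (the function and exception names here are illustrative stand-ins, not parsl's public API):

```python
import ctypes
import threading

class AppTimeout(Exception):
    """Stand-in for parsl.app.errors.AppTimeout."""

def run_with_timeout(fn, seconds, *args, **kwargs):
    # CPython-specific: schedule AppTimeout to be raised asynchronously
    # in *this* thread if fn has not returned within `seconds`.
    target = threading.current_thread().ident

    def inject():
        ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_long(target), ctypes.py_object(AppTimeout))

    timer = threading.Timer(seconds, inject)
    timer.start()
    try:
        return fn(*args, **kwargs)
    finally:
        timer.cancel()  # no injection if fn finished in time
```

Note the limitation inherent to `PyThreadState_SetAsyncExc`: the exception is only delivered between Python bytecode instructions, so a call blocked inside C code can overrun its walltime.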
gh_patches_debug_31
rasdani/github-patches
git_diff
aws-cloudformation__cfn-lint-1456
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- AWS::AutoScaling::AutoScalingGroup MaxInstanceLifetime Validation *cfn-lint version: 0.29.2* *Description of issue.* When using the parameter `MaxInstanceLifetime` for `AWS::AutoScaling::AutoScalingGroup` we are hit with the following lint error: ``` $ cfn-lint templates/proj/rgs/rgs_autoscale_stretch_elb.yml E3002 Invalid Property Resources/autoscalegroup/Properties/MaxInstanceLifetime templates/proj/rgs/rgs_autoscale_stretch_elb.yml:194:7 ``` The template which leads to the error: ``` [...] autoscalegroup: Type: AWS::AutoScaling::AutoScalingGroup Properties: AvailabilityZones: !Ref AvailabilityZones Cooldown: '300' HealthCheckGracePeriod: !Ref GracePeriod HealthCheckType: ELB MaxSize: !Ref MaxSize MinSize: !Ref MinSize MaxInstanceLifetime: !Ref MaxInstanceLifetime VPCZoneIdentifier: !Ref EC2SubnetIDs TargetGroupARNs: - !Ref elbtargetgroup LaunchConfigurationName: !Ref launchconfiguration Tags: [...] PropagateAtLaunch: true TerminationPolicies: - Default [..] ``` It seems the parameter is currently not supported by cfn-lint, would be cool to see support for it. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/cfnlint/version.py` Content: ``` 1 """ 2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 SPDX-License-Identifier: MIT-0 4 """ 5 6 __version__ = '0.29.3' 7 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/cfnlint/version.py b/src/cfnlint/version.py --- a/src/cfnlint/version.py +++ b/src/cfnlint/version.py @@ -3,4 +3,4 @@ SPDX-License-Identifier: MIT-0 """ -__version__ = '0.29.3' +__version__ = '0.29.4'
{"golden_diff": "diff --git a/src/cfnlint/version.py b/src/cfnlint/version.py\n--- a/src/cfnlint/version.py\n+++ b/src/cfnlint/version.py\n@@ -3,4 +3,4 @@\n SPDX-License-Identifier: MIT-0\n \"\"\"\n \n-__version__ = '0.29.3'\n+__version__ = '0.29.4'\n", "issue": "AWS::AutoScaling::AutoScalingGroup MaxInstanceLifetime Validation\n*cfn-lint version: 0.29.2*\r\n\r\n*Description of issue.*\r\n\r\nWhen using the parameter `MaxInstanceLifetime` for `AWS::AutoScaling::AutoScalingGroup` we are hit with the following lint error:\r\n\r\n```\r\n$ cfn-lint templates/proj/rgs/rgs_autoscale_stretch_elb.yml\r\nE3002 Invalid Property Resources/autoscalegroup/Properties/MaxInstanceLifetime\r\ntemplates/proj/rgs/rgs_autoscale_stretch_elb.yml:194:7\r\n```\r\n\r\nThe template which leads to the error:\r\n\r\n```\r\n[...]\r\n\r\n autoscalegroup:\r\n Type: AWS::AutoScaling::AutoScalingGroup\r\n Properties:\r\n AvailabilityZones: !Ref AvailabilityZones\r\n Cooldown: '300'\r\n HealthCheckGracePeriod: !Ref GracePeriod\r\n HealthCheckType: ELB\r\n MaxSize: !Ref MaxSize\r\n MinSize: !Ref MinSize\r\n MaxInstanceLifetime: !Ref MaxInstanceLifetime\r\n VPCZoneIdentifier: !Ref EC2SubnetIDs\r\n TargetGroupARNs:\r\n - !Ref elbtargetgroup\r\n LaunchConfigurationName: !Ref launchconfiguration\r\n Tags: [...]\r\n PropagateAtLaunch: true\r\n TerminationPolicies:\r\n - Default\r\n\r\n[..]\r\n```\r\n\r\nIt seems the parameter is currently not supported by cfn-lint, would be cool to see support for it.\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\n__version__ = '0.29.3'\n", "path": "src/cfnlint/version.py"}], "after_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\n__version__ = '0.29.4'\n", "path": "src/cfnlint/version.py"}]}
634
82
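For context on the error itself: E3002 comes from spec-driven property checking, where cfn-lint validates each property name against the CloudFormation resource specification bundled with the release, so a newly introduced property is flagged as invalid until the bundled spec catches up. An illustrative reduction of that mechanism (the `SPEC` table and function below are stand-ins, not cfn-lint's real internals):

```python
# Stand-in for the bundled CloudFormation resource specification.
SPEC = {
    "AWS::AutoScaling::AutoScalingGroup": {
        "AvailabilityZones", "Cooldown", "HealthCheckGracePeriod",
        "HealthCheckType", "LaunchConfigurationName", "MaxSize", "MinSize",
        "Tags", "TargetGroupARNs", "TerminationPolicies", "VPCZoneIdentifier",
        # "MaxInstanceLifetime" is absent until the spec is refreshed.
    },
}

def check_properties(resource_type, properties):
    allowed = SPEC[resource_type]
    return [f"E3002 Invalid Property {name}"
            for name in properties if name not in allowed]

check_properties("AWS::AutoScaling::AutoScalingGroup",
                 {"MaxSize": 3, "MaxInstanceLifetime": 604800})
# -> ['E3002 Invalid Property MaxInstanceLifetime']
```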
gh_patches_debug_55
rasdani/github-patches
git_diff
emissary-ingress__emissary-23
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Users need statsd support Ambassador needs to be able to send stats off to statsd, whatever statsd the user wants to use. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ambassador/VERSION.py` Content: ``` 1 # Don't change this line without also changing .bumpversion.cfg 2 Version = "0.5.0" 3 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ambassador/VERSION.py b/ambassador/VERSION.py --- a/ambassador/VERSION.py +++ b/ambassador/VERSION.py @@ -1,2 +1,2 @@ # Don't change this line without also changing .bumpversion.cfg -Version = "0.5.0" +Version = "0.5.1"
{"golden_diff": "diff --git a/ambassador/VERSION.py b/ambassador/VERSION.py\n--- a/ambassador/VERSION.py\n+++ b/ambassador/VERSION.py\n@@ -1,2 +1,2 @@\n # Don't change this line without also changing .bumpversion.cfg\n-Version = \"0.5.0\"\n+Version = \"0.5.1\"\n", "issue": "Users need statsd support\nAmbassador needs to be able to send stats off to statsd, whatever statsd the user wants to use.\n", "before_files": [{"content": "# Don't change this line without also changing .bumpversion.cfg\nVersion = \"0.5.0\"\n", "path": "ambassador/VERSION.py"}], "after_files": [{"content": "# Don't change this line without also changing .bumpversion.cfg\nVersion = \"0.5.1\"\n", "path": "ambassador/VERSION.py"}]}
315
80
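The recorded diff only bumps the release version; the feature request itself reduces to emitting metrics in the plain statsd wire format, one UDP datagram per event. A minimal sketch (host and port are assumptions; statsd conventionally listens on UDP 8125, and "whatever statsd the user wants" means both should be configurable):

```python
import socket

def statsd_count(name, value=1, host="127.0.0.1", port=8125):
    # statsd counter datagram: "<metric>:<value>|c" over UDP,
    # fire-and-forget, so a missing statsd never blocks the caller.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(f"{name}:{value}|c".encode("ascii"), (host, port))
    sock.close()

statsd_count("ambassador.requests")
```

Gauges and timers use the same format with `|g` and `|ms` suffixes respectively.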
gh_patches_debug_38263
rasdani/github-patches
git_diff
microsoft__MLOS-573
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Don't expose all params as shell environment variables by default _Originally posted by @bpkroth in https://github.com/microsoft/MLOS/pull/557#discussion_r1374921396_ --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `mlos_bench/mlos_bench/environments/script_env.py` Content: ``` 1 # 2 # Copyright (c) Microsoft Corporation. 3 # Licensed under the MIT License. 4 # 5 """ 6 Base scriptable benchmark environment. 7 """ 8 9 import abc 10 import logging 11 import re 12 from typing import Dict, Iterable, Optional 13 14 from mlos_bench.environments.base_environment import Environment 15 from mlos_bench.services.base_service import Service 16 from mlos_bench.tunables.tunable import TunableValue 17 from mlos_bench.tunables.tunable_groups import TunableGroups 18 19 from mlos_bench.util import try_parse_val 20 21 _LOG = logging.getLogger(__name__) 22 23 24 class ScriptEnv(Environment, metaclass=abc.ABCMeta): 25 """ 26 Base Environment that runs scripts for setup/run/teardown. 27 """ 28 29 _RE_INVALID = re.compile(r"[^a-zA-Z0-9_]") 30 31 def __init__(self, 32 *, 33 name: str, 34 config: dict, 35 global_config: Optional[dict] = None, 36 tunables: Optional[TunableGroups] = None, 37 service: Optional[Service] = None): 38 """ 39 Create a new environment for script execution. 40 41 Parameters 42 ---------- 43 name: str 44 Human-readable name of the environment. 45 config : dict 46 Free-format dictionary that contains the benchmark environment 47 configuration. Each config must have at least the `tunable_params` 48 and the `const_args` sections. It must also have at least one of 49 the following parameters: {`setup`, `run`, `teardown`}. 50 Additional parameters: 51 * `shell_env_params` - an array of parameters to pass to the script 52 as shell environment variables, and 53 * `shell_env_params_rename` - a dictionary of {to: from} mappings 54 of the script parameters. If not specified, replace all 55 non-alphanumeric characters with underscores. 56 If neither `shell_env_params` nor `shell_env_params_rename` are specified, 57 pass *all* parameters to the script. 58 global_config : dict 59 Free-format dictionary of global parameters (e.g., security credentials) 60 to be mixed in into the "const_args" section of the local config. 61 tunables : TunableGroups 62 A collection of tunable parameters for *all* environments. 63 service: Service 64 An optional service object (e.g., providing methods to 65 deploy or reboot a VM, etc.). 66 """ 67 super().__init__(name=name, config=config, global_config=global_config, 68 tunables=tunables, service=service) 69 70 self._script_setup = self.config.get("setup") 71 self._script_run = self.config.get("run") 72 self._script_teardown = self.config.get("teardown") 73 74 self._shell_env_params: Optional[Iterable[str]] = self.config.get("shell_env_params") 75 self._shell_env_params_rename: Dict[str, str] = self.config.get("shell_env_params_rename", {}) 76 77 results_stdout_pattern = self.config.get("results_stdout_pattern") 78 self._results_stdout_pattern: Optional[re.Pattern[str]] = \ 79 re.compile(results_stdout_pattern) if results_stdout_pattern else None 80 81 def _get_env_params(self) -> Dict[str, str]: 82 """ 83 Get the *shell* environment parameters to be passed to the script. 
84 85 Returns 86 ------- 87 env_params : Dict[str, str] 88 Parameters to pass as *shell* environment variables into the script. 89 This is usually a subset of `_params` with some possible conversions. 90 """ 91 rename: Dict[str, str] # {to: from} mapping of the script parameters. 92 if self._shell_env_params is None: 93 if self._shell_env_params_rename: 94 # Only rename specified - use it. 95 rename = self._shell_env_params_rename.copy() 96 else: 97 # FIXME: We should not be exposing all params by default. 98 # Neither `shell_env_params` nor rename are specified - use all params. 99 rename = {self._RE_INVALID.sub("_", key): key for key in self._params} 100 else: 101 # Use `shell_env_params` and rename if specified. 102 rename = {self._RE_INVALID.sub("_", key): key for key in self._shell_env_params} 103 rename.update(self._shell_env_params_rename) 104 105 return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()} 106 107 def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]: 108 """ 109 Extract the results from the stdout of the script. 110 111 Parameters 112 ---------- 113 stdout : str 114 The stdout of the script. 115 116 Returns 117 ------- 118 results : Dict[str, TunableValue] 119 A dictionary of results extracted from the stdout. 120 """ 121 if not self._results_stdout_pattern: 122 return {} 123 _LOG.debug("Extract regex: '%s' from: '%s'", self._results_stdout_pattern, stdout) 124 return {key: try_parse_val(val) for (key, val) in self._results_stdout_pattern.findall(stdout)} 125 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/mlos_bench/mlos_bench/environments/script_env.py b/mlos_bench/mlos_bench/environments/script_env.py --- a/mlos_bench/mlos_bench/environments/script_env.py +++ b/mlos_bench/mlos_bench/environments/script_env.py @@ -54,7 +54,7 @@ of the script parameters. If not specified, replace all non-alphanumeric characters with underscores. If neither `shell_env_params` nor `shell_env_params_rename` are specified, - pass *all* parameters to the script. + *no* additional shell parameters will be passed to the script. global_config : dict Free-format dictionary of global parameters (e.g., security credentials) to be mixed in into the "const_args" section of the local config. @@ -71,7 +71,7 @@ self._script_run = self.config.get("run") self._script_teardown = self.config.get("teardown") - self._shell_env_params: Optional[Iterable[str]] = self.config.get("shell_env_params") + self._shell_env_params: Iterable[str] = self.config.get("shell_env_params", []) self._shell_env_params_rename: Dict[str, str] = self.config.get("shell_env_params_rename", {}) results_stdout_pattern = self.config.get("results_stdout_pattern") @@ -88,20 +88,8 @@ Parameters to pass as *shell* environment variables into the script. This is usually a subset of `_params` with some possible conversions. """ - rename: Dict[str, str] # {to: from} mapping of the script parameters. - if self._shell_env_params is None: - if self._shell_env_params_rename: - # Only rename specified - use it. - rename = self._shell_env_params_rename.copy() - else: - # FIXME: We should not be exposing all params by default. - # Neither `shell_env_params` nor rename are specified - use all params. - rename = {self._RE_INVALID.sub("_", key): key for key in self._params} - else: - # Use `shell_env_params` and rename if specified. - rename = {self._RE_INVALID.sub("_", key): key for key in self._shell_env_params} - rename.update(self._shell_env_params_rename) - + rename = {self._RE_INVALID.sub("_", key): key for key in self._shell_env_params} + rename.update(self._shell_env_params_rename) return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()} def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]:
{"golden_diff": "diff --git a/mlos_bench/mlos_bench/environments/script_env.py b/mlos_bench/mlos_bench/environments/script_env.py\n--- a/mlos_bench/mlos_bench/environments/script_env.py\n+++ b/mlos_bench/mlos_bench/environments/script_env.py\n@@ -54,7 +54,7 @@\n of the script parameters. If not specified, replace all\n non-alphanumeric characters with underscores.\n If neither `shell_env_params` nor `shell_env_params_rename` are specified,\n- pass *all* parameters to the script.\n+ *no* additional shell parameters will be passed to the script.\n global_config : dict\n Free-format dictionary of global parameters (e.g., security credentials)\n to be mixed in into the \"const_args\" section of the local config.\n@@ -71,7 +71,7 @@\n self._script_run = self.config.get(\"run\")\n self._script_teardown = self.config.get(\"teardown\")\n \n- self._shell_env_params: Optional[Iterable[str]] = self.config.get(\"shell_env_params\")\n+ self._shell_env_params: Iterable[str] = self.config.get(\"shell_env_params\", [])\n self._shell_env_params_rename: Dict[str, str] = self.config.get(\"shell_env_params_rename\", {})\n \n results_stdout_pattern = self.config.get(\"results_stdout_pattern\")\n@@ -88,20 +88,8 @@\n Parameters to pass as *shell* environment variables into the script.\n This is usually a subset of `_params` with some possible conversions.\n \"\"\"\n- rename: Dict[str, str] # {to: from} mapping of the script parameters.\n- if self._shell_env_params is None:\n- if self._shell_env_params_rename:\n- # Only rename specified - use it.\n- rename = self._shell_env_params_rename.copy()\n- else:\n- # FIXME: We should not be exposing all params by default.\n- # Neither `shell_env_params` nor rename are specified - use all params.\n- rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._params}\n- else:\n- # Use `shell_env_params` and rename if specified.\n- rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._shell_env_params}\n- rename.update(self._shell_env_params_rename)\n-\n+ rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._shell_env_params}\n+ rename.update(self._shell_env_params_rename)\n return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()}\n \n def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]:\n", "issue": "Don't expose all params as shell environment variables by default\n_Originally posted by @bpkroth in https://github.com/microsoft/MLOS/pull/557#discussion_r1374921396_\r\n \n", "before_files": [{"content": "#\n# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT License.\n#\n\"\"\"\nBase scriptable benchmark environment.\n\"\"\"\n\nimport abc\nimport logging\nimport re\nfrom typing import Dict, Iterable, Optional\n\nfrom mlos_bench.environments.base_environment import Environment\nfrom mlos_bench.services.base_service import Service\nfrom mlos_bench.tunables.tunable import TunableValue\nfrom mlos_bench.tunables.tunable_groups import TunableGroups\n\nfrom mlos_bench.util import try_parse_val\n\n_LOG = logging.getLogger(__name__)\n\n\nclass ScriptEnv(Environment, metaclass=abc.ABCMeta):\n \"\"\"\n Base Environment that runs scripts for setup/run/teardown.\n \"\"\"\n\n _RE_INVALID = re.compile(r\"[^a-zA-Z0-9_]\")\n\n def __init__(self,\n *,\n name: str,\n config: dict,\n global_config: Optional[dict] = None,\n tunables: Optional[TunableGroups] = None,\n service: Optional[Service] = None):\n \"\"\"\n Create a new environment for script execution.\n\n Parameters\n ----------\n name: str\n Human-readable 
name of the environment.\n config : dict\n Free-format dictionary that contains the benchmark environment\n configuration. Each config must have at least the `tunable_params`\n and the `const_args` sections. It must also have at least one of\n the following parameters: {`setup`, `run`, `teardown`}.\n Additional parameters:\n * `shell_env_params` - an array of parameters to pass to the script\n as shell environment variables, and\n * `shell_env_params_rename` - a dictionary of {to: from} mappings\n of the script parameters. If not specified, replace all\n non-alphanumeric characters with underscores.\n If neither `shell_env_params` nor `shell_env_params_rename` are specified,\n pass *all* parameters to the script.\n global_config : dict\n Free-format dictionary of global parameters (e.g., security credentials)\n to be mixed in into the \"const_args\" section of the local config.\n tunables : TunableGroups\n A collection of tunable parameters for *all* environments.\n service: Service\n An optional service object (e.g., providing methods to\n deploy or reboot a VM, etc.).\n \"\"\"\n super().__init__(name=name, config=config, global_config=global_config,\n tunables=tunables, service=service)\n\n self._script_setup = self.config.get(\"setup\")\n self._script_run = self.config.get(\"run\")\n self._script_teardown = self.config.get(\"teardown\")\n\n self._shell_env_params: Optional[Iterable[str]] = self.config.get(\"shell_env_params\")\n self._shell_env_params_rename: Dict[str, str] = self.config.get(\"shell_env_params_rename\", {})\n\n results_stdout_pattern = self.config.get(\"results_stdout_pattern\")\n self._results_stdout_pattern: Optional[re.Pattern[str]] = \\\n re.compile(results_stdout_pattern) if results_stdout_pattern else None\n\n def _get_env_params(self) -> Dict[str, str]:\n \"\"\"\n Get the *shell* environment parameters to be passed to the script.\n\n Returns\n -------\n env_params : Dict[str, str]\n Parameters to pass as *shell* environment variables into the script.\n This is usually a subset of `_params` with some possible conversions.\n \"\"\"\n rename: Dict[str, str] # {to: from} mapping of the script parameters.\n if self._shell_env_params is None:\n if self._shell_env_params_rename:\n # Only rename specified - use it.\n rename = self._shell_env_params_rename.copy()\n else:\n # FIXME: We should not be exposing all params by default.\n # Neither `shell_env_params` nor rename are specified - use all params.\n rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._params}\n else:\n # Use `shell_env_params` and rename if specified.\n rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._shell_env_params}\n rename.update(self._shell_env_params_rename)\n\n return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()}\n\n def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]:\n \"\"\"\n Extract the results from the stdout of the script.\n\n Parameters\n ----------\n stdout : str\n The stdout of the script.\n\n Returns\n -------\n results : Dict[str, TunableValue]\n A dictionary of results extracted from the stdout.\n \"\"\"\n if not self._results_stdout_pattern:\n return {}\n _LOG.debug(\"Extract regex: '%s' from: '%s'\", self._results_stdout_pattern, stdout)\n return {key: try_parse_val(val) for (key, val) in self._results_stdout_pattern.findall(stdout)}\n", "path": "mlos_bench/mlos_bench/environments/script_env.py"}], "after_files": [{"content": "#\n# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT 
License.\n#\n\"\"\"\nBase scriptable benchmark environment.\n\"\"\"\n\nimport abc\nimport logging\nimport re\nfrom typing import Dict, Iterable, Optional\n\nfrom mlos_bench.environments.base_environment import Environment\nfrom mlos_bench.services.base_service import Service\nfrom mlos_bench.tunables.tunable import TunableValue\nfrom mlos_bench.tunables.tunable_groups import TunableGroups\n\nfrom mlos_bench.util import try_parse_val\n\n_LOG = logging.getLogger(__name__)\n\n\nclass ScriptEnv(Environment, metaclass=abc.ABCMeta):\n \"\"\"\n Base Environment that runs scripts for setup/run/teardown.\n \"\"\"\n\n _RE_INVALID = re.compile(r\"[^a-zA-Z0-9_]\")\n\n def __init__(self,\n *,\n name: str,\n config: dict,\n global_config: Optional[dict] = None,\n tunables: Optional[TunableGroups] = None,\n service: Optional[Service] = None):\n \"\"\"\n Create a new environment for script execution.\n\n Parameters\n ----------\n name: str\n Human-readable name of the environment.\n config : dict\n Free-format dictionary that contains the benchmark environment\n configuration. Each config must have at least the `tunable_params`\n and the `const_args` sections. It must also have at least one of\n the following parameters: {`setup`, `run`, `teardown`}.\n Additional parameters:\n * `shell_env_params` - an array of parameters to pass to the script\n as shell environment variables, and\n * `shell_env_params_rename` - a dictionary of {to: from} mappings\n of the script parameters. If not specified, replace all\n non-alphanumeric characters with underscores.\n If neither `shell_env_params` nor `shell_env_params_rename` are specified,\n *no* additional shell parameters will be passed to the script.\n global_config : dict\n Free-format dictionary of global parameters (e.g., security credentials)\n to be mixed in into the \"const_args\" section of the local config.\n tunables : TunableGroups\n A collection of tunable parameters for *all* environments.\n service: Service\n An optional service object (e.g., providing methods to\n deploy or reboot a VM, etc.).\n \"\"\"\n super().__init__(name=name, config=config, global_config=global_config,\n tunables=tunables, service=service)\n\n self._script_setup = self.config.get(\"setup\")\n self._script_run = self.config.get(\"run\")\n self._script_teardown = self.config.get(\"teardown\")\n\n self._shell_env_params: Iterable[str] = self.config.get(\"shell_env_params\", [])\n self._shell_env_params_rename: Dict[str, str] = self.config.get(\"shell_env_params_rename\", {})\n\n results_stdout_pattern = self.config.get(\"results_stdout_pattern\")\n self._results_stdout_pattern: Optional[re.Pattern[str]] = \\\n re.compile(results_stdout_pattern) if results_stdout_pattern else None\n\n def _get_env_params(self) -> Dict[str, str]:\n \"\"\"\n Get the *shell* environment parameters to be passed to the script.\n\n Returns\n -------\n env_params : Dict[str, str]\n Parameters to pass as *shell* environment variables into the script.\n This is usually a subset of `_params` with some possible conversions.\n \"\"\"\n rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._shell_env_params}\n rename.update(self._shell_env_params_rename)\n return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()}\n\n def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]:\n \"\"\"\n Extract the results from the stdout of the script.\n\n Parameters\n ----------\n stdout : str\n The stdout of the script.\n\n Returns\n -------\n results : Dict[str, TunableValue]\n A 
dictionary of results extracted from the stdout.\n \"\"\"\n if not self._results_stdout_pattern:\n return {}\n _LOG.debug(\"Extract regex: '%s' from: '%s'\", self._results_stdout_pattern, stdout)\n return {key: try_parse_val(val) for (key, val) in self._results_stdout_pattern.findall(stdout)}\n", "path": "mlos_bench/mlos_bench/environments/script_env.py"}]}
1,653
599
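The behavioral change is easiest to see in isolation: after the patch, only allow-listed parameters are exported, with non-identifier characters sanitized to underscores and explicit renames layered on top, whereas before the patch an empty config exported everything. A condensed sketch of the patched logic (function and argument names are illustrative):

```python
import re

_RE_INVALID = re.compile(r"[^a-zA-Z0-9_]")

def shell_env(params, allow=(), rename=None):
    # Build {exported_name: source_key}: sanitize allow-listed keys,
    # then apply explicit renames on top.
    mapping = {_RE_INVALID.sub("_", key): key for key in allow}
    mapping.update(rename or {})
    return {dst: str(params[src]) for dst, src in mapping.items()}

params = {"vm.size": "Standard_B2s", "db_password": "hunter2"}
shell_env(params, allow=["vm.size"])
# -> {'vm_size': 'Standard_B2s'}; db_password is no longer exported by default
```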
gh_patches_debug_4170
rasdani/github-patches
git_diff
google__flax-1423
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- flax.core.FrozenDict copy broken when the new dictionary contains some names Provide as much information as possible. At least, this should include a description of your issue and steps to reproduce the problem. If possible also provide a summary of what steps or workarounds you have already tried. ### Problem you have encountered: Adding a dictionary which contains 'cls' key fails, ![image](https://user-images.githubusercontent.com/6980056/125225456-33f22600-e284-11eb-8b32-8ef1bae5ac3f.png) ### What you expected to happen: expected to update the value of 'cls' key. ### Logs, error messages, etc: ### Steps to reproduce: ``` flax.core.FrozenDict({}).copy({'cls': 'abc'}) ``` One way to workaround this is to manually create concatenated FrozenDict instead of using `copy`. ``` flax.core.FrozenDict({**flax.core.FrozenDict({'def': '123', 'cls': 22}), **{'cls': 'abc'}}) ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `flax/core/frozen_dict.py` Content: ``` 1 # Copyright 2021 The Flax Authors. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """Frozen Dictionary.""" 16 17 from typing import Any, TypeVar, Mapping, Dict, Tuple 18 19 from flax import serialization 20 import jax 21 22 23 K = TypeVar('K') 24 V = TypeVar('V') 25 26 27 def _indent(x, num_spaces): 28 indent_str = ' ' * num_spaces 29 lines = x.split('\n') 30 assert lines[-1] == '' 31 # skip the final line because it's empty and should not be indented. 
32 return '\n'.join(indent_str + line for line in lines[:-1]) + '\n' 33 34 35 @jax.tree_util.register_pytree_node_class 36 class FrozenDict(Mapping[K, V]): 37 """An immutable variant of the Python dict.""" 38 __slots__ = ('_dict', '_hash') 39 40 def __init__(self, *args, __unsafe_skip_copy__=False, **kwargs): 41 # make sure the dict is as 42 xs = dict(*args, **kwargs) 43 if __unsafe_skip_copy__: 44 self._dict = xs 45 else: 46 self._dict = _prepare_freeze(xs) 47 48 self._hash = None 49 50 def __getitem__(self, key): 51 v = self._dict[key] 52 if isinstance(v, dict): 53 return FrozenDict(v) 54 return v 55 56 def __setitem__(self, key, value): 57 raise ValueError('FrozenDict is immutable.') 58 59 def __contains__(self, key): 60 return key in self._dict 61 62 def __iter__(self): 63 return iter(self._dict) 64 65 def __len__(self): 66 return len(self._dict) 67 68 def __repr__(self): 69 return self.pretty_repr() 70 71 def __reduce__(self): 72 return FrozenDict, (self.unfreeze(),) 73 74 def pretty_repr(self, num_spaces=4): 75 """Returns an indented representation of the nested dictionary.""" 76 def pretty_dict(x): 77 if not isinstance(x, dict): 78 return repr(x) 79 rep = '' 80 for key, val in x.items(): 81 rep += f'{key}: {pretty_dict(val)},\n' 82 if rep: 83 return '{\n' + _indent(rep, num_spaces) + '}' 84 else: 85 return '{}' 86 return f'FrozenDict({pretty_dict(self._dict)})' 87 88 def __hash__(self): 89 if self._hash is None: 90 h = 0 91 for key, value in self.items(): 92 h ^= hash((key, value)) 93 self._hash = h 94 return self._hash 95 96 def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]': 97 """Create a new FrozenDict with additional or replaced entries.""" 98 return type(self)(self, **unfreeze(add_or_replace)) 99 100 def items(self): 101 for key in self._dict: 102 yield (key, self[key]) 103 104 def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]: 105 """Create a new FrozenDict where one entry is removed. 106 107 Example:: 108 109 state, params = variables.pop('params') 110 111 Args: 112 key: the key to remove from the dict 113 Returns: 114 A pair with the new FrozenDict and the removed value. 115 """ 116 value = self[key] 117 new_dict = dict(self._dict) 118 new_dict.pop(key) 119 new_self = type(self)(new_dict) 120 return new_self, value 121 122 def unfreeze(self) -> Dict[K, V]: 123 """Unfreeze this FrozenDict. 124 125 Returns: 126 An unfrozen version of this FrozenDict instance. 127 """ 128 return unfreeze(self) 129 130 def tree_flatten(self) -> Tuple[Tuple[Dict[Any, Any]], Tuple[()]]: 131 """Flattens this FrozenDict. 132 133 Returns: 134 A flattened version of this FrozenDict instance. 135 """ 136 return (self._dict,), () 137 138 @classmethod 139 def tree_unflatten(cls, _, data): 140 # data is already deep copied due to tree map mechanism 141 # we can skip the deep copy in the constructor 142 return cls(*data, __unsafe_skip_copy__=True) 143 144 145 def _prepare_freeze(xs: Any) -> Any: 146 """Deep copy unfrozen dicts to make the dictionary FrozenDict safe.""" 147 if isinstance(xs, FrozenDict): 148 # we can safely ref share the internal state of a FrozenDict 149 # because it is immutable. 150 return xs._dict # pylint: disable=protected-access 151 if not isinstance(xs, dict): 152 # return a leaf as is. 153 return xs 154 # recursively copy dictionary to avoid ref sharing 155 return {key: _prepare_freeze(val) for key, val in xs.items()} 156 157 158 def freeze(xs: Mapping[Any, Any]) -> FrozenDict[Any, Any]: 159 """Freeze a nested dict. 
160 161 Makes a nested `dict` immutable by transforming it into `FrozenDict`. 162 """ 163 return FrozenDict(xs) 164 165 166 def unfreeze(x: FrozenDict[Any, Any]) -> Dict[Any, Any]: 167 """Unfreeze a FrozenDict. 168 169 Makes a mutable copy of a `FrozenDict` mutable by transforming 170 it into (nested) dict. 171 """ 172 if isinstance(x, FrozenDict): 173 # deep copy internal state of a FrozenDict 174 # the dict branch would also work here but 175 # it is much less performant because jax.tree_map 176 # uses an optimized C implementation. 177 return jax.tree_map(lambda y: y, x._dict) 178 elif isinstance(x, dict): 179 ys = {} 180 for key, value in x.items(): 181 ys[key] = unfreeze(value) 182 return ys 183 else: 184 return x 185 186 187 def _frozen_dict_state_dict(xs): 188 return {key: serialization.to_state_dict(value) for key, value in xs.items()} 189 190 191 def _restore_frozen_dict(xs, states): 192 return FrozenDict( 193 {key: serialization.from_state_dict(value, states[key]) 194 for key, value in xs.items()}) 195 196 197 serialization.register_serialization_state( 198 FrozenDict, 199 _frozen_dict_state_dict, 200 _restore_frozen_dict) 201 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py --- a/flax/core/frozen_dict.py +++ b/flax/core/frozen_dict.py @@ -95,7 +95,7 @@ def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]': """Create a new FrozenDict with additional or replaced entries.""" - return type(self)(self, **unfreeze(add_or_replace)) + return type(self)({**self, **unfreeze(add_or_replace)}) def items(self): for key in self._dict:
{"golden_diff": "diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py\n--- a/flax/core/frozen_dict.py\n+++ b/flax/core/frozen_dict.py\n@@ -95,7 +95,7 @@\n \n def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':\n \"\"\"Create a new FrozenDict with additional or replaced entries.\"\"\"\n- return type(self)(self, **unfreeze(add_or_replace))\n+ return type(self)({**self, **unfreeze(add_or_replace)})\n \n def items(self):\n for key in self._dict:\n", "issue": "flax.core.FrozenDict copy broken when the new dictionary contains some names\nProvide as much information as possible. At least, this should include a description of your issue and steps to reproduce the problem. If possible also provide a summary of what steps or workarounds you have already tried.\r\n\r\n### Problem you have encountered:\r\nAdding a dictionary which contains 'cls' key fails, \r\n![image](https://user-images.githubusercontent.com/6980056/125225456-33f22600-e284-11eb-8b32-8ef1bae5ac3f.png)\r\n\r\n### What you expected to happen:\r\nexpected to update the value of 'cls' key. \r\n\r\n### Logs, error messages, etc:\r\n\r\n\r\n\r\n### Steps to reproduce:\r\n\r\n```\r\nflax.core.FrozenDict({}).copy({'cls': 'abc'})\r\n```\r\n\r\nOne way to workaround this is to manually create concatenated FrozenDict instead of using `copy`.\r\n```\r\nflax.core.FrozenDict({**flax.core.FrozenDict({'def': '123', 'cls': 22}), **{'cls': 'abc'}})\r\n```\n", "before_files": [{"content": "# Copyright 2021 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Frozen Dictionary.\"\"\"\n\nfrom typing import Any, TypeVar, Mapping, Dict, Tuple\n\nfrom flax import serialization\nimport jax\n\n\nK = TypeVar('K')\nV = TypeVar('V')\n\n\ndef _indent(x, num_spaces):\n indent_str = ' ' * num_spaces\n lines = x.split('\\n')\n assert lines[-1] == ''\n # skip the final line because it's empty and should not be indented.\n return '\\n'.join(indent_str + line for line in lines[:-1]) + '\\n'\n\n\[email protected]_util.register_pytree_node_class\nclass FrozenDict(Mapping[K, V]):\n \"\"\"An immutable variant of the Python dict.\"\"\"\n __slots__ = ('_dict', '_hash')\n\n def __init__(self, *args, __unsafe_skip_copy__=False, **kwargs):\n # make sure the dict is as\n xs = dict(*args, **kwargs)\n if __unsafe_skip_copy__:\n self._dict = xs\n else:\n self._dict = _prepare_freeze(xs)\n\n self._hash = None\n\n def __getitem__(self, key):\n v = self._dict[key]\n if isinstance(v, dict):\n return FrozenDict(v)\n return v\n\n def __setitem__(self, key, value):\n raise ValueError('FrozenDict is immutable.')\n\n def __contains__(self, key):\n return key in self._dict\n\n def __iter__(self):\n return iter(self._dict)\n\n def __len__(self):\n return len(self._dict)\n\n def __repr__(self):\n return self.pretty_repr()\n\n def __reduce__(self):\n return FrozenDict, (self.unfreeze(),)\n\n def pretty_repr(self, num_spaces=4):\n \"\"\"Returns an indented representation of the nested dictionary.\"\"\"\n def pretty_dict(x):\n if not 
isinstance(x, dict):\n return repr(x)\n rep = ''\n for key, val in x.items():\n rep += f'{key}: {pretty_dict(val)},\\n'\n if rep:\n return '{\\n' + _indent(rep, num_spaces) + '}'\n else:\n return '{}'\n return f'FrozenDict({pretty_dict(self._dict)})'\n\n def __hash__(self):\n if self._hash is None:\n h = 0\n for key, value in self.items():\n h ^= hash((key, value))\n self._hash = h\n return self._hash\n\n def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':\n \"\"\"Create a new FrozenDict with additional or replaced entries.\"\"\"\n return type(self)(self, **unfreeze(add_or_replace))\n\n def items(self):\n for key in self._dict:\n yield (key, self[key])\n\n def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]:\n \"\"\"Create a new FrozenDict where one entry is removed.\n\n Example::\n\n state, params = variables.pop('params')\n\n Args:\n key: the key to remove from the dict\n Returns:\n A pair with the new FrozenDict and the removed value.\n \"\"\"\n value = self[key]\n new_dict = dict(self._dict)\n new_dict.pop(key)\n new_self = type(self)(new_dict)\n return new_self, value\n\n def unfreeze(self) -> Dict[K, V]:\n \"\"\"Unfreeze this FrozenDict.\n\n Returns:\n An unfrozen version of this FrozenDict instance.\n \"\"\"\n return unfreeze(self)\n\n def tree_flatten(self) -> Tuple[Tuple[Dict[Any, Any]], Tuple[()]]:\n \"\"\"Flattens this FrozenDict.\n\n Returns:\n A flattened version of this FrozenDict instance.\n \"\"\"\n return (self._dict,), ()\n\n @classmethod\n def tree_unflatten(cls, _, data):\n # data is already deep copied due to tree map mechanism\n # we can skip the deep copy in the constructor\n return cls(*data, __unsafe_skip_copy__=True)\n\n\ndef _prepare_freeze(xs: Any) -> Any:\n \"\"\"Deep copy unfrozen dicts to make the dictionary FrozenDict safe.\"\"\"\n if isinstance(xs, FrozenDict):\n # we can safely ref share the internal state of a FrozenDict\n # because it is immutable.\n return xs._dict # pylint: disable=protected-access\n if not isinstance(xs, dict):\n # return a leaf as is.\n return xs\n # recursively copy dictionary to avoid ref sharing\n return {key: _prepare_freeze(val) for key, val in xs.items()}\n\n\ndef freeze(xs: Mapping[Any, Any]) -> FrozenDict[Any, Any]:\n \"\"\"Freeze a nested dict.\n\n Makes a nested `dict` immutable by transforming it into `FrozenDict`.\n \"\"\"\n return FrozenDict(xs)\n\n\ndef unfreeze(x: FrozenDict[Any, Any]) -> Dict[Any, Any]:\n \"\"\"Unfreeze a FrozenDict.\n\n Makes a mutable copy of a `FrozenDict` mutable by transforming\n it into (nested) dict.\n \"\"\"\n if isinstance(x, FrozenDict):\n # deep copy internal state of a FrozenDict\n # the dict branch would also work here but\n # it is much less performant because jax.tree_map\n # uses an optimized C implementation.\n return jax.tree_map(lambda y: y, x._dict)\n elif isinstance(x, dict):\n ys = {}\n for key, value in x.items():\n ys[key] = unfreeze(value)\n return ys\n else:\n return x\n\n\ndef _frozen_dict_state_dict(xs):\n return {key: serialization.to_state_dict(value) for key, value in xs.items()}\n\n\ndef _restore_frozen_dict(xs, states):\n return FrozenDict(\n {key: serialization.from_state_dict(value, states[key])\n for key, value in xs.items()})\n\n\nserialization.register_serialization_state(\n FrozenDict,\n _frozen_dict_state_dict,\n _restore_frozen_dict)\n", "path": "flax/core/frozen_dict.py"}], "after_files": [{"content": "# Copyright 2021 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file 
except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Frozen Dictionary.\"\"\"\n\nfrom typing import Any, TypeVar, Mapping, Dict, Tuple\n\nfrom flax import serialization\nimport jax\n\n\nK = TypeVar('K')\nV = TypeVar('V')\n\n\ndef _indent(x, num_spaces):\n indent_str = ' ' * num_spaces\n lines = x.split('\\n')\n assert lines[-1] == ''\n # skip the final line because it's empty and should not be indented.\n return '\\n'.join(indent_str + line for line in lines[:-1]) + '\\n'\n\n\[email protected]_util.register_pytree_node_class\nclass FrozenDict(Mapping[K, V]):\n \"\"\"An immutable variant of the Python dict.\"\"\"\n __slots__ = ('_dict', '_hash')\n\n def __init__(self, *args, __unsafe_skip_copy__=False, **kwargs):\n # make sure the dict is as\n xs = dict(*args, **kwargs)\n if __unsafe_skip_copy__:\n self._dict = xs\n else:\n self._dict = _prepare_freeze(xs)\n\n self._hash = None\n\n def __getitem__(self, key):\n v = self._dict[key]\n if isinstance(v, dict):\n return FrozenDict(v)\n return v\n\n def __setitem__(self, key, value):\n raise ValueError('FrozenDict is immutable.')\n\n def __contains__(self, key):\n return key in self._dict\n\n def __iter__(self):\n return iter(self._dict)\n\n def __len__(self):\n return len(self._dict)\n\n def __repr__(self):\n return self.pretty_repr()\n\n def __reduce__(self):\n return FrozenDict, (self.unfreeze(),)\n\n def pretty_repr(self, num_spaces=4):\n \"\"\"Returns an indented representation of the nested dictionary.\"\"\"\n def pretty_dict(x):\n if not isinstance(x, dict):\n return repr(x)\n rep = ''\n for key, val in x.items():\n rep += f'{key}: {pretty_dict(val)},\\n'\n if rep:\n return '{\\n' + _indent(rep, num_spaces) + '}'\n else:\n return '{}'\n return f'FrozenDict({pretty_dict(self._dict)})'\n\n def __hash__(self):\n if self._hash is None:\n h = 0\n for key, value in self.items():\n h ^= hash((key, value))\n self._hash = h\n return self._hash\n\n def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':\n \"\"\"Create a new FrozenDict with additional or replaced entries.\"\"\"\n return type(self)({**self, **unfreeze(add_or_replace)})\n\n def items(self):\n for key in self._dict:\n yield (key, self[key])\n\n def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]:\n \"\"\"Create a new FrozenDict where one entry is removed.\n\n Example::\n\n state, params = variables.pop('params')\n\n Args:\n key: the key to remove from the dict\n Returns:\n A pair with the new FrozenDict and the removed value.\n \"\"\"\n value = self[key]\n new_dict = dict(self._dict)\n new_dict.pop(key)\n new_self = type(self)(new_dict)\n return new_self, value\n\n def unfreeze(self) -> Dict[K, V]:\n \"\"\"Unfreeze this FrozenDict.\n\n Returns:\n An unfrozen version of this FrozenDict instance.\n \"\"\"\n return unfreeze(self)\n\n def tree_flatten(self) -> Tuple[Tuple[Dict[Any, Any]], Tuple[()]]:\n \"\"\"Flattens this FrozenDict.\n\n Returns:\n A flattened version of this FrozenDict instance.\n \"\"\"\n return (self._dict,), ()\n\n @classmethod\n def tree_unflatten(cls, _, data):\n # data is already deep copied due to tree map mechanism\n # we can skip the deep copy 
in the constructor\n return cls(*data, __unsafe_skip_copy__=True)\n\n\ndef _prepare_freeze(xs: Any) -> Any:\n \"\"\"Deep copy unfrozen dicts to make the dictionary FrozenDict safe.\"\"\"\n if isinstance(xs, FrozenDict):\n # we can safely ref share the internal state of a FrozenDict\n # because it is immutable.\n return xs._dict # pylint: disable=protected-access\n if not isinstance(xs, dict):\n # return a leaf as is.\n return xs\n # recursively copy dictionary to avoid ref sharing\n return {key: _prepare_freeze(val) for key, val in xs.items()}\n\n\ndef freeze(xs: Mapping[Any, Any]) -> FrozenDict[Any, Any]:\n \"\"\"Freeze a nested dict.\n\n Makes a nested `dict` immutable by transforming it into `FrozenDict`.\n \"\"\"\n return FrozenDict(xs)\n\n\ndef unfreeze(x: FrozenDict[Any, Any]) -> Dict[Any, Any]:\n \"\"\"Unfreeze a FrozenDict.\n\n Makes a mutable copy of a `FrozenDict` mutable by transforming\n it into (nested) dict.\n \"\"\"\n if isinstance(x, FrozenDict):\n # deep copy internal state of a FrozenDict\n # the dict branch would also work here but\n # it is much less performant because jax.tree_map\n # uses an optimized C implementation.\n return jax.tree_map(lambda y: y, x._dict)\n elif isinstance(x, dict):\n ys = {}\n for key, value in x.items():\n ys[key] = unfreeze(value)\n return ys\n else:\n return x\n\n\ndef _frozen_dict_state_dict(xs):\n return {key: serialization.to_state_dict(value) for key, value in xs.items()}\n\n\ndef _restore_frozen_dict(xs, states):\n return FrozenDict(\n {key: serialization.from_state_dict(value, states[key])\n for key, value in xs.items()})\n\n\nserialization.register_serialization_state(\n FrozenDict,\n _frozen_dict_state_dict,\n _restore_frozen_dict)\n", "path": "flax/core/frozen_dict.py"}]}
2,460
135
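The frozen_dict.py record above centres on one pattern: an immutable `Mapping` whose update operations (`copy`, `pop`) return new instances instead of mutating in place. A minimal standalone sketch of that pattern — plain standard-library Python, no flax or jax imports, with class and variable names chosen here purely for illustration:

```python
from collections.abc import Mapping

class MiniFrozenDict(Mapping):
    """Toy immutable dict mirroring the FrozenDict pattern in the record above."""

    def __init__(self, *args, **kwargs):
        self._dict = dict(*args, **kwargs)

    def __getitem__(self, key):
        value = self._dict[key]
        # Re-wrap nested dicts so callers can never mutate shared state.
        return MiniFrozenDict(value) if isinstance(value, dict) else value

    def __iter__(self):
        return iter(self._dict)

    def __len__(self):
        return len(self._dict)

    def copy(self, add_or_replace):
        # Functional update: build and return a fresh frozen mapping.
        return MiniFrozenDict({**self._dict, **dict(add_or_replace)})

    def pop(self, key):
        rest = dict(self._dict)
        value = rest.pop(key)
        return MiniFrozenDict(rest), value

params = MiniFrozenDict({'dense': {'w': 1.0}, 'bias': 0.5})
rest, bias = params.pop('bias')
print(rest['dense']['w'], bias)                            # 1.0 0.5
print(params['bias'], params.copy({'bias': 0.0})['bias'])  # 0.5 0.0
```

Inheriting from `collections.abc.Mapping` supplies `__contains__`, `keys`, `items`, and equality for free; the real class additionally registers itself as a JAX pytree and caches a hash, which this sketch deliberately omits.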
gh_patches_debug_7557
rasdani/github-patches
git_diff
PlasmaPy__PlasmaPy-123
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- ModuleNotFoundError: No module named 'plasmapy.classes' on plasmapy import On importing freshly installed plasmapy into a new environment: (plasmapy) [~]$ python Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:51:32) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import plasmapy Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/dominik/.anaconda3/envs/plasmapy/lib/python3.6/site-packages/plasmapy/__init__.py", line 8, in <module> from .classes import Plasma ModuleNotFoundError: No module named 'plasmapy.classes' The goal of this one is being able to import plasmapy. At all. The issue likely lies in `plasmapy/__init__.py`. To quote @cadair 's words of encouragement on this bugfixing journey, *packaging is a special kind of hell*. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 from setuptools import setup 2 3 4 # Package metadata 5 metadata = {} 6 with open('plasmapy/_metadata.py', 'r') as metadata_file: 7 exec(metadata_file.read(), metadata) 8 9 # Requirements 10 with open('requirements/base.txt', 'r') as req_file: 11 requirements = req_file.read().splitlines() 12 13 setup(name=metadata['name'], 14 version=metadata['version'], 15 description="Python package for plasma physics", 16 requires=requirements, 17 install_requires=requirements, 18 provides=[metadata['name']], 19 author=metadata['author'], 20 author_email="[email protected]", # until we get an email address 21 license="BSD", 22 url="https://github.com/PlasmaPy/PlasmaPy", # until we make a webpage 23 long_description=metadata['description'], 24 keywords=['plasma', 'plasma physics', 'science'], 25 classifiers=[ 26 'Intended Audience :: Science/Research', 27 'License :: OSI Approved :: BSD License', 28 'Operating System :: OS Independent', 29 'Programming Language :: Python :: 3 :: Only', 30 'Programming Language :: Python :: 3.6', 31 'Topic :: Scientific/Engineering :: Physics', 32 'Topic :: Scientific/Engineering :: Astronomy', 33 'Development Status :: 2 - Pre-Alpha', 34 ], 35 packages=["plasmapy"], 36 zip_safe=False, 37 use_2to3=False, 38 python_requires='>=3.6', 39 ) 40 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1,4 +1,4 @@ -from setuptools import setup +from setuptools import setup, find_packages # Package metadata @@ -32,7 +32,7 @@ 'Topic :: Scientific/Engineering :: Astronomy', 'Development Status :: 2 - Pre-Alpha', ], - packages=["plasmapy"], + packages=find_packages(), zip_safe=False, use_2to3=False, python_requires='>=3.6',
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,4 +1,4 @@\n-from setuptools import setup\n+from setuptools import setup, find_packages\n \n \n # Package metadata\n@@ -32,7 +32,7 @@\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Development Status :: 2 - Pre-Alpha',\n ],\n- packages=[\"plasmapy\"],\n+ packages=find_packages(),\n zip_safe=False,\n use_2to3=False,\n python_requires='>=3.6',\n", "issue": "ModuleNotFoundError: No module named 'plasmapy.classes' on plasmapy import\nOn importing freshly installed plasmapy into a new environment:\r\n\r\n (plasmapy) [~]$ python\r\n Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:51:32) \r\n [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux\r\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n >>> import plasmapy\r\n Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/dominik/.anaconda3/envs/plasmapy/lib/python3.6/site-packages/plasmapy/__init__.py\", line 8, in <module>\r\n from .classes import Plasma\r\n ModuleNotFoundError: No module named 'plasmapy.classes'\r\n\r\nThe goal of this one is being able to import plasmapy. At all.\r\n\r\nThe issue likely lies in `plasmapy/__init__.py`. \r\n\r\nTo quote @cadair 's words of encouragement on this bugfixing journey, *packaging is a special kind of hell*. \n", "before_files": [{"content": "from setuptools import setup\n\n\n# Package metadata\nmetadata = {}\nwith open('plasmapy/_metadata.py', 'r') as metadata_file:\n exec(metadata_file.read(), metadata)\n\n# Requirements\nwith open('requirements/base.txt', 'r') as req_file:\n requirements = req_file.read().splitlines()\n\nsetup(name=metadata['name'],\n version=metadata['version'],\n description=\"Python package for plasma physics\",\n requires=requirements,\n install_requires=requirements,\n provides=[metadata['name']],\n author=metadata['author'],\n author_email=\"[email protected]\", # until we get an email address\n license=\"BSD\",\n url=\"https://github.com/PlasmaPy/PlasmaPy\", # until we make a webpage\n long_description=metadata['description'],\n keywords=['plasma', 'plasma physics', 'science'],\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering :: Physics',\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Development Status :: 2 - Pre-Alpha',\n ],\n packages=[\"plasmapy\"],\n zip_safe=False,\n use_2to3=False,\n python_requires='>=3.6',\n )\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\n\n\n# Package metadata\nmetadata = {}\nwith open('plasmapy/_metadata.py', 'r') as metadata_file:\n exec(metadata_file.read(), metadata)\n\n# Requirements\nwith open('requirements/base.txt', 'r') as req_file:\n requirements = req_file.read().splitlines()\n\nsetup(name=metadata['name'],\n version=metadata['version'],\n description=\"Python package for plasma physics\",\n requires=requirements,\n install_requires=requirements,\n provides=[metadata['name']],\n author=metadata['author'],\n author_email=\"[email protected]\", # until we get an email address\n license=\"BSD\",\n url=\"https://github.com/PlasmaPy/PlasmaPy\", # until we make a webpage\n long_description=metadata['description'],\n keywords=['plasma', 'plasma physics', 'science'],\n classifiers=[\n 
'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering :: Physics',\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Development Status :: 2 - Pre-Alpha',\n ],\n packages=find_packages(),\n zip_safe=False,\n use_2to3=False,\n python_requires='>=3.6',\n )\n", "path": "setup.py"}]}
928
122
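The golden diff for this record is a one-call packaging fix: `packages=["plasmapy"]` installs only the top-level directory, so subpackages like `plasmapy.classes` never reach site-packages and the first `from .classes import Plasma` fails. A minimal sketch of the corrected `setup()` — the metadata values below are placeholders, not PlasmaPy's real ones:

```python
from setuptools import setup, find_packages

setup(
    name='plasmapy',
    version='0.0.0',  # placeholder; the project reads this from _metadata.py
    # find_packages() walks the source tree and returns every directory
    # containing an __init__.py, e.g. ['plasmapy', 'plasmapy.classes', ...],
    # instead of the bare ['plasmapy'] that caused the ModuleNotFoundError.
    packages=find_packages(),
)
```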
gh_patches_debug_39325
rasdani/github-patches
git_diff
cowrie__cowrie-763
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Implement regular expressions in userdb.txt The file that contains the combinations of usernames and passwords that Cowrie accepts from the attackers (`data/userdb.txt`) currently handles 3 special characters - `#`, which means a comment till the end of the line, `!`, which means negation, and `*`, which means "anything" (in either the username or the password field). Would it be possible to allow any regular expression instead of the special characters '!' and '*'? I've seen attackers use variations of the password "honeypot" to determine that they are dealing with a honeypot and refuse to conduct their usual attack. Examples include "Honeypot321" (309 times), "honeypot" (6 times), and "nologinissahoneypotlmao" (once) over a 17-month period. I could, of course, explicitly block just these 3 passwords, but I'd like to disallow any password with the word "honeypot" (case-insensitive) in it. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `cowrie/core/auth.py` Content: ``` 1 # Copyright (c) 2009-2014 Upi Tamminen <[email protected]> 2 # See the COPYRIGHT file for more information 3 4 """ 5 This module contains ... 6 """ 7 8 from __future__ import division, absolute_import 9 10 import json 11 from os import path 12 from random import randint 13 14 from twisted.python import log 15 16 from cowrie.core.config import CONFIG 17 18 class UserDB(object): 19 """ 20 By Walter de Jong <[email protected]> 21 """ 22 23 def __init__(self): 24 self.userdb = [] 25 self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path') 26 self.load() 27 28 29 def load(self): 30 """ 31 load the user db 32 """ 33 34 with open(self.userdb_file, 'rb') as f: 35 while True: 36 rawline = f.readline() 37 if not rawline: 38 break 39 40 line = rawline.strip() 41 if not line: 42 continue 43 44 if line.startswith(b'#'): 45 continue 46 47 (login, uid, passwd) = line.split(b':', 2) 48 49 self.userdb.append((login, passwd)) 50 51 52 def save(self): 53 """ 54 save the user db 55 """ 56 57 # Note: this is subject to races between cowrie instances, but hey ... 58 with open(self.userdb_file, 'w') as f: 59 for (login, passwd) in self.userdb: 60 f.write('%s:x:%s\n' % (login, passwd)) 61 62 63 def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'): 64 """ 65 check entered username/password against database 66 note that it allows multiple passwords for a single username 67 it also knows wildcard '*' for any username or password 68 prepend password with ! to explicitly deny it. Denials must come before wildcards 69 """ 70 for (login, passwd) in self.userdb: 71 # Explicitly fail on !password 72 if login == thelogin and passwd == b'!' + thepasswd: 73 return False 74 if login in (thelogin, b'*') and passwd in (thepasswd, b'*'): 75 return True 76 return False 77 78 79 def user_password_exists(self, thelogin, thepasswd): 80 """ 81 """ 82 for (login, passwd) in self.userdb: 83 if login == thelogin and passwd == thepasswd: 84 return True 85 return False 86 87 88 def adduser(self, login, passwd): 89 """ 90 """ 91 if self.user_password_exists(login, passwd): 92 return 93 self.userdb.append((login, passwd)) 94 self.save() 95 96 97 98 class AuthRandom(object): 99 """ 100 Alternative class that defines the checklogin() method. 101 Users will be authenticated after a random number of attempts. 
102 """ 103 104 def __init__(self): 105 # Default values 106 self.mintry, self.maxtry, self.maxcache = 2, 5, 10 107 108 # Are there auth_class parameters? 109 if CONFIG.has_option('honeypot', 'auth_class_parameters'): 110 parameters = CONFIG.get('honeypot', 'auth_class_parameters') 111 parlist = parameters.split(',') 112 if len(parlist) == 3: 113 self.mintry = int(parlist[0]) 114 self.maxtry = int(parlist[1]) 115 self.maxcache = int(parlist[2]) 116 117 if self.maxtry < self.mintry: 118 self.maxtry = self.mintry + 1 119 log.msg('maxtry < mintry, adjusting maxtry to: %d' % (self.maxtry,)) 120 self.uservar = {} 121 self.uservar_file = '%s/uservar.json' % CONFIG.get('honeypot', 'data_path') 122 self.loadvars() 123 124 125 def loadvars(self): 126 """ 127 Load user vars from json file 128 """ 129 if path.isfile(self.uservar_file): 130 with open(self.uservar_file, 'rb') as fp: 131 try: 132 self.uservar = json.load(fp) 133 except: 134 self.uservar = {} 135 136 137 def savevars(self): 138 """ 139 Save the user vars to json file 140 """ 141 data = self.uservar 142 # Note: this is subject to races between cowrie logins 143 with open(self.uservar_file, 'wb') as fp: 144 json.dump(data, fp) 145 146 147 def checklogin(self, thelogin, thepasswd, src_ip): 148 """ 149 Every new source IP will have to try a random number of times between 150 'mintry' and 'maxtry' before succeeding to login. 151 All username/password combinations must be different. 152 The successful login combination is stored with the IP address. 153 Successful username/passwords pairs are also cached for 'maxcache' times. 154 This is to allow access for returns from different IP addresses. 155 Variables are saved in 'uservar.json' in the data directory. 156 """ 157 158 auth = False 159 userpass = thelogin + ':' + thepasswd 160 161 if not 'cache' in self.uservar: 162 self.uservar['cache'] = [] 163 cache = self.uservar['cache'] 164 165 # Check if it is the first visit from src_ip 166 if src_ip not in self.uservar: 167 self.uservar[src_ip] = {} 168 ipinfo = self.uservar[src_ip] 169 ipinfo['try'] = 0 170 if userpass in cache: 171 log.msg('first time for %s, found cached: %s' % (src_ip, userpass)) 172 ipinfo['max'] = 1 173 ipinfo['user'] = thelogin 174 ipinfo['pw'] = thepasswd 175 auth = True 176 self.savevars() 177 return auth 178 else: 179 ipinfo['max'] = randint(self.mintry, self.maxtry) 180 log.msg('first time for %s, need: %d' % (src_ip, ipinfo['max'])) 181 182 ipinfo = self.uservar[src_ip] 183 184 # Fill in missing variables 185 if not 'max' in ipinfo: 186 ipinfo['max'] = randint(self.mintry, self.maxtry) 187 if not 'try' in ipinfo: 188 ipinfo['try'] = 0 189 if not 'tried' in ipinfo: 190 ipinfo['tried'] = [] 191 192 # Don't count repeated username/password combinations 193 if userpass in ipinfo['tried']: 194 log.msg('already tried this combination') 195 self.savevars() 196 return auth 197 198 ipinfo['try'] += 1 199 attempts = ipinfo['try'] 200 need = ipinfo['max'] 201 log.msg('login attempt: %d' % (attempts,)) 202 203 # Check if enough login attempts are tried 204 if attempts < need: 205 self.uservar[src_ip]['tried'].append(userpass) 206 elif attempts == need: 207 ipinfo['user'] = thelogin 208 ipinfo['pw'] = thepasswd 209 cache.append(userpass) 210 if len(cache) > self.maxcache: 211 cache.pop(0) 212 auth = True 213 # Returning after successful login 214 elif attempts > need: 215 if not 'user' in ipinfo or not 'pw' in ipinfo: 216 log.msg('return, but username or password not set!!!') 217 ipinfo['tried'].append(userpass) 218 ipinfo['try'] = 
1 219 else: 220 log.msg('login return, expect: [%s/%s]' % (ipinfo['user'], ipinfo['pw'])) 221 if thelogin == ipinfo['user'] and thepasswd == ipinfo['pw']: 222 auth = True 223 self.savevars() 224 return auth 225 226 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/cowrie/core/auth.py b/cowrie/core/auth.py --- a/cowrie/core/auth.py +++ b/cowrie/core/auth.py @@ -7,6 +7,7 @@ from __future__ import division, absolute_import +import re import json from os import path from random import randint @@ -21,7 +22,7 @@ """ def __init__(self): - self.userdb = [] + self.userdb = {} self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path') self.load() @@ -44,55 +45,50 @@ if line.startswith(b'#'): continue - (login, uid, passwd) = line.split(b':', 2) + login, passwd = re.split(br':\w+:', line, 1) + self.adduser(login, passwd) - self.userdb.append((login, passwd)) + def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'): + for credentials, policy in self.userdb.items(): + login, passwd = credentials - def save(self): - """ - save the user db - """ + if self.match_rule(login, thelogin): + if self.match_rule(passwd, thepasswd): + return policy - # Note: this is subject to races between cowrie instances, but hey ... - with open(self.userdb_file, 'w') as f: - for (login, passwd) in self.userdb: - f.write('%s:x:%s\n' % (login, passwd)) + return False - def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'): - """ - check entered username/password against database - note that it allows multiple passwords for a single username - it also knows wildcard '*' for any username or password - prepend password with ! to explicitly deny it. Denials must come before wildcards - """ - for (login, passwd) in self.userdb: - # Explicitly fail on !password - if login == thelogin and passwd == b'!' + thepasswd: - return False - if login in (thelogin, b'*') and passwd in (thepasswd, b'*'): - return True - return False + def match_rule(self, rule, input): + if type(rule) is bytes: + return rule in [b'*', input] + else: + return bool(rule.search(input)) - def user_password_exists(self, thelogin, thepasswd): + def re_or_str(self, rule): """ + Convert a /.../ type rule to a regex, otherwise return the string as-is """ - for (login, passwd) in self.userdb: - if login == thelogin and passwd == thepasswd: - return True - return False + res = re.match(br'/(.+)/(i)?$', rule) + if res: + return re.compile(res.group(1), re.IGNORECASE if res.group(2) else 0) + + return rule def adduser(self, login, passwd): - """ - """ - if self.user_password_exists(login, passwd): - return - self.userdb.append((login, passwd)) - self.save() + login = self.re_or_str(login) + + if passwd.startswith(b'!'): + policy = False + passwd = passwd[1:] + else: + policy = True + passwd = self.re_or_str(passwd) + self.userdb[(login, passwd)] = policy class AuthRandom(object):
{"golden_diff": "diff --git a/cowrie/core/auth.py b/cowrie/core/auth.py\n--- a/cowrie/core/auth.py\n+++ b/cowrie/core/auth.py\n@@ -7,6 +7,7 @@\n \n from __future__ import division, absolute_import\n \n+import re\n import json\n from os import path\n from random import randint\n@@ -21,7 +22,7 @@\n \"\"\"\n \n def __init__(self):\n- self.userdb = []\n+ self.userdb = {}\n self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path')\n self.load()\n \n@@ -44,55 +45,50 @@\n if line.startswith(b'#'):\n continue\n \n- (login, uid, passwd) = line.split(b':', 2)\n+ login, passwd = re.split(br':\\w+:', line, 1)\n+ self.adduser(login, passwd)\n \n- self.userdb.append((login, passwd))\n \n+ def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):\n+ for credentials, policy in self.userdb.items():\n+ login, passwd = credentials\n \n- def save(self):\n- \"\"\"\n- save the user db\n- \"\"\"\n+ if self.match_rule(login, thelogin):\n+ if self.match_rule(passwd, thepasswd):\n+ return policy\n \n- # Note: this is subject to races between cowrie instances, but hey ...\n- with open(self.userdb_file, 'w') as f:\n- for (login, passwd) in self.userdb:\n- f.write('%s:x:%s\\n' % (login, passwd))\n+ return False\n \n \n- def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):\n- \"\"\"\n- check entered username/password against database\n- note that it allows multiple passwords for a single username\n- it also knows wildcard '*' for any username or password\n- prepend password with ! to explicitly deny it. Denials must come before wildcards\n- \"\"\"\n- for (login, passwd) in self.userdb:\n- # Explicitly fail on !password\n- if login == thelogin and passwd == b'!' + thepasswd:\n- return False\n- if login in (thelogin, b'*') and passwd in (thepasswd, b'*'):\n- return True\n- return False\n+ def match_rule(self, rule, input):\n+ if type(rule) is bytes:\n+ return rule in [b'*', input]\n+ else:\n+ return bool(rule.search(input))\n \n \n- def user_password_exists(self, thelogin, thepasswd):\n+ def re_or_str(self, rule):\n \"\"\"\n+ Convert a /.../ type rule to a regex, otherwise return the string as-is\n \"\"\"\n- for (login, passwd) in self.userdb:\n- if login == thelogin and passwd == thepasswd:\n- return True\n- return False\n+ res = re.match(br'/(.+)/(i)?$', rule)\n+ if res:\n+ return re.compile(res.group(1), re.IGNORECASE if res.group(2) else 0)\n+\n+ return rule\n \n \n def adduser(self, login, passwd):\n- \"\"\"\n- \"\"\"\n- if self.user_password_exists(login, passwd):\n- return\n- self.userdb.append((login, passwd))\n- self.save()\n+ login = self.re_or_str(login)\n+\n+ if passwd.startswith(b'!'):\n+ policy = False\n+ passwd = passwd[1:]\n+ else:\n+ policy = True\n \n+ passwd = self.re_or_str(passwd)\n+ self.userdb[(login, passwd)] = policy\n \n \n class AuthRandom(object):\n", "issue": "Implement regular expressions in userdb.txt\nThe file that contains the combinations of usernames and passwords that Cowrie accepts from the attackers (`data/userdb.txt`) currently handles 3 special characters - `#`, which means a comment till the end of the line, `!`, which means negation, and `*`, which means \"anything\" (in either the username or the password field).\r\n\r\nWould it be possible to allow any regular expression instead of the special characters '!' and '*'?\r\n\r\nI've seen attackers use variations of the password \"honeypot\" to determine that they are dealing with a honeypot and refuse to conduct their usual attack. 
Examples include \"Honeypot321\" (309 times), \"honeypot\" (6 times), and \"nologinissahoneypotlmao\" (once) over a 17-month period.\r\n\r\nI could, of course, explicitly block just these 3 passwords, but I'd like to disallow any password with the word \"honeypot\" (case-insensitive) in it.\n", "before_files": [{"content": "# Copyright (c) 2009-2014 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\n\"\"\"\nThis module contains ...\n\"\"\"\n\nfrom __future__ import division, absolute_import\n\nimport json\nfrom os import path\nfrom random import randint\n\nfrom twisted.python import log\n\nfrom cowrie.core.config import CONFIG\n\nclass UserDB(object):\n \"\"\"\n By Walter de Jong <[email protected]>\n \"\"\"\n\n def __init__(self):\n self.userdb = []\n self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path')\n self.load()\n\n\n def load(self):\n \"\"\"\n load the user db\n \"\"\"\n\n with open(self.userdb_file, 'rb') as f:\n while True:\n rawline = f.readline()\n if not rawline:\n break\n\n line = rawline.strip()\n if not line:\n continue\n\n if line.startswith(b'#'):\n continue\n\n (login, uid, passwd) = line.split(b':', 2)\n\n self.userdb.append((login, passwd))\n\n\n def save(self):\n \"\"\"\n save the user db\n \"\"\"\n\n # Note: this is subject to races between cowrie instances, but hey ...\n with open(self.userdb_file, 'w') as f:\n for (login, passwd) in self.userdb:\n f.write('%s:x:%s\\n' % (login, passwd))\n\n\n def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):\n \"\"\"\n check entered username/password against database\n note that it allows multiple passwords for a single username\n it also knows wildcard '*' for any username or password\n prepend password with ! to explicitly deny it. Denials must come before wildcards\n \"\"\"\n for (login, passwd) in self.userdb:\n # Explicitly fail on !password\n if login == thelogin and passwd == b'!' 
+ thepasswd:\n return False\n if login in (thelogin, b'*') and passwd in (thepasswd, b'*'):\n return True\n return False\n\n\n def user_password_exists(self, thelogin, thepasswd):\n \"\"\"\n \"\"\"\n for (login, passwd) in self.userdb:\n if login == thelogin and passwd == thepasswd:\n return True\n return False\n\n\n def adduser(self, login, passwd):\n \"\"\"\n \"\"\"\n if self.user_password_exists(login, passwd):\n return\n self.userdb.append((login, passwd))\n self.save()\n\n\n\nclass AuthRandom(object):\n \"\"\"\n Alternative class that defines the checklogin() method.\n Users will be authenticated after a random number of attempts.\n \"\"\"\n\n def __init__(self):\n # Default values\n self.mintry, self.maxtry, self.maxcache = 2, 5, 10\n\n # Are there auth_class parameters?\n if CONFIG.has_option('honeypot', 'auth_class_parameters'):\n parameters = CONFIG.get('honeypot', 'auth_class_parameters')\n parlist = parameters.split(',')\n if len(parlist) == 3:\n self.mintry = int(parlist[0])\n self.maxtry = int(parlist[1])\n self.maxcache = int(parlist[2])\n\n if self.maxtry < self.mintry:\n self.maxtry = self.mintry + 1\n log.msg('maxtry < mintry, adjusting maxtry to: %d' % (self.maxtry,))\n self.uservar = {}\n self.uservar_file = '%s/uservar.json' % CONFIG.get('honeypot', 'data_path')\n self.loadvars()\n\n\n def loadvars(self):\n \"\"\"\n Load user vars from json file\n \"\"\"\n if path.isfile(self.uservar_file):\n with open(self.uservar_file, 'rb') as fp:\n try:\n self.uservar = json.load(fp)\n except:\n self.uservar = {}\n\n\n def savevars(self):\n \"\"\"\n Save the user vars to json file\n \"\"\"\n data = self.uservar\n # Note: this is subject to races between cowrie logins\n with open(self.uservar_file, 'wb') as fp:\n json.dump(data, fp)\n\n\n def checklogin(self, thelogin, thepasswd, src_ip):\n \"\"\"\n Every new source IP will have to try a random number of times between\n 'mintry' and 'maxtry' before succeeding to login.\n All username/password combinations must be different.\n The successful login combination is stored with the IP address.\n Successful username/passwords pairs are also cached for 'maxcache' times.\n This is to allow access for returns from different IP addresses.\n Variables are saved in 'uservar.json' in the data directory.\n \"\"\"\n\n auth = False\n userpass = thelogin + ':' + thepasswd\n\n if not 'cache' in self.uservar:\n self.uservar['cache'] = []\n cache = self.uservar['cache']\n\n # Check if it is the first visit from src_ip\n if src_ip not in self.uservar:\n self.uservar[src_ip] = {}\n ipinfo = self.uservar[src_ip]\n ipinfo['try'] = 0\n if userpass in cache:\n log.msg('first time for %s, found cached: %s' % (src_ip, userpass))\n ipinfo['max'] = 1\n ipinfo['user'] = thelogin\n ipinfo['pw'] = thepasswd\n auth = True\n self.savevars()\n return auth\n else:\n ipinfo['max'] = randint(self.mintry, self.maxtry)\n log.msg('first time for %s, need: %d' % (src_ip, ipinfo['max']))\n\n ipinfo = self.uservar[src_ip]\n\n # Fill in missing variables\n if not 'max' in ipinfo:\n ipinfo['max'] = randint(self.mintry, self.maxtry)\n if not 'try' in ipinfo:\n ipinfo['try'] = 0\n if not 'tried' in ipinfo:\n ipinfo['tried'] = []\n\n # Don't count repeated username/password combinations\n if userpass in ipinfo['tried']:\n log.msg('already tried this combination')\n self.savevars()\n return auth\n\n ipinfo['try'] += 1\n attempts = ipinfo['try']\n need = ipinfo['max']\n log.msg('login attempt: %d' % (attempts,))\n\n # Check if enough login attempts are tried\n if attempts < need:\n 
self.uservar[src_ip]['tried'].append(userpass)\n elif attempts == need:\n ipinfo['user'] = thelogin\n ipinfo['pw'] = thepasswd\n cache.append(userpass)\n if len(cache) > self.maxcache:\n cache.pop(0)\n auth = True\n # Returning after successful login\n elif attempts > need:\n if not 'user' in ipinfo or not 'pw' in ipinfo:\n log.msg('return, but username or password not set!!!')\n ipinfo['tried'].append(userpass)\n ipinfo['try'] = 1\n else:\n log.msg('login return, expect: [%s/%s]' % (ipinfo['user'], ipinfo['pw']))\n if thelogin == ipinfo['user'] and thepasswd == ipinfo['pw']:\n auth = True\n self.savevars()\n return auth\n\n", "path": "cowrie/core/auth.py"}], "after_files": [{"content": "# Copyright (c) 2009-2014 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\n\"\"\"\nThis module contains ...\n\"\"\"\n\nfrom __future__ import division, absolute_import\n\nimport re\nimport json\nfrom os import path\nfrom random import randint\n\nfrom twisted.python import log\n\nfrom cowrie.core.config import CONFIG\n\nclass UserDB(object):\n \"\"\"\n By Walter de Jong <[email protected]>\n \"\"\"\n\n def __init__(self):\n self.userdb = {}\n self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path')\n self.load()\n\n\n def load(self):\n \"\"\"\n load the user db\n \"\"\"\n\n with open(self.userdb_file, 'rb') as f:\n while True:\n rawline = f.readline()\n if not rawline:\n break\n\n line = rawline.strip()\n if not line:\n continue\n\n if line.startswith(b'#'):\n continue\n\n login, passwd = re.split(br':\\w+:', line, 1)\n self.adduser(login, passwd)\n\n\n def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):\n for credentials, policy in self.userdb.items():\n login, passwd = credentials\n\n if self.match_rule(login, thelogin):\n if self.match_rule(passwd, thepasswd):\n return policy\n\n return False\n\n\n def match_rule(self, rule, input):\n if type(rule) is bytes:\n return rule in [b'*', input]\n else:\n return bool(rule.search(input))\n\n\n def re_or_str(self, rule):\n \"\"\"\n Convert a /.../ type rule to a regex, otherwise return the string as-is\n \"\"\"\n res = re.match(br'/(.+)/(i)?$', rule)\n if res:\n return re.compile(res.group(1), re.IGNORECASE if res.group(2) else 0)\n\n return rule\n\n\n def adduser(self, login, passwd):\n login = self.re_or_str(login)\n\n if passwd.startswith(b'!'):\n policy = False\n passwd = passwd[1:]\n else:\n policy = True\n\n passwd = self.re_or_str(passwd)\n self.userdb[(login, passwd)] = policy\n\n\nclass AuthRandom(object):\n \"\"\"\n Alternative class that defines the checklogin() method.\n Users will be authenticated after a random number of attempts.\n \"\"\"\n\n def __init__(self):\n # Default values\n self.mintry, self.maxtry, self.maxcache = 2, 5, 10\n\n # Are there auth_class parameters?\n if CONFIG.has_option('honeypot', 'auth_class_parameters'):\n parameters = CONFIG.get('honeypot', 'auth_class_parameters')\n parlist = parameters.split(',')\n if len(parlist) == 3:\n self.mintry = int(parlist[0])\n self.maxtry = int(parlist[1])\n self.maxcache = int(parlist[2])\n\n if self.maxtry < self.mintry:\n self.maxtry = self.mintry + 1\n log.msg('maxtry < mintry, adjusting maxtry to: %d' % (self.maxtry,))\n self.uservar = {}\n self.uservar_file = '%s/uservar.json' % CONFIG.get('honeypot', 'data_path')\n self.loadvars()\n\n\n def loadvars(self):\n \"\"\"\n Load user vars from json file\n \"\"\"\n if path.isfile(self.uservar_file):\n with open(self.uservar_file, 'rb') as fp:\n try:\n self.uservar = json.load(fp)\n 
except:\n self.uservar = {}\n\n\n def savevars(self):\n \"\"\"\n Save the user vars to json file\n \"\"\"\n data = self.uservar\n # Note: this is subject to races between cowrie logins\n with open(self.uservar_file, 'wb') as fp:\n json.dump(data, fp)\n\n\n def checklogin(self, thelogin, thepasswd, src_ip):\n \"\"\"\n Every new source IP will have to try a random number of times between\n 'mintry' and 'maxtry' before succeeding to login.\n All username/password combinations must be different.\n The successful login combination is stored with the IP address.\n Successful username/passwords pairs are also cached for 'maxcache' times.\n This is to allow access for returns from different IP addresses.\n Variables are saved in 'uservar.json' in the data directory.\n \"\"\"\n\n auth = False\n userpass = thelogin + ':' + thepasswd\n\n if not 'cache' in self.uservar:\n self.uservar['cache'] = []\n cache = self.uservar['cache']\n\n # Check if it is the first visit from src_ip\n if src_ip not in self.uservar:\n self.uservar[src_ip] = {}\n ipinfo = self.uservar[src_ip]\n ipinfo['try'] = 0\n if userpass in cache:\n log.msg('first time for %s, found cached: %s' % (src_ip, userpass))\n ipinfo['max'] = 1\n ipinfo['user'] = thelogin\n ipinfo['pw'] = thepasswd\n auth = True\n self.savevars()\n return auth\n else:\n ipinfo['max'] = randint(self.mintry, self.maxtry)\n log.msg('first time for %s, need: %d' % (src_ip, ipinfo['max']))\n\n ipinfo = self.uservar[src_ip]\n\n # Fill in missing variables\n if not 'max' in ipinfo:\n ipinfo['max'] = randint(self.mintry, self.maxtry)\n if not 'try' in ipinfo:\n ipinfo['try'] = 0\n if not 'tried' in ipinfo:\n ipinfo['tried'] = []\n\n # Don't count repeated username/password combinations\n if userpass in ipinfo['tried']:\n log.msg('already tried this combination')\n self.savevars()\n return auth\n\n ipinfo['try'] += 1\n attempts = ipinfo['try']\n need = ipinfo['max']\n log.msg('login attempt: %d' % (attempts,))\n\n # Check if enough login attempts are tried\n if attempts < need:\n self.uservar[src_ip]['tried'].append(userpass)\n elif attempts == need:\n ipinfo['user'] = thelogin\n ipinfo['pw'] = thepasswd\n cache.append(userpass)\n if len(cache) > self.maxcache:\n cache.pop(0)\n auth = True\n # Returning after successful login\n elif attempts > need:\n if not 'user' in ipinfo or not 'pw' in ipinfo:\n log.msg('return, but username or password not set!!!')\n ipinfo['tried'].append(userpass)\n ipinfo['try'] = 1\n else:\n log.msg('login return, expect: [%s/%s]' % (ipinfo['user'], ipinfo['pw']))\n if thelogin == ipinfo['user'] and thepasswd == ipinfo['pw']:\n auth = True\n self.savevars()\n return auth\n\n", "path": "cowrie/core/auth.py"}]}
2,736
846
gh_patches_debug_14227
rasdani/github-patches
git_diff
castorini__pyserini-1626
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Error for SPLADE on-the-fly encoding with pytorch command used: ```bash python -m pyserini.search.lucene --threads 12 --batch-size 128 \ --index msmarco-v1-passage-splade-pp-ed \ --topics msmarco-passage-dev-subset \ --encoder naver/splade-cocondenser-ensembledistil \ --output run.msmarco-v1-passage.splade-pp-ed-pytorch.dev.txt \ --hits 1000 --impact ``` error message: > ... > File "/home/arthur/workplace/pyserini/pyserini/encode/_splade.py", line 28, in encode > raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights) > NameError: name 'batch_token_ids' is not defined --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pyserini/encode/_splade.py` Content: ``` 1 import torch 2 from transformers import AutoModelForMaskedLM, AutoTokenizer 3 import numpy as np 4 5 from pyserini.encode import QueryEncoder 6 7 8 class SpladeQueryEncoder(QueryEncoder): 9 def __init__(self, model_name_or_path, tokenizer_name=None, device='cpu'): 10 self.device = device 11 self.model = AutoModelForMaskedLM.from_pretrained(model_name_or_path) 12 self.model.to(self.device) 13 self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name or model_name_or_path) 14 self.reverse_voc = {v: k for k, v in self.tokenizer.vocab.items()} 15 self.weight_range = 5 16 self.quant_range = 256 17 18 def encode(self, text, max_length=256, **kwargs): 19 inputs = self.tokenizer([text], max_length=max_length, padding='longest', 20 truncation=True, add_special_tokens=True, 21 return_tensors='pt').to(self.device) 22 input_ids = inputs['input_ids'] 23 input_attention = inputs['attention_mask'] 24 batch_logits = self.model(input_ids)['logits'] 25 batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits)) 26 * input_attention.unsqueeze(-1), dim=1) 27 batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy() 28 raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights) 29 return self._get_encoded_query_token_wight_dicts(raw_weights)[0] 30 31 def _output_to_weight_dicts(self, batch_aggregated_logits): 32 to_return = [] 33 for aggregated_logits in batch_aggregated_logits: 34 col = np.nonzero(aggregated_logits)[0] 35 weights = aggregated_logits[col] 36 d = {self.reverse_voc[k]: float(v) for k, v in zip(list(col), list(weights))} 37 to_return.append(d) 38 return to_return 39 40 def _get_encoded_query_token_wight_dicts(self, tok_weights): 41 to_return = [] 42 for _tok_weight in tok_weights: 43 _weights = {} 44 for token, weight in _tok_weight.items(): 45 weight_quanted = round(weight / self.weight_range * self.quant_range) 46 _weights[token] = weight_quanted 47 to_return.append(_weights) 48 return to_return 49 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pyserini/encode/_splade.py b/pyserini/encode/_splade.py --- a/pyserini/encode/_splade.py +++ b/pyserini/encode/_splade.py @@ -25,7 +25,7 @@ batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits)) * input_attention.unsqueeze(-1), dim=1) batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy() - raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights) + raw_weights = self._output_to_weight_dicts(batch_aggregated_logits) return self._get_encoded_query_token_wight_dicts(raw_weights)[0] def _output_to_weight_dicts(self, batch_aggregated_logits):
{"golden_diff": "diff --git a/pyserini/encode/_splade.py b/pyserini/encode/_splade.py\n--- a/pyserini/encode/_splade.py\n+++ b/pyserini/encode/_splade.py\n@@ -25,7 +25,7 @@\n batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits))\n * input_attention.unsqueeze(-1), dim=1)\n batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy()\n- raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)\n+ raw_weights = self._output_to_weight_dicts(batch_aggregated_logits)\n return self._get_encoded_query_token_wight_dicts(raw_weights)[0]\n \n def _output_to_weight_dicts(self, batch_aggregated_logits):\n", "issue": "Error for SPLADE on-the-fly encoding with pytorch \ncommand used:\r\n```bash\r\npython -m pyserini.search.lucene --threads 12 --batch-size 128 \\\r\n --index msmarco-v1-passage-splade-pp-ed \\\r\n --topics msmarco-passage-dev-subset \\\r\n --encoder naver/splade-cocondenser-ensembledistil \\\r\n --output run.msmarco-v1-passage.splade-pp-ed-pytorch.dev.txt \\\r\n --hits 1000 --impact\r\n```\r\n\r\nerror message:\r\n> ...\r\n> File \"/home/arthur/workplace/pyserini/pyserini/encode/_splade.py\", line 28, in encode\r\n> raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)\r\n> NameError: name 'batch_token_ids' is not defined\r\n\n", "before_files": [{"content": "import torch\nfrom transformers import AutoModelForMaskedLM, AutoTokenizer\nimport numpy as np\n\nfrom pyserini.encode import QueryEncoder\n\n\nclass SpladeQueryEncoder(QueryEncoder):\n def __init__(self, model_name_or_path, tokenizer_name=None, device='cpu'):\n self.device = device\n self.model = AutoModelForMaskedLM.from_pretrained(model_name_or_path)\n self.model.to(self.device)\n self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name or model_name_or_path)\n self.reverse_voc = {v: k for k, v in self.tokenizer.vocab.items()}\n self.weight_range = 5\n self.quant_range = 256\n\n def encode(self, text, max_length=256, **kwargs):\n inputs = self.tokenizer([text], max_length=max_length, padding='longest',\n truncation=True, add_special_tokens=True,\n return_tensors='pt').to(self.device)\n input_ids = inputs['input_ids']\n input_attention = inputs['attention_mask']\n batch_logits = self.model(input_ids)['logits']\n batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits))\n * input_attention.unsqueeze(-1), dim=1)\n batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy()\n raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)\n return self._get_encoded_query_token_wight_dicts(raw_weights)[0]\n\n def _output_to_weight_dicts(self, batch_aggregated_logits):\n to_return = []\n for aggregated_logits in batch_aggregated_logits:\n col = np.nonzero(aggregated_logits)[0]\n weights = aggregated_logits[col]\n d = {self.reverse_voc[k]: float(v) for k, v in zip(list(col), list(weights))}\n to_return.append(d)\n return to_return\n\n def _get_encoded_query_token_wight_dicts(self, tok_weights):\n to_return = []\n for _tok_weight in tok_weights:\n _weights = {}\n for token, weight in _tok_weight.items():\n weight_quanted = round(weight / self.weight_range * self.quant_range)\n _weights[token] = weight_quanted\n to_return.append(_weights)\n return to_return\n", "path": "pyserini/encode/_splade.py"}], "after_files": [{"content": "import torch\nfrom transformers import AutoModelForMaskedLM, AutoTokenizer\nimport numpy as np\n\nfrom pyserini.encode import QueryEncoder\n\n\nclass SpladeQueryEncoder(QueryEncoder):\n def 
__init__(self, model_name_or_path, tokenizer_name=None, device='cpu'):\n self.device = device\n self.model = AutoModelForMaskedLM.from_pretrained(model_name_or_path)\n self.model.to(self.device)\n self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name or model_name_or_path)\n self.reverse_voc = {v: k for k, v in self.tokenizer.vocab.items()}\n self.weight_range = 5\n self.quant_range = 256\n\n def encode(self, text, max_length=256, **kwargs):\n inputs = self.tokenizer([text], max_length=max_length, padding='longest',\n truncation=True, add_special_tokens=True,\n return_tensors='pt').to(self.device)\n input_ids = inputs['input_ids']\n input_attention = inputs['attention_mask']\n batch_logits = self.model(input_ids)['logits']\n batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits))\n * input_attention.unsqueeze(-1), dim=1)\n batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy()\n raw_weights = self._output_to_weight_dicts(batch_aggregated_logits)\n return self._get_encoded_query_token_wight_dicts(raw_weights)[0]\n\n def _output_to_weight_dicts(self, batch_aggregated_logits):\n to_return = []\n for aggregated_logits in batch_aggregated_logits:\n col = np.nonzero(aggregated_logits)[0]\n weights = aggregated_logits[col]\n d = {self.reverse_voc[k]: float(v) for k, v in zip(list(col), list(weights))}\n to_return.append(d)\n return to_return\n\n def _get_encoded_query_token_wight_dicts(self, tok_weights):\n to_return = []\n for _tok_weight in tok_weights:\n _weights = {}\n for token, weight in _tok_weight.items():\n weight_quanted = round(weight / self.weight_range * self.quant_range)\n _weights[token] = weight_quanted\n to_return.append(_weights)\n return to_return\n", "path": "pyserini/encode/_splade.py"}]}
1,030
173
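The bug here was purely a stale variable name — `batch_token_ids`/`batch_weights` left over from an older signature — and the fix just passes `batch_aggregated_logits` through. The weight-extraction step itself is easy to exercise without loading a SPLADE model; a sketch with numpy and a toy vocabulary (the real `reverse_voc` comes from the HuggingFace tokenizer):

```python
import numpy as np

def output_to_weight_dicts(batch_aggregated_logits, reverse_voc):
    """Keep the non-zero dims of each sparse vector, mapped back to tokens."""
    results = []
    for logits in batch_aggregated_logits:
        cols = np.nonzero(logits)[0]
        results.append({reverse_voc[c]: float(logits[c]) for c in cols})
    return results

toy_voc = {0: 'what', 1: 'is', 2: 'splade', 3: '##pp'}  # assumed toy vocab
agg = np.array([[0.0, 0.0, 2.25, 0.75]])                # stand-in for model output
print(output_to_weight_dicts(agg, toy_voc))  # [{'splade': 2.25, '##pp': 0.75}]
```

The encoder then quantizes these weights — `round(w / weight_range * quant_range)` with the 5 and 256 defaults shown in the record — before handing them to the impact searcher.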
gh_patches_debug_9661
rasdani/github-patches
git_diff
psychopy__psychopy-2339
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Setting "Custom code" in StaticComponent doesn't seem to have any effect The generated script doesn't contain any traces of the `Custom code` entered in the `StaticComponent`'s properties dialog. `psychopy:master` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `psychopy/experiment/components/static/__init__.py` Content: ``` 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 4 """ 5 Part of the PsychoPy library 6 Copyright (C) 2018 Jonathan Peirce 7 Distributed under the terms of the GNU General Public License (GPL). 8 """ 9 10 from __future__ import absolute_import, print_function 11 12 from builtins import str 13 from os import path 14 from psychopy.experiment.components import BaseComponent, Param, _translate 15 16 __author__ = 'Jon Peirce' 17 18 # the absolute path to the folder containing this path 19 thisFolder = path.abspath(path.dirname(__file__)) 20 iconFile = path.join(thisFolder, 'static.png') 21 tooltip = _translate('Static: Static screen period (e.g. an ISI). ' 22 'Useful for pre-loading stimuli.') 23 _localized = {'Custom code': _translate('Custom code')} 24 25 26 class StaticComponent(BaseComponent): 27 """A Static Component, allowing frame rendering to pause. 28 29 E.g., pause while disk is accessed for loading an image 30 """ 31 # override the categories property below 32 # an attribute of the class, determines the section in the components panel 33 categories = ['Custom'] 34 35 def __init__(self, exp, parentName, name='ISI', 36 startType='time (s)', startVal=0.0, 37 stopType='duration (s)', stopVal=0.5, 38 startEstim='', durationEstim=''): 39 BaseComponent.__init__(self, exp, parentName, name=name) 40 self.updatesList = [] # a list of dicts {compParams, fieldName} 41 self.type = 'Static' 42 self.url = "http://www.psychopy.org/builder/components/static.html" 43 hnt = _translate( 44 "Custom code to be run during the static period (after updates)") 45 self.params['code'] = Param("", valType='code', 46 hint=hnt, 47 label=_localized['Custom code']) 48 self.order = ['name'] # make name come first (others don't matter) 49 50 hnt = _translate("How do you want to define your start point?") 51 self.params['startType'] = Param(startType, valType='str', 52 allowedVals=['time (s)', 'frame N'], 53 hint=hnt) 54 hnt = _translate("How do you want to define your end point?") 55 _allow = ['duration (s)', 'duration (frames)', 'time (s)', 'frame N'] 56 self.params['stopType'] = Param(stopType, valType='str', 57 allowedVals=_allow, # copy not needed 58 hint=hnt) 59 hnt = _translate("When does the component start?") 60 self.params['startVal'] = Param(startVal, valType='code', 61 allowedTypes=[], 62 hint=hnt) 63 hnt = _translate("When does the component end? 
(blank is endless)") 64 self.params['stopVal'] = Param(stopVal, valType='code', 65 allowedTypes=[], 66 updates='constant', allowedUpdates=[], 67 hint=hnt) 68 hnt = _translate("(Optional) expected start (s), purely for " 69 "representing in the timeline") 70 self.params['startEstim'] = Param(startEstim, valType='code', 71 allowedTypes=[], 72 hint=hnt) 73 hnt = _translate("(Optional) expected duration (s), purely for " 74 "representing in the timeline") 75 self.params['durationEstim'] = Param(durationEstim, valType='code', 76 allowedTypes=[], 77 hint=hnt) 78 79 def addComponentUpdate(self, routine, compName, fieldName): 80 self.updatesList.append({'compName': compName, 81 'fieldName': fieldName, 82 'routine': routine}) 83 84 def remComponentUpdate(self, routine, compName, fieldName): 85 # have to do this in a loop rather than a simple remove 86 target = {'compName': compName, 'fieldName': fieldName, 87 'routine': routine} 88 for item in self.updatesList: 89 if item == target: 90 self.updatesList.remove(item) 91 92 def writeInitCode(self, buff): 93 code = ("%(name)s = clock.StaticPeriod(win=win, " 94 "screenHz=expInfo['frameRate'], name='%(name)s')\n") 95 buff.writeIndented(code % self.params) 96 97 def writeFrameCode(self, buff): 98 self.writeStartTestCode(buff) 99 # to get out of the if statement 100 buff.setIndentLevel(-1, relative=True) 101 self.writeStopTestCode(buff) 102 103 def writeStartTestCode(self, buff): 104 """This will be executed as the final component in the routine 105 """ 106 buff.writeIndented("# *%s* period\n" % (self.params['name'])) 107 BaseComponent.writeStartTestCode(self, buff) 108 109 if self.params['stopType'].val == 'time (s)': 110 durationSecsStr = "%(stopVal)s-t" % (self.params) 111 elif self.params['stopType'].val == 'duration (s)': 112 durationSecsStr = "%(stopVal)s" % (self.params) 113 elif self.params['stopType'].val == 'duration (frames)': 114 durationSecsStr = "%(stopVal)s*frameDur" % (self.params) 115 elif self.params['stopType'].val == 'frame N': 116 durationSecsStr = "(%(stopVal)s-frameN)*frameDur" % (self.params) 117 else: 118 msg = ("Couldn't deduce end point for startType=%(startType)s, " 119 "stopType=%(stopType)s") 120 raise Exception(msg % self.params) 121 vals = (self.params['name'], durationSecsStr) 122 buff.writeIndented("%s.start(%s)\n" % vals) 123 124 def writeStopTestCode(self, buff): 125 """Test whether we need to stop 126 """ 127 code = ("elif %(name)s.status == STARTED: # one frame should " 128 "pass before updating params and completing\n") 129 buff.writeIndented(code % self.params) 130 buff.setIndentLevel(+1, relative=True) # entered an if statement 131 self.writeParamUpdates(buff) 132 code = "%(name)s.complete() # finish the static period\n" 133 buff.writeIndented(code % self.params) 134 # to get out of the if statement 135 buff.setIndentLevel(-1, relative=True) 136 137 # pass # the clock.StaticPeriod class handles its own stopping 138 139 def writeParamUpdates(self, buff, updateType=None, paramNames=None): 140 """Write updates. 
Unlike most components, which us this method 141 to update themselves, the Static Component uses this to update 142 *other* components 143 """ 144 if updateType == 'set every repeat': 145 return # the static component doesn't need to change itself 146 if len(self.updatesList): 147 code = "# updating other components during *%s*\n" 148 buff.writeIndented(code % self.params['name']) 149 for update in self.updatesList: 150 # update = {'compName':compName,'fieldName':fieldName, 151 # 'routine':routine} 152 compName = update['compName'] 153 fieldName = update['fieldName'] 154 routine = self.exp.routines[update['routine']] 155 if hasattr(compName, 'params'): 156 prms = compName.params # it's already a compon so get params 157 else: 158 # it's a name so get compon and then get params 159 prms = self.exp.getComponentFromName(str(compName)).params 160 self.writeParamUpdate(buff, compName=compName, 161 paramName=fieldName, 162 val=prms[fieldName], 163 updateType=prms[fieldName].updates, 164 params=prms) 165 code = "# component updates done\n" 166 buff.writeIndented(code) 167 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/psychopy/experiment/components/static/__init__.py b/psychopy/experiment/components/static/__init__.py --- a/psychopy/experiment/components/static/__init__.py +++ b/psychopy/experiment/components/static/__init__.py @@ -163,4 +163,11 @@ updateType=prms[fieldName].updates, params=prms) code = "# component updates done\n" - buff.writeIndented(code) + + # Write custom code + if self.params['code']: + code += ("# Adding custom code for {name}\n" + "{code}\n".format(name=self.params['name'], + code=self.params['code'])) + + buff.writeIndentedLines(code)
{"golden_diff": "diff --git a/psychopy/experiment/components/static/__init__.py b/psychopy/experiment/components/static/__init__.py\n--- a/psychopy/experiment/components/static/__init__.py\n+++ b/psychopy/experiment/components/static/__init__.py\n@@ -163,4 +163,11 @@\n updateType=prms[fieldName].updates,\n params=prms)\n code = \"# component updates done\\n\"\n- buff.writeIndented(code)\n+\n+ # Write custom code\n+ if self.params['code']:\n+ code += (\"# Adding custom code for {name}\\n\"\n+ \"{code}\\n\".format(name=self.params['name'],\n+ code=self.params['code']))\n+\n+ buff.writeIndentedLines(code)\n", "issue": "Setting \"Custom code\" in StaticComponent doesn't seem to have any effect\nThe generated script doesn't contain any traces of the `Custom code` entered in the `StaticComponent`'s properties dialog.\r\n\r\n`psychopy:master`\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nPart of the PsychoPy library\nCopyright (C) 2018 Jonathan Peirce\nDistributed under the terms of the GNU General Public License (GPL).\n\"\"\"\n\nfrom __future__ import absolute_import, print_function\n\nfrom builtins import str\nfrom os import path\nfrom psychopy.experiment.components import BaseComponent, Param, _translate\n\n__author__ = 'Jon Peirce'\n\n# the absolute path to the folder containing this path\nthisFolder = path.abspath(path.dirname(__file__))\niconFile = path.join(thisFolder, 'static.png')\ntooltip = _translate('Static: Static screen period (e.g. an ISI). '\n 'Useful for pre-loading stimuli.')\n_localized = {'Custom code': _translate('Custom code')}\n\n\nclass StaticComponent(BaseComponent):\n \"\"\"A Static Component, allowing frame rendering to pause.\n\n E.g., pause while disk is accessed for loading an image\n \"\"\"\n # override the categories property below\n # an attribute of the class, determines the section in the components panel\n categories = ['Custom']\n\n def __init__(self, exp, parentName, name='ISI',\n startType='time (s)', startVal=0.0,\n stopType='duration (s)', stopVal=0.5,\n startEstim='', durationEstim=''):\n BaseComponent.__init__(self, exp, parentName, name=name)\n self.updatesList = [] # a list of dicts {compParams, fieldName}\n self.type = 'Static'\n self.url = \"http://www.psychopy.org/builder/components/static.html\"\n hnt = _translate(\n \"Custom code to be run during the static period (after updates)\")\n self.params['code'] = Param(\"\", valType='code',\n hint=hnt,\n label=_localized['Custom code'])\n self.order = ['name'] # make name come first (others don't matter)\n\n hnt = _translate(\"How do you want to define your start point?\")\n self.params['startType'] = Param(startType, valType='str',\n allowedVals=['time (s)', 'frame N'],\n hint=hnt)\n hnt = _translate(\"How do you want to define your end point?\")\n _allow = ['duration (s)', 'duration (frames)', 'time (s)', 'frame N']\n self.params['stopType'] = Param(stopType, valType='str',\n allowedVals=_allow, # copy not needed\n hint=hnt)\n hnt = _translate(\"When does the component start?\")\n self.params['startVal'] = Param(startVal, valType='code',\n allowedTypes=[],\n hint=hnt)\n hnt = _translate(\"When does the component end? 
(blank is endless)\")\n self.params['stopVal'] = Param(stopVal, valType='code',\n allowedTypes=[],\n updates='constant', allowedUpdates=[],\n hint=hnt)\n hnt = _translate(\"(Optional) expected start (s), purely for \"\n \"representing in the timeline\")\n self.params['startEstim'] = Param(startEstim, valType='code',\n allowedTypes=[],\n hint=hnt)\n hnt = _translate(\"(Optional) expected duration (s), purely for \"\n \"representing in the timeline\")\n self.params['durationEstim'] = Param(durationEstim, valType='code',\n allowedTypes=[],\n hint=hnt)\n\n def addComponentUpdate(self, routine, compName, fieldName):\n self.updatesList.append({'compName': compName,\n 'fieldName': fieldName,\n 'routine': routine})\n\n def remComponentUpdate(self, routine, compName, fieldName):\n # have to do this in a loop rather than a simple remove\n target = {'compName': compName, 'fieldName': fieldName,\n 'routine': routine}\n for item in self.updatesList:\n if item == target:\n self.updatesList.remove(item)\n\n def writeInitCode(self, buff):\n code = (\"%(name)s = clock.StaticPeriod(win=win, \"\n \"screenHz=expInfo['frameRate'], name='%(name)s')\\n\")\n buff.writeIndented(code % self.params)\n\n def writeFrameCode(self, buff):\n self.writeStartTestCode(buff)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n self.writeStopTestCode(buff)\n\n def writeStartTestCode(self, buff):\n \"\"\"This will be executed as the final component in the routine\n \"\"\"\n buff.writeIndented(\"# *%s* period\\n\" % (self.params['name']))\n BaseComponent.writeStartTestCode(self, buff)\n\n if self.params['stopType'].val == 'time (s)':\n durationSecsStr = \"%(stopVal)s-t\" % (self.params)\n elif self.params['stopType'].val == 'duration (s)':\n durationSecsStr = \"%(stopVal)s\" % (self.params)\n elif self.params['stopType'].val == 'duration (frames)':\n durationSecsStr = \"%(stopVal)s*frameDur\" % (self.params)\n elif self.params['stopType'].val == 'frame N':\n durationSecsStr = \"(%(stopVal)s-frameN)*frameDur\" % (self.params)\n else:\n msg = (\"Couldn't deduce end point for startType=%(startType)s, \"\n \"stopType=%(stopType)s\")\n raise Exception(msg % self.params)\n vals = (self.params['name'], durationSecsStr)\n buff.writeIndented(\"%s.start(%s)\\n\" % vals)\n\n def writeStopTestCode(self, buff):\n \"\"\"Test whether we need to stop\n \"\"\"\n code = (\"elif %(name)s.status == STARTED: # one frame should \"\n \"pass before updating params and completing\\n\")\n buff.writeIndented(code % self.params)\n buff.setIndentLevel(+1, relative=True) # entered an if statement\n self.writeParamUpdates(buff)\n code = \"%(name)s.complete() # finish the static period\\n\"\n buff.writeIndented(code % self.params)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n\n # pass # the clock.StaticPeriod class handles its own stopping\n\n def writeParamUpdates(self, buff, updateType=None, paramNames=None):\n \"\"\"Write updates. 
Unlike most components, which us this method\n to update themselves, the Static Component uses this to update\n *other* components\n \"\"\"\n if updateType == 'set every repeat':\n return # the static component doesn't need to change itself\n if len(self.updatesList):\n code = \"# updating other components during *%s*\\n\"\n buff.writeIndented(code % self.params['name'])\n for update in self.updatesList:\n # update = {'compName':compName,'fieldName':fieldName,\n # 'routine':routine}\n compName = update['compName']\n fieldName = update['fieldName']\n routine = self.exp.routines[update['routine']]\n if hasattr(compName, 'params'):\n prms = compName.params # it's already a compon so get params\n else:\n # it's a name so get compon and then get params\n prms = self.exp.getComponentFromName(str(compName)).params\n self.writeParamUpdate(buff, compName=compName,\n paramName=fieldName,\n val=prms[fieldName],\n updateType=prms[fieldName].updates,\n params=prms)\n code = \"# component updates done\\n\"\n buff.writeIndented(code)\n", "path": "psychopy/experiment/components/static/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nPart of the PsychoPy library\nCopyright (C) 2018 Jonathan Peirce\nDistributed under the terms of the GNU General Public License (GPL).\n\"\"\"\n\nfrom __future__ import absolute_import, print_function\n\nfrom builtins import str\nfrom os import path\nfrom psychopy.experiment.components import BaseComponent, Param, _translate\n\n__author__ = 'Jon Peirce'\n\n# the absolute path to the folder containing this path\nthisFolder = path.abspath(path.dirname(__file__))\niconFile = path.join(thisFolder, 'static.png')\ntooltip = _translate('Static: Static screen period (e.g. an ISI). '\n 'Useful for pre-loading stimuli.')\n_localized = {'Custom code': _translate('Custom code')}\n\n\nclass StaticComponent(BaseComponent):\n \"\"\"A Static Component, allowing frame rendering to pause.\n\n E.g., pause while disk is accessed for loading an image\n \"\"\"\n # override the categories property below\n # an attribute of the class, determines the section in the components panel\n categories = ['Custom']\n\n def __init__(self, exp, parentName, name='ISI',\n startType='time (s)', startVal=0.0,\n stopType='duration (s)', stopVal=0.5,\n startEstim='', durationEstim=''):\n BaseComponent.__init__(self, exp, parentName, name=name)\n self.updatesList = [] # a list of dicts {compParams, fieldName}\n self.type = 'Static'\n self.url = \"http://www.psychopy.org/builder/components/static.html\"\n hnt = _translate(\n \"Custom code to be run during the static period (after updates)\")\n self.params['code'] = Param(\"\", valType='code',\n hint=hnt,\n label=_localized['Custom code'])\n self.order = ['name'] # make name come first (others don't matter)\n\n hnt = _translate(\"How do you want to define your start point?\")\n self.params['startType'] = Param(startType, valType='str',\n allowedVals=['time (s)', 'frame N'],\n hint=hnt)\n hnt = _translate(\"How do you want to define your end point?\")\n _allow = ['duration (s)', 'duration (frames)', 'time (s)', 'frame N']\n self.params['stopType'] = Param(stopType, valType='str',\n allowedVals=_allow, # copy not needed\n hint=hnt)\n hnt = _translate(\"When does the component start?\")\n self.params['startVal'] = Param(startVal, valType='code',\n allowedTypes=[],\n hint=hnt)\n hnt = _translate(\"When does the component end? 
(blank is endless)\")\n self.params['stopVal'] = Param(stopVal, valType='code',\n allowedTypes=[],\n updates='constant', allowedUpdates=[],\n hint=hnt)\n hnt = _translate(\"(Optional) expected start (s), purely for \"\n \"representing in the timeline\")\n self.params['startEstim'] = Param(startEstim, valType='code',\n allowedTypes=[],\n hint=hnt)\n hnt = _translate(\"(Optional) expected duration (s), purely for \"\n \"representing in the timeline\")\n self.params['durationEstim'] = Param(durationEstim, valType='code',\n allowedTypes=[],\n hint=hnt)\n\n def addComponentUpdate(self, routine, compName, fieldName):\n self.updatesList.append({'compName': compName,\n 'fieldName': fieldName,\n 'routine': routine})\n\n def remComponentUpdate(self, routine, compName, fieldName):\n # have to do this in a loop rather than a simple remove\n target = {'compName': compName, 'fieldName': fieldName,\n 'routine': routine}\n for item in self.updatesList:\n if item == target:\n self.updatesList.remove(item)\n\n def writeInitCode(self, buff):\n code = (\"%(name)s = clock.StaticPeriod(win=win, \"\n \"screenHz=expInfo['frameRate'], name='%(name)s')\\n\")\n buff.writeIndented(code % self.params)\n\n def writeFrameCode(self, buff):\n self.writeStartTestCode(buff)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n self.writeStopTestCode(buff)\n\n def writeStartTestCode(self, buff):\n \"\"\"This will be executed as the final component in the routine\n \"\"\"\n buff.writeIndented(\"# *%s* period\\n\" % (self.params['name']))\n BaseComponent.writeStartTestCode(self, buff)\n\n if self.params['stopType'].val == 'time (s)':\n durationSecsStr = \"%(stopVal)s-t\" % (self.params)\n elif self.params['stopType'].val == 'duration (s)':\n durationSecsStr = \"%(stopVal)s\" % (self.params)\n elif self.params['stopType'].val == 'duration (frames)':\n durationSecsStr = \"%(stopVal)s*frameDur\" % (self.params)\n elif self.params['stopType'].val == 'frame N':\n durationSecsStr = \"(%(stopVal)s-frameN)*frameDur\" % (self.params)\n else:\n msg = (\"Couldn't deduce end point for startType=%(startType)s, \"\n \"stopType=%(stopType)s\")\n raise Exception(msg % self.params)\n vals = (self.params['name'], durationSecsStr)\n buff.writeIndented(\"%s.start(%s)\\n\" % vals)\n\n def writeStopTestCode(self, buff):\n \"\"\"Test whether we need to stop\n \"\"\"\n code = (\"elif %(name)s.status == STARTED: # one frame should \"\n \"pass before updating params and completing\\n\")\n buff.writeIndented(code % self.params)\n buff.setIndentLevel(+1, relative=True) # entered an if statement\n self.writeParamUpdates(buff)\n code = \"%(name)s.complete() # finish the static period\\n\"\n buff.writeIndented(code % self.params)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n\n # pass # the clock.StaticPeriod class handles its own stopping\n\n def writeParamUpdates(self, buff, updateType=None, paramNames=None):\n \"\"\"Write updates. 
Unlike most components, which us this method\n to update themselves, the Static Component uses this to update\n *other* components\n \"\"\"\n if updateType == 'set every repeat':\n return # the static component doesn't need to change itself\n if len(self.updatesList):\n code = \"# updating other components during *%s*\\n\"\n buff.writeIndented(code % self.params['name'])\n for update in self.updatesList:\n # update = {'compName':compName,'fieldName':fieldName,\n # 'routine':routine}\n compName = update['compName']\n fieldName = update['fieldName']\n routine = self.exp.routines[update['routine']]\n if hasattr(compName, 'params'):\n prms = compName.params # it's already a compon so get params\n else:\n # it's a name so get compon and then get params\n prms = self.exp.getComponentFromName(str(compName)).params\n self.writeParamUpdate(buff, compName=compName,\n paramName=fieldName,\n val=prms[fieldName],\n updateType=prms[fieldName].updates,\n params=prms)\n code = \"# component updates done\\n\"\n\n # Write custom code\n if self.params['code']:\n code += (\"# Adding custom code for {name}\\n\"\n \"{code}\\n\".format(name=self.params['name'],\n code=self.params['code']))\n\n buff.writeIndentedLines(code)\n", "path": "psychopy/experiment/components/static/__init__.py"}]}
2,381
166
gh_patches_debug_9108
rasdani/github-patches
git_diff
Kinto__kinto-726
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- The /permissions endpoint is broken To reproduce just access https://kinto-ota.dev.mozaws.net/v1/permissions ``` File "/home/ubuntu/venvs/kinto/local/lib/python2.7/site-packages/kinto/core/permission/re dis.py", line 103, in get_accessible_objects _, object_id, permission = key.decode('utf-8').split(':') ValueError: too many values to unpack ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `kinto/core/permission/redis.py` Content: ``` 1 from __future__ import absolute_import 2 3 from collections import defaultdict 4 5 from kinto.core.permission import PermissionBase 6 from kinto.core.storage.redis import create_from_config, wrap_redis_error 7 8 9 class Permission(PermissionBase): 10 """Permission backend implementation using Redis. 11 12 Enable in configuration:: 13 14 kinto.permission_backend = kinto.core.permission.redis 15 16 *(Optional)* Instance location URI can be customized:: 17 18 kinto.permission_url = redis://localhost:6379/2 19 20 A threaded connection pool is enabled by default:: 21 22 kinto.permission_pool_size = 50 23 24 :noindex: 25 """ 26 27 def __init__(self, client, *args, **kwargs): 28 super(Permission, self).__init__(*args, **kwargs) 29 self._client = client 30 31 @property 32 def settings(self): 33 return dict(self._client.connection_pool.connection_kwargs) 34 35 def initialize_schema(self): 36 # Nothing to do. 37 pass 38 39 def _decode_set(self, results): 40 return set([r.decode('utf-8') for r in results]) 41 42 @wrap_redis_error 43 def flush(self): 44 self._client.flushdb() 45 46 @wrap_redis_error 47 def add_user_principal(self, user_id, principal): 48 user_key = 'user:%s' % user_id 49 self._client.sadd(user_key, principal) 50 51 @wrap_redis_error 52 def remove_user_principal(self, user_id, principal): 53 user_key = 'user:%s' % user_id 54 self._client.srem(user_key, principal) 55 if self._client.scard(user_key) == 0: 56 self._client.delete(user_key) 57 58 def remove_principal(self, principal): 59 with self._client.pipeline() as pipe: 60 user_keys = self._client.scan_iter(match='user:*') 61 for user_key in user_keys: 62 pipe.srem(user_key, principal) 63 pipe.execute() 64 65 @wrap_redis_error 66 def get_user_principals(self, user_id): 67 user_key = 'user:%s' % user_id 68 return self._decode_set(self._client.smembers(user_key)) 69 70 @wrap_redis_error 71 def add_principal_to_ace(self, object_id, permission, principal): 72 permission_key = 'permission:%s:%s' % (object_id, permission) 73 self._client.sadd(permission_key, principal) 74 75 @wrap_redis_error 76 def remove_principal_from_ace(self, object_id, permission, principal): 77 permission_key = 'permission:%s:%s' % (object_id, permission) 78 self._client.srem(permission_key, principal) 79 if self._client.scard(permission_key) == 0: 80 self._client.delete(permission_key) 81 82 @wrap_redis_error 83 def get_object_permission_principals(self, object_id, permission): 84 permission_key = 'permission:%s:%s' % (object_id, permission) 85 members = self._client.smembers(permission_key) 86 return self._decode_set(members) 87 88 @wrap_redis_error 89 def get_accessible_objects(self, principals, bound_permissions=None): 90 principals = set(principals) 91 92 if bound_permissions: 93 keys = ['permission:%s:%s' % op for op in bound_permissions] 94 else: 95 keys = ['permission:*'] 96 97 perms_by_id = dict() 98 for key_pattern in keys: 99 matched = self._client.scan_iter(match=key_pattern) 100 for key in matched: 101 authorized = self._decode_set(self._client.smembers(key)) 102 if len(authorized & principals) > 0: 103 _, object_id, permission = key.decode('utf-8').split(':') 104 perms_by_id.setdefault(object_id, set()).add(permission) 105 106 return perms_by_id 107 108 @wrap_redis_error 109 def get_authorized_principals(self, bound_permissions): 110 keys = ['permission:%s:%s' % (o, p) for (o, p) in bound_permissions] 111 if keys: 112 return self._decode_set(self._client.sunion(*list(keys))) 113 return set() 114 115 @wrap_redis_error 116 def get_objects_permissions(self, objects_ids, permissions=None): 117 objects_perms = [] 118 for object_id in objects_ids: 119 if permissions is not None: 120 keys = ['permission:%s:%s' % (object_id, permission) 121 for permission in permissions] 122 else: 123 keys = [key.decode('utf-8') for key in self._client.scan_iter( 124 match='permission:%s:*' % object_id)] 125 126 with self._client.pipeline() as pipe: 127 for permission_key in keys: 128 pipe.smembers(permission_key) 129 130 results = pipe.execute() 131 132 permissions = defaultdict(set) 133 for i, result in enumerate(results): 134 permission = keys[i].split(':', 2)[-1] 135 permissions[permission] = self._decode_set(result) 136 objects_perms.append(permissions) 137 return objects_perms 138 139 @wrap_redis_error 140 def replace_object_permissions(self, object_id, permissions): 141 keys = ['permission:%s:%s' % (object_id, permission) 142 for permission in permissions] 143 with self._client.pipeline() as pipe: 144 for key in keys: 145 pipe.delete(key) 146 permission = key.split(':', 2)[-1] 147 principals = permissions[permission] 148 if len(principals) > 0: 149 pipe.sadd(key, *principals) 150 pipe.execute() 151 152 @wrap_redis_error 153 def delete_object_permissions(self, *object_id_list): 154 with self._client.pipeline() as pipe: 155 for object_id in object_id_list: 156 keys = list(self._client.scan_iter( 157 match='permission:%s:*' % object_id)) 158 if len(keys) > 0: 159 pipe.delete(*keys) 160 pipe.execute() 161 162 163 def load_from_config(config): 164 client = create_from_config(config, prefix='permission_') 165 return Permission(client) 166 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/kinto/core/permission/redis.py b/kinto/core/permission/redis.py --- a/kinto/core/permission/redis.py +++ b/kinto/core/permission/redis.py @@ -100,8 +100,8 @@ for key in matched: authorized = self._decode_set(self._client.smembers(key)) if len(authorized & principals) > 0: - _, object_id, permission = key.decode('utf-8').split(':') - perms_by_id.setdefault(object_id, set()).add(permission) + _, obj_id, permission = key.decode('utf-8').split(':', 2) + perms_by_id.setdefault(obj_id, set()).add(permission) return perms_by_id
{"golden_diff": "diff --git a/kinto/core/permission/redis.py b/kinto/core/permission/redis.py\n--- a/kinto/core/permission/redis.py\n+++ b/kinto/core/permission/redis.py\n@@ -100,8 +100,8 @@\n for key in matched:\n authorized = self._decode_set(self._client.smembers(key))\n if len(authorized & principals) > 0:\n- _, object_id, permission = key.decode('utf-8').split(':')\n- perms_by_id.setdefault(object_id, set()).add(permission)\n+ _, obj_id, permission = key.decode('utf-8').split(':', 2)\n+ perms_by_id.setdefault(obj_id, set()).add(permission)\n \n return perms_by_id\n", "issue": "The /permissions endpoint is broken\nTo reproduce just access https://kinto-ota.dev.mozaws.net/v1/permissions\n\n```\n File \"/home/ubuntu/venvs/kinto/local/lib/python2.7/site-packages/kinto/core/permission/re\ndis.py\", line 103, in get_accessible_objects\n _, object_id, permission = key.decode('utf-8').split(':')\nValueError: too many values to unpack\n```\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom collections import defaultdict\n\nfrom kinto.core.permission import PermissionBase\nfrom kinto.core.storage.redis import create_from_config, wrap_redis_error\n\n\nclass Permission(PermissionBase):\n \"\"\"Permission backend implementation using Redis.\n\n Enable in configuration::\n\n kinto.permission_backend = kinto.core.permission.redis\n\n *(Optional)* Instance location URI can be customized::\n\n kinto.permission_url = redis://localhost:6379/2\n\n A threaded connection pool is enabled by default::\n\n kinto.permission_pool_size = 50\n\n :noindex:\n \"\"\"\n\n def __init__(self, client, *args, **kwargs):\n super(Permission, self).__init__(*args, **kwargs)\n self._client = client\n\n @property\n def settings(self):\n return dict(self._client.connection_pool.connection_kwargs)\n\n def initialize_schema(self):\n # Nothing to do.\n pass\n\n def _decode_set(self, results):\n return set([r.decode('utf-8') for r in results])\n\n @wrap_redis_error\n def flush(self):\n self._client.flushdb()\n\n @wrap_redis_error\n def add_user_principal(self, user_id, principal):\n user_key = 'user:%s' % user_id\n self._client.sadd(user_key, principal)\n\n @wrap_redis_error\n def remove_user_principal(self, user_id, principal):\n user_key = 'user:%s' % user_id\n self._client.srem(user_key, principal)\n if self._client.scard(user_key) == 0:\n self._client.delete(user_key)\n\n def remove_principal(self, principal):\n with self._client.pipeline() as pipe:\n user_keys = self._client.scan_iter(match='user:*')\n for user_key in user_keys:\n pipe.srem(user_key, principal)\n pipe.execute()\n\n @wrap_redis_error\n def get_user_principals(self, user_id):\n user_key = 'user:%s' % user_id\n return self._decode_set(self._client.smembers(user_key))\n\n @wrap_redis_error\n def add_principal_to_ace(self, object_id, permission, principal):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n self._client.sadd(permission_key, principal)\n\n @wrap_redis_error\n def remove_principal_from_ace(self, object_id, permission, principal):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n self._client.srem(permission_key, principal)\n if self._client.scard(permission_key) == 0:\n self._client.delete(permission_key)\n\n @wrap_redis_error\n def get_object_permission_principals(self, object_id, permission):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n members = self._client.smembers(permission_key)\n return self._decode_set(members)\n\n @wrap_redis_error\n def 
get_accessible_objects(self, principals, bound_permissions=None):\n principals = set(principals)\n\n if bound_permissions:\n keys = ['permission:%s:%s' % op for op in bound_permissions]\n else:\n keys = ['permission:*']\n\n perms_by_id = dict()\n for key_pattern in keys:\n matched = self._client.scan_iter(match=key_pattern)\n for key in matched:\n authorized = self._decode_set(self._client.smembers(key))\n if len(authorized & principals) > 0:\n _, object_id, permission = key.decode('utf-8').split(':')\n perms_by_id.setdefault(object_id, set()).add(permission)\n\n return perms_by_id\n\n @wrap_redis_error\n def get_authorized_principals(self, bound_permissions):\n keys = ['permission:%s:%s' % (o, p) for (o, p) in bound_permissions]\n if keys:\n return self._decode_set(self._client.sunion(*list(keys)))\n return set()\n\n @wrap_redis_error\n def get_objects_permissions(self, objects_ids, permissions=None):\n objects_perms = []\n for object_id in objects_ids:\n if permissions is not None:\n keys = ['permission:%s:%s' % (object_id, permission)\n for permission in permissions]\n else:\n keys = [key.decode('utf-8') for key in self._client.scan_iter(\n match='permission:%s:*' % object_id)]\n\n with self._client.pipeline() as pipe:\n for permission_key in keys:\n pipe.smembers(permission_key)\n\n results = pipe.execute()\n\n permissions = defaultdict(set)\n for i, result in enumerate(results):\n permission = keys[i].split(':', 2)[-1]\n permissions[permission] = self._decode_set(result)\n objects_perms.append(permissions)\n return objects_perms\n\n @wrap_redis_error\n def replace_object_permissions(self, object_id, permissions):\n keys = ['permission:%s:%s' % (object_id, permission)\n for permission in permissions]\n with self._client.pipeline() as pipe:\n for key in keys:\n pipe.delete(key)\n permission = key.split(':', 2)[-1]\n principals = permissions[permission]\n if len(principals) > 0:\n pipe.sadd(key, *principals)\n pipe.execute()\n\n @wrap_redis_error\n def delete_object_permissions(self, *object_id_list):\n with self._client.pipeline() as pipe:\n for object_id in object_id_list:\n keys = list(self._client.scan_iter(\n match='permission:%s:*' % object_id))\n if len(keys) > 0:\n pipe.delete(*keys)\n pipe.execute()\n\n\ndef load_from_config(config):\n client = create_from_config(config, prefix='permission_')\n return Permission(client)\n", "path": "kinto/core/permission/redis.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom collections import defaultdict\n\nfrom kinto.core.permission import PermissionBase\nfrom kinto.core.storage.redis import create_from_config, wrap_redis_error\n\n\nclass Permission(PermissionBase):\n \"\"\"Permission backend implementation using Redis.\n\n Enable in configuration::\n\n kinto.permission_backend = kinto.core.permission.redis\n\n *(Optional)* Instance location URI can be customized::\n\n kinto.permission_url = redis://localhost:6379/2\n\n A threaded connection pool is enabled by default::\n\n kinto.permission_pool_size = 50\n\n :noindex:\n \"\"\"\n\n def __init__(self, client, *args, **kwargs):\n super(Permission, self).__init__(*args, **kwargs)\n self._client = client\n\n @property\n def settings(self):\n return dict(self._client.connection_pool.connection_kwargs)\n\n def initialize_schema(self):\n # Nothing to do.\n pass\n\n def _decode_set(self, results):\n return set([r.decode('utf-8') for r in results])\n\n @wrap_redis_error\n def flush(self):\n self._client.flushdb()\n\n @wrap_redis_error\n def add_user_principal(self, 
user_id, principal):\n user_key = 'user:%s' % user_id\n self._client.sadd(user_key, principal)\n\n @wrap_redis_error\n def remove_user_principal(self, user_id, principal):\n user_key = 'user:%s' % user_id\n self._client.srem(user_key, principal)\n if self._client.scard(user_key) == 0:\n self._client.delete(user_key)\n\n def remove_principal(self, principal):\n with self._client.pipeline() as pipe:\n user_keys = self._client.scan_iter(match='user:*')\n for user_key in user_keys:\n pipe.srem(user_key, principal)\n pipe.execute()\n\n @wrap_redis_error\n def get_user_principals(self, user_id):\n user_key = 'user:%s' % user_id\n return self._decode_set(self._client.smembers(user_key))\n\n @wrap_redis_error\n def add_principal_to_ace(self, object_id, permission, principal):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n self._client.sadd(permission_key, principal)\n\n @wrap_redis_error\n def remove_principal_from_ace(self, object_id, permission, principal):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n self._client.srem(permission_key, principal)\n if self._client.scard(permission_key) == 0:\n self._client.delete(permission_key)\n\n @wrap_redis_error\n def get_object_permission_principals(self, object_id, permission):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n members = self._client.smembers(permission_key)\n return self._decode_set(members)\n\n @wrap_redis_error\n def get_accessible_objects(self, principals, bound_permissions=None):\n principals = set(principals)\n\n if bound_permissions:\n keys = ['permission:%s:%s' % op for op in bound_permissions]\n else:\n keys = ['permission:*']\n\n perms_by_id = dict()\n for key_pattern in keys:\n matched = self._client.scan_iter(match=key_pattern)\n for key in matched:\n authorized = self._decode_set(self._client.smembers(key))\n if len(authorized & principals) > 0:\n _, obj_id, permission = key.decode('utf-8').split(':', 2)\n perms_by_id.setdefault(obj_id, set()).add(permission)\n\n return perms_by_id\n\n @wrap_redis_error\n def get_authorized_principals(self, bound_permissions):\n keys = ['permission:%s:%s' % (o, p) for (o, p) in bound_permissions]\n if keys:\n return self._decode_set(self._client.sunion(*list(keys)))\n return set()\n\n @wrap_redis_error\n def get_objects_permissions(self, objects_ids, permissions=None):\n objects_perms = []\n for object_id in objects_ids:\n if permissions is not None:\n keys = ['permission:%s:%s' % (object_id, permission)\n for permission in permissions]\n else:\n keys = [key.decode('utf-8') for key in self._client.scan_iter(\n match='permission:%s:*' % object_id)]\n\n with self._client.pipeline() as pipe:\n for permission_key in keys:\n pipe.smembers(permission_key)\n\n results = pipe.execute()\n\n permissions = defaultdict(set)\n for i, result in enumerate(results):\n permission = keys[i].split(':', 2)[-1]\n permissions[permission] = self._decode_set(result)\n objects_perms.append(permissions)\n return objects_perms\n\n @wrap_redis_error\n def replace_object_permissions(self, object_id, permissions):\n keys = ['permission:%s:%s' % (object_id, permission)\n for permission in permissions]\n with self._client.pipeline() as pipe:\n for key in keys:\n pipe.delete(key)\n permission = key.split(':', 2)[-1]\n principals = permissions[permission]\n if len(principals) > 0:\n pipe.sadd(key, *principals)\n pipe.execute()\n\n @wrap_redis_error\n def delete_object_permissions(self, *object_id_list):\n with self._client.pipeline() as pipe:\n for object_id in object_id_list:\n keys = list(self._client.scan_iter(\n match='permission:%s:*' % object_id))\n if len(keys) > 0:\n pipe.delete(*keys)\n pipe.execute()\n\n\ndef load_from_config(config):\n client = create_from_config(config, prefix='permission_')\n return Permission(client)\n", "path": "kinto/core/permission/redis.py"}]}
2,018
163
gh_patches_debug_24023
rasdani/github-patches
git_diff
apache__airflow-13371
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- AirflowMacroPluginRemovedRule fails on non-python files **Apache Airflow version**: 1.10.14 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): **Environment**: - **Cloud provider or hardware configuration**: X - **OS** (e.g. from /etc/os-release): X - **Kernel** (e.g. `uname -a`): X - **Install tools**: X - **Others**: X **What happened**: The `AirflowMacroPluginRemovedRule` seems unable to process non-standard python files (e.g. `.xlsx`) and chokes out with an unhelpful error message.: ```python ========================================================================================================================================================== STATUS ========================================================================================================================================================== Check for latest versions of apache-airflow and checker...........................................................................................................................................................................................................................................................SUCCESS Traceback (most recent call last): File "/Users/madison/programs/anaconda3/envs/memphis-airflow/bin/airflow", line 37, in <module> args.func(args) File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/checker.py", line 88, in run all_problems = check_upgrade(formatter, rules) File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/checker.py", line 37, in check_upgrade rule_status = RuleStatus.from_rule(rule) File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/problem.py", line 44, in from_rule result = rule.check() File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/rules/airflow_macro_plugin_removed.py", line 52, in check problems.extend(self._check_file(file_path)) File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/rules/airflow_macro_plugin_removed.py", line 42, in _check_file for line_number, line in enumerate(file_pointer, 1): File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 16: invalid start byte ``` **What you expected to happen**: I expected the macro to skip over files it could not process/understand **How to reproduce it**: Add an `.xlsx` or other binary document to the DAGs folder and run the upgrade check. **Suggested resolution**: I think it's fine to fail out on these files (it led us to add certain items to the `.airflowignore` which should have been there anyway) but I had to modify the upgrade rule directly to tell me _which_ files were the problem. A more helpful error message here, and possibly a message prompting users to add said files to their `.airflowignore` would be ideal. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `airflow/upgrade/rules/airflow_macro_plugin_removed.py` Content: ``` 1 # Licensed to the Apache Software Foundation (ASF) under one 2 # or more contributor license agreements. See the NOTICE file 3 # distributed with this work for additional information 4 # regarding copyright ownership. The ASF licenses this file 5 # to you under the Apache License, Version 2.0 (the 6 # "License"); you may not use this file except in compliance 7 # with the License. You may obtain a copy of the License at 8 # 9 # http://www.apache.org/licenses/LICENSE-2.0 10 # 11 # Unless required by applicable law or agreed to in writing, 12 # software distributed under the License is distributed on an 13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 # KIND, either express or implied. See the License for the 15 # specific language governing permissions and limitations 16 # under the License. 17 18 from __future__ import absolute_import 19 20 from airflow import conf 21 from airflow.upgrade.rules.base_rule import BaseRule 22 from airflow.utils.dag_processing import list_py_file_paths 23 24 25 class AirflowMacroPluginRemovedRule(BaseRule): 26 27 title = "Remove airflow.AirflowMacroPlugin class" 28 29 description = "The airflow.AirflowMacroPlugin class has been removed." 30 31 MACRO_PLUGIN_CLASS = "airflow.AirflowMacroPlugin" 32 33 def _change_info(self, file_path, line_number): 34 return "{} will be removed. Affected file: {} (line {})".format( 35 self.MACRO_PLUGIN_CLASS, file_path, line_number 36 ) 37 38 def _check_file(self, file_path): 39 problems = [] 40 class_name_to_check = self.MACRO_PLUGIN_CLASS.split(".")[-1] 41 with open(file_path, "r") as file_pointer: 42 for line_number, line in enumerate(file_pointer, 1): 43 if class_name_to_check in line: 44 problems.append(self._change_info(file_path, line_number)) 45 return problems 46 47 def check(self): 48 dag_folder = conf.get("core", "dags_folder") 49 file_paths = list_py_file_paths(directory=dag_folder, include_examples=False) 50 problems = [] 51 for file_path in file_paths: 52 problems.extend(self._check_file(file_path)) 53 return problems 54 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/airflow/upgrade/rules/airflow_macro_plugin_removed.py b/airflow/upgrade/rules/airflow_macro_plugin_removed.py --- a/airflow/upgrade/rules/airflow_macro_plugin_removed.py +++ b/airflow/upgrade/rules/airflow_macro_plugin_removed.py @@ -39,9 +39,12 @@ problems = [] class_name_to_check = self.MACRO_PLUGIN_CLASS.split(".")[-1] with open(file_path, "r") as file_pointer: - for line_number, line in enumerate(file_pointer, 1): - if class_name_to_check in line: - problems.append(self._change_info(file_path, line_number)) + try: + for line_number, line in enumerate(file_pointer, 1): + if class_name_to_check in line: + problems.append(self._change_info(file_path, line_number)) + except UnicodeDecodeError: + problems.append("Unable to read python file {}".format(file_path)) return problems def check(self): @@ -49,5 +52,7 @@ file_paths = list_py_file_paths(directory=dag_folder, include_examples=False) problems = [] for file_path in file_paths: + if not file_path.endswith(".py"): + continue problems.extend(self._check_file(file_path)) return problems
{"golden_diff": "diff --git a/airflow/upgrade/rules/airflow_macro_plugin_removed.py b/airflow/upgrade/rules/airflow_macro_plugin_removed.py\n--- a/airflow/upgrade/rules/airflow_macro_plugin_removed.py\n+++ b/airflow/upgrade/rules/airflow_macro_plugin_removed.py\n@@ -39,9 +39,12 @@\n problems = []\n class_name_to_check = self.MACRO_PLUGIN_CLASS.split(\".\")[-1]\n with open(file_path, \"r\") as file_pointer:\n- for line_number, line in enumerate(file_pointer, 1):\n- if class_name_to_check in line:\n- problems.append(self._change_info(file_path, line_number))\n+ try:\n+ for line_number, line in enumerate(file_pointer, 1):\n+ if class_name_to_check in line:\n+ problems.append(self._change_info(file_path, line_number))\n+ except UnicodeDecodeError:\n+ problems.append(\"Unable to read python file {}\".format(file_path))\n return problems\n \n def check(self):\n@@ -49,5 +52,7 @@\n file_paths = list_py_file_paths(directory=dag_folder, include_examples=False)\n problems = []\n for file_path in file_paths:\n+ if not file_path.endswith(\".py\"):\n+ continue\n problems.extend(self._check_file(file_path))\n return problems\n", "issue": "AirflowMacroPluginRemovedRule fails on non-python files\n**Apache Airflow version**: 1.10.14\r\n\r\n\r\n**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):\r\n\r\n**Environment**:\r\n\r\n- **Cloud provider or hardware configuration**: X\r\n- **OS** (e.g. from /etc/os-release): X\r\n- **Kernel** (e.g. `uname -a`): X\r\n- **Install tools**: X\r\n- **Others**: X\r\n\r\n**What happened**:\r\n\r\nThe `AirflowMacroPluginRemovedRule` seems unable to process non-standard python files (e.g. `.xlsx`) and chokes out with an unhelpful error message.:\r\n\r\n```python\r\n========================================================================================================================================================== STATUS ==========================================================================================================================================================\r\n\r\nCheck for latest versions of apache-airflow and checker...........................................................................................................................................................................................................................................................SUCCESS\r\nTraceback (most recent call last):\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/bin/airflow\", line 37, in <module>\r\n args.func(args)\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/checker.py\", line 88, in run\r\n all_problems = check_upgrade(formatter, rules)\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/checker.py\", line 37, in check_upgrade\r\n rule_status = RuleStatus.from_rule(rule)\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/problem.py\", line 44, in from_rule\r\n result = rule.check()\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/rules/airflow_macro_plugin_removed.py\", line 52, in check\r\n problems.extend(self._check_file(file_path))\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/rules/airflow_macro_plugin_removed.py\", line 42, in _check_file\r\n for line_number, line in enumerate(file_pointer, 1):\r\n File 
\"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 16: invalid start byte\r\n```\r\n\r\n**What you expected to happen**:\r\n\r\nI expected the macro to skip over files it could not process/understand\r\n\r\n**How to reproduce it**:\r\n\r\nAdd an `.xlsx` or other binary document to the DAGs folder and run the upgrade check.\r\n\r\n\r\n**Suggested resolution**:\r\n\r\nI think it's fine to fail out on these files (it led us to add certain items to the `.airflowignore` which should have been there anyway) but I had to modify the upgrade rule directly to tell me _which_ files were the problem. A more helpful error message here, and possibly a message prompting users to add said files to their `.airflowignore` would be ideal.\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nfrom __future__ import absolute_import\n\nfrom airflow import conf\nfrom airflow.upgrade.rules.base_rule import BaseRule\nfrom airflow.utils.dag_processing import list_py_file_paths\n\n\nclass AirflowMacroPluginRemovedRule(BaseRule):\n\n title = \"Remove airflow.AirflowMacroPlugin class\"\n\n description = \"The airflow.AirflowMacroPlugin class has been removed.\"\n\n MACRO_PLUGIN_CLASS = \"airflow.AirflowMacroPlugin\"\n\n def _change_info(self, file_path, line_number):\n return \"{} will be removed. Affected file: {} (line {})\".format(\n self.MACRO_PLUGIN_CLASS, file_path, line_number\n )\n\n def _check_file(self, file_path):\n problems = []\n class_name_to_check = self.MACRO_PLUGIN_CLASS.split(\".\")[-1]\n with open(file_path, \"r\") as file_pointer:\n for line_number, line in enumerate(file_pointer, 1):\n if class_name_to_check in line:\n problems.append(self._change_info(file_path, line_number))\n return problems\n\n def check(self):\n dag_folder = conf.get(\"core\", \"dags_folder\")\n file_paths = list_py_file_paths(directory=dag_folder, include_examples=False)\n problems = []\n for file_path in file_paths:\n problems.extend(self._check_file(file_path))\n return problems\n", "path": "airflow/upgrade/rules/airflow_macro_plugin_removed.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nfrom __future__ import absolute_import\n\nfrom airflow import conf\nfrom airflow.upgrade.rules.base_rule import BaseRule\nfrom airflow.utils.dag_processing import list_py_file_paths\n\n\nclass AirflowMacroPluginRemovedRule(BaseRule):\n\n title = \"Remove airflow.AirflowMacroPlugin class\"\n\n description = \"The airflow.AirflowMacroPlugin class has been removed.\"\n\n MACRO_PLUGIN_CLASS = \"airflow.AirflowMacroPlugin\"\n\n def _change_info(self, file_path, line_number):\n return \"{} will be removed. Affected file: {} (line {})\".format(\n self.MACRO_PLUGIN_CLASS, file_path, line_number\n )\n\n def _check_file(self, file_path):\n problems = []\n class_name_to_check = self.MACRO_PLUGIN_CLASS.split(\".\")[-1]\n with open(file_path, \"r\") as file_pointer:\n try:\n for line_number, line in enumerate(file_pointer, 1):\n if class_name_to_check in line:\n problems.append(self._change_info(file_path, line_number))\n except UnicodeDecodeError:\n problems.append(\"Unable to read python file {}\".format(file_path))\n return problems\n\n def check(self):\n dag_folder = conf.get(\"core\", \"dags_folder\")\n file_paths = list_py_file_paths(directory=dag_folder, include_examples=False)\n problems = []\n for file_path in file_paths:\n if not file_path.endswith(\".py\"):\n continue\n problems.extend(self._check_file(file_path))\n return problems\n", "path": "airflow/upgrade/rules/airflow_macro_plugin_removed.py"}]}
1,567
293
gh_patches_debug_9399
rasdani/github-patches
git_diff
liqd__a4-meinberlin-2541
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- bplan template dates saved but not shown in Dashboard URL: https://mein.berlin.de/dashboard/projects/erweiterung-mauerpark-bebauungsplan-3-64-im-bezirk/bplan/ user: initiator expected behaviour: date and time that I have entered are still shown after saving form behaviour: dates are no longer shown after saving, no error message, I can still publish the project and date is shown correctly on project tile device & browser: Desktop, mac, chrome Version 76.0.3809.132 (Offizieller Build) (64-Bit) Importance: relevant bug, fix before next release --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `meinberlin/apps/bplan/forms.py` Content: ``` 1 from django import forms 2 3 from meinberlin.apps.extprojects.forms import ExternalProjectCreateForm 4 from meinberlin.apps.extprojects.forms import ExternalProjectForm 5 6 from . import models 7 8 9 class StatementForm(forms.ModelForm): 10 class Meta: 11 model = models.Statement 12 fields = ['name', 'email', 'statement', 13 'street_number', 'postal_code_city'] 14 15 16 class BplanProjectCreateForm(ExternalProjectCreateForm): 17 18 class Meta: 19 model = models.Bplan 20 fields = ['name', 'description', 'tile_image', 'tile_image_copyright'] 21 22 23 class BplanProjectForm(ExternalProjectForm): 24 25 class Meta: 26 model = models.Bplan 27 fields = ['name', 'identifier', 'url', 'description', 'tile_image', 28 'tile_image_copyright', 'is_archived', 'office_worker_email'] 29 required_for_project_publish = ['name', 'url', 'description', 30 'office_worker_email'] 31 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/meinberlin/apps/bplan/forms.py b/meinberlin/apps/bplan/forms.py --- a/meinberlin/apps/bplan/forms.py +++ b/meinberlin/apps/bplan/forms.py @@ -25,6 +25,7 @@ class Meta: model = models.Bplan fields = ['name', 'identifier', 'url', 'description', 'tile_image', - 'tile_image_copyright', 'is_archived', 'office_worker_email'] + 'tile_image_copyright', 'is_archived', 'office_worker_email', + 'start_date', 'end_date'] required_for_project_publish = ['name', 'url', 'description', 'office_worker_email']
{"golden_diff": "diff --git a/meinberlin/apps/bplan/forms.py b/meinberlin/apps/bplan/forms.py\n--- a/meinberlin/apps/bplan/forms.py\n+++ b/meinberlin/apps/bplan/forms.py\n@@ -25,6 +25,7 @@\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n- 'tile_image_copyright', 'is_archived', 'office_worker_email']\n+ 'tile_image_copyright', 'is_archived', 'office_worker_email',\n+ 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n", "issue": "bplan template dates saved but not shown in Dashboard\nURL: https://mein.berlin.de/dashboard/projects/erweiterung-mauerpark-bebauungsplan-3-64-im-bezirk/bplan/\r\nuser: initiator\r\nexpected behaviour: date and time that I have entered are still shown after saving form\r\nbehaviour: dates are no longer shown after saving, no error message, I can still publish the project and date is shown correctly on project tile\r\ndevice & browser: Desktop, mac, chrome Version 76.0.3809.132 (Offizieller Build) (64-Bit)\r\nImportance: relevant bug, fix before next release\n", "before_files": [{"content": "from django import forms\n\nfrom meinberlin.apps.extprojects.forms import ExternalProjectCreateForm\nfrom meinberlin.apps.extprojects.forms import ExternalProjectForm\n\nfrom . import models\n\n\nclass StatementForm(forms.ModelForm):\n class Meta:\n model = models.Statement\n fields = ['name', 'email', 'statement',\n 'street_number', 'postal_code_city']\n\n\nclass BplanProjectCreateForm(ExternalProjectCreateForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'description', 'tile_image', 'tile_image_copyright']\n\n\nclass BplanProjectForm(ExternalProjectForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n 'tile_image_copyright', 'is_archived', 'office_worker_email']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n", "path": "meinberlin/apps/bplan/forms.py"}], "after_files": [{"content": "from django import forms\n\nfrom meinberlin.apps.extprojects.forms import ExternalProjectCreateForm\nfrom meinberlin.apps.extprojects.forms import ExternalProjectForm\n\nfrom . import models\n\n\nclass StatementForm(forms.ModelForm):\n class Meta:\n model = models.Statement\n fields = ['name', 'email', 'statement',\n 'street_number', 'postal_code_city']\n\n\nclass BplanProjectCreateForm(ExternalProjectCreateForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'description', 'tile_image', 'tile_image_copyright']\n\n\nclass BplanProjectForm(ExternalProjectForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n 'tile_image_copyright', 'is_archived', 'office_worker_email',\n 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n", "path": "meinberlin/apps/bplan/forms.py"}]}
667
157
gh_patches_debug_29689
rasdani/github-patches
git_diff
readthedocs__readthedocs.org-2787
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- subproject alias 302 redirect to top-level project sharing alias' name ## Details I have set up a subproject, Subproject: cloudify-openstack-plugin-fh Alias: openstack * Project URL: http://cfy-rtd-demo.readthedocs.io/ * Build URL (if applicable): https://readthedocs.org/projects/cfy-rtd-demo/builds/5162299/ * Read the Docs username (if applicable): funkyhat ## Expected Result When navigating to http://cfy-rtd-demo.readthedocs.io/projects/openstack I expect to see (presumably via a redirect to http://cfy-rtd-demo.readthedocs.io/projects/openstack/en/sphinxify-rtd-demo/) my subproject's docs (`sphinxify-rtd-demo` is the current "active branch" for the subproject). ## Actual Result I am redirected to http://openstack.readthedocs.io/en/latest/ which is unrelated to my project: ``` < HTTP/1.1 302 Found * Server nginx/1.10.0 (Ubuntu) is not blacklisted < Server: nginx/1.10.0 (Ubuntu) < Date: Fri, 17 Mar 2017 13:00:04 GMT < Content-Type: text/html; charset=utf-8 < Transfer-Encoding: chunked < Connection: keep-alive < Vary: Accept-Language, Cookie < Location: http://openstack.readthedocs.io/en/latest/ < Content-Language: en < X-Fallback: True < X-Served: Django < X-Deity: web03 ``` I have tried rebuilding the main project, which produced no change. This seems potentially related to #1602 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `readthedocs/core/views/serve.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 """ 3 Doc serving from Python. 4 5 In production there are two modes, 6 * Serving from public symlinks in nginx (readthedocs.org & readthedocs.com) 7 * Serving from private symlinks in Python (readthedocs.com only) 8 9 In development, we have two modes: 10 * Serving from public symlinks in Python 11 * Serving from private symlinks in Python 12 13 This means we should only serve from public symlinks in dev, 14 and generally default to serving from private symlinks in Python only. 15 16 Privacy 17 ------- 18 19 These views will take into account the version privacy level. 20 21 Settings 22 -------- 23 24 PYTHON_MEDIA (False) - Set this to True to serve docs & media from Python 25 SERVE_DOCS (['private']) - The list of ['private', 'public'] docs to serve. 26 """ 27 28 from __future__ import ( 29 absolute_import, division, print_function, unicode_literals) 30 31 import logging 32 import mimetypes 33 import os 34 from functools import wraps 35 36 from django.conf import settings 37 from django.http import Http404, HttpResponse, HttpResponseRedirect 38 from django.shortcuts import render 39 from django.views.static import serve 40 41 from readthedocs.builds.models import Version 42 from readthedocs.core.permissions import AdminPermission 43 from readthedocs.core.resolver import resolve, resolve_path 44 from readthedocs.core.symlink import PrivateSymlink, PublicSymlink 45 from readthedocs.projects import constants 46 from readthedocs.projects.models import Project, ProjectRelationship 47 48 log = logging.getLogger(__name__) 49 50 51 def map_subproject_slug(view_func): 52 """ 53 A decorator that maps a ``subproject_slug`` URL param into a Project. 54 55 :raises: Http404 if the Project doesn't exist 56 57 .. warning:: Does not take into account any kind of privacy settings. 58 """ 59 @wraps(view_func) 60 def inner_view( 61 request, subproject=None, subproject_slug=None, *args, **kwargs): 62 if subproject is None and subproject_slug: 63 try: 64 subproject = Project.objects.get(slug=subproject_slug) 65 except Project.DoesNotExist: 66 try: 67 # Depends on a project passed into kwargs 68 rel = ProjectRelationship.objects.get( 69 parent=kwargs['project'], 70 alias=subproject_slug, 71 ) 72 subproject = rel.child 73 except (ProjectRelationship.DoesNotExist, KeyError): 74 raise Http404 75 return view_func(request, subproject=subproject, *args, **kwargs) 76 77 return inner_view 78 79 80 def map_project_slug(view_func): 81 """ 82 A decorator that maps a ``project_slug`` URL param into a Project. 83 84 :raises: Http404 if the Project doesn't exist 85 86 .. warning:: Does not take into account any kind of privacy settings. 87 """ 88 @wraps(view_func) 89 def inner_view(request, project=None, project_slug=None, *args, **kwargs): 90 if project is None: 91 if not project_slug: 92 project_slug = request.slug 93 try: 94 project = Project.objects.get(slug=project_slug) 95 except Project.DoesNotExist: 96 raise Http404('Project does not exist.') 97 return view_func(request, project=project, *args, **kwargs) 98 99 return inner_view 100 101 102 @map_project_slug 103 @map_subproject_slug 104 def redirect_project_slug(request, project, subproject): # pylint: disable=unused-argument 105 """Handle / -> /en/latest/ directs on subdomains.""" 106 return HttpResponseRedirect(resolve(subproject or project)) 107 108 109 @map_project_slug 110 @map_subproject_slug 111 def redirect_page_with_filename(request, project, subproject, filename): # pylint: disable=unused-argument # noqa 112 """Redirect /page/file.html to /en/latest/file.html.""" 113 return HttpResponseRedirect( 114 resolve(subproject or project, filename=filename)) 115 116 117 def _serve_401(request, project): 118 res = render(request, '401.html') 119 res.status_code = 401 120 log.error('Unauthorized access to {0} documentation'.format(project.slug)) 121 return res 122 123 124 def _serve_file(request, filename, basepath): 125 # Serve the file from the proper location 126 if settings.DEBUG or getattr(settings, 'PYTHON_MEDIA', False): 127 # Serve from Python 128 return serve(request, filename, basepath) 129 else: 130 # Serve from Nginx 131 content_type, encoding = mimetypes.guess_type( 132 os.path.join(basepath, filename)) 133 content_type = content_type or 'application/octet-stream' 134 response = HttpResponse(content_type=content_type) 135 if encoding: 136 response['Content-Encoding'] = encoding 137 try: 138 response['X-Accel-Redirect'] = os.path.join( 139 basepath[len(settings.SITE_ROOT):], 140 filename, 141 ) 142 except UnicodeEncodeError: 143 raise Http404 144 145 return response 146 147 148 @map_project_slug 149 @map_subproject_slug 150 def serve_docs( 151 request, project, subproject, lang_slug=None, version_slug=None, 152 filename=''): 153 """Exists to map existing proj, lang, version, filename views to the file format.""" 154 if not version_slug: 155 version_slug = project.get_default_version() 156 try: 157 version = project.versions.public(request.user).get(slug=version_slug) 158 except Version.DoesNotExist: 159 # Properly raise a 404 if the version doesn't exist & a 401 if it does 160 if project.versions.filter(slug=version_slug).exists(): 161 return _serve_401(request, project) 162 raise Http404('Version does not exist.') 163 filename = resolve_path( 164 subproject or project, # Resolve the subproject if it exists 165 version_slug=version_slug, 166 language=lang_slug, 167 filename=filename, 168 subdomain=True, # subdomain will make it a "full" path without a URL prefix 169 ) 170 if (version.privacy_level == constants.PRIVATE and 171 not AdminPermission.is_member(user=request.user, obj=project)): 172 return _serve_401(request, project) 173 return _serve_symlink_docs( 174 request, 175 filename=filename, 176 project=project, 177 privacy_level=version.privacy_level, 178 ) 179 180 181 @map_project_slug 182 def _serve_symlink_docs(request, project, privacy_level, filename=''): 183 """Serve a file by symlink, or a 404 if not found.""" 184 # Handle indexes 185 if filename == '' or filename[-1] == '/': 186 filename += 'index.html' 187 188 # This breaks path joining, by ignoring the root when given an "absolute" path 189 if filename[0] == '/': 190 filename = filename[1:] 191 192 log.info('Serving %s for %s', filename, project) 193 194 files_tried = [] 195 196 serve_docs = getattr(settings, 'SERVE_DOCS', [constants.PRIVATE]) 197 198 if (settings.DEBUG or constants.PUBLIC in serve_docs) and privacy_level != constants.PRIVATE: # yapf: disable # noqa 199 public_symlink = PublicSymlink(project) 200 basepath = public_symlink.project_root 201 if os.path.exists(os.path.join(basepath, filename)): 202 return _serve_file(request, filename, basepath) 203 else: 204 files_tried.append(os.path.join(basepath, filename)) 205 206 if (settings.DEBUG or constants.PRIVATE in serve_docs) and privacy_level == constants.PRIVATE: # yapf: disable # noqa 207 # Handle private 208 private_symlink = PrivateSymlink(project) 209 basepath = private_symlink.project_root 210 211 if os.path.exists(os.path.join(basepath, filename)): 212 return _serve_file(request, filename, basepath) 213 else: 214 files_tried.append(os.path.join(basepath, filename)) 215 216 raise Http404( 217 'File not found. Tried these files: %s' % ','.join(files_tried)) 218 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/readthedocs/core/views/serve.py b/readthedocs/core/views/serve.py --- a/readthedocs/core/views/serve.py +++ b/readthedocs/core/views/serve.py @@ -34,7 +34,8 @@ from functools import wraps from django.conf import settings -from django.http import Http404, HttpResponse, HttpResponseRedirect +from django.http import HttpResponse, HttpResponseRedirect, Http404 +from django.shortcuts import get_object_or_404 from django.shortcuts import render from django.views.static import serve @@ -60,18 +61,17 @@ def inner_view( request, subproject=None, subproject_slug=None, *args, **kwargs): if subproject is None and subproject_slug: + # Try to fetch by subproject alias first, otherwise we might end up + # redirected to an unrelated project. try: - subproject = Project.objects.get(slug=subproject_slug) - except Project.DoesNotExist: - try: - # Depends on a project passed into kwargs - rel = ProjectRelationship.objects.get( - parent=kwargs['project'], - alias=subproject_slug, - ) - subproject = rel.child - except (ProjectRelationship.DoesNotExist, KeyError): - raise Http404 + # Depends on a project passed into kwargs + rel = ProjectRelationship.objects.get( + parent=kwargs['project'], + alias=subproject_slug, + ) + subproject = rel.child + except (ProjectRelationship.DoesNotExist, KeyError): + get_object_or_404(Project, slug=subproject_slug) return view_func(request, subproject=subproject, *args, **kwargs) return inner_view
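Editor's note: a framework-free restatement of the lookup order this golden diff establishes. The dict-based stand-ins below are assumptions for illustration only — the real code uses the Django `Project`/`ProjectRelationship` models shown above — but the control flow mirrors the patch: the alias registered on the parent wins, and the global slug table is consulted only to decide whether to raise 404, never to redirect to an unrelated top-level project.

```python
class NotFound(Exception):
    """Stands in for django.http.Http404 in this sketch."""


def resolve_subproject(aliases, all_slugs, parent, slug):
    # 1. Prefer the alias registered on the parent project (the patched order).
    child = aliases.get((parent, slug))
    if child is not None:
        return child
    # 2. Mirror get_object_or_404(Project, slug=...): unknown slug -> 404;
    #    a known but unrelated project is deliberately *not* returned here.
    if slug not in all_slugs:
        raise NotFound(slug)
    return None


# Scenario from the issue: the alias "openstack" must resolve to the
# subproject, not to the unrelated top-level "openstack" project.
aliases = {("cfy-rtd-demo", "openstack"): "cloudify-openstack-plugin-fh"}
all_slugs = {"cfy-rtd-demo", "openstack", "cloudify-openstack-plugin-fh"}
assert resolve_subproject(aliases, all_slugs, "cfy-rtd-demo", "openstack") == (
    "cloudify-openstack-plugin-fh"
)
```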
{"golden_diff": "diff --git a/readthedocs/core/views/serve.py b/readthedocs/core/views/serve.py\n--- a/readthedocs/core/views/serve.py\n+++ b/readthedocs/core/views/serve.py\n@@ -34,7 +34,8 @@\n from functools import wraps\n \n from django.conf import settings\n-from django.http import Http404, HttpResponse, HttpResponseRedirect\n+from django.http import HttpResponse, HttpResponseRedirect, Http404\n+from django.shortcuts import get_object_or_404\n from django.shortcuts import render\n from django.views.static import serve\n \n@@ -60,18 +61,17 @@\n def inner_view(\n request, subproject=None, subproject_slug=None, *args, **kwargs):\n if subproject is None and subproject_slug:\n+ # Try to fetch by subproject alias first, otherwise we might end up\n+ # redirected to an unrelated project.\n try:\n- subproject = Project.objects.get(slug=subproject_slug)\n- except Project.DoesNotExist:\n- try:\n- # Depends on a project passed into kwargs\n- rel = ProjectRelationship.objects.get(\n- parent=kwargs['project'],\n- alias=subproject_slug,\n- )\n- subproject = rel.child\n- except (ProjectRelationship.DoesNotExist, KeyError):\n- raise Http404\n+ # Depends on a project passed into kwargs\n+ rel = ProjectRelationship.objects.get(\n+ parent=kwargs['project'],\n+ alias=subproject_slug,\n+ )\n+ subproject = rel.child\n+ except (ProjectRelationship.DoesNotExist, KeyError):\n+ get_object_or_404(Project, slug=subproject_slug)\n return view_func(request, subproject=subproject, *args, **kwargs)\n \n return inner_view\n", "issue": "subproject alias 302 redirect to top-level project sharing alias' name\n## Details\r\nI have set up a subproject,\r\nSubproject: cloudify-openstack-plugin-fh\r\nAlias: openstack\r\n\r\n* Project URL: http://cfy-rtd-demo.readthedocs.io/\r\n* Build URL (if applicable): https://readthedocs.org/projects/cfy-rtd-demo/builds/5162299/\r\n* Read the Docs username (if applicable): funkyhat\r\n\r\n## Expected Result\r\nWhen navigating to http://cfy-rtd-demo.readthedocs.io/projects/openstack\r\n\r\nI expect to see (presumably via a redirect to http://cfy-rtd-demo.readthedocs.io/projects/openstack/en/sphinxify-rtd-demo/) my subproject's docs (`sphinxify-rtd-demo` is the current \"active branch\" for the subproject).\r\n\r\n## Actual Result\r\nI am redirected to http://openstack.readthedocs.io/en/latest/ which is unrelated to my project:\r\n```\r\n< HTTP/1.1 302 Found\r\n* Server nginx/1.10.0 (Ubuntu) is not blacklisted\r\n< Server: nginx/1.10.0 (Ubuntu)\r\n< Date: Fri, 17 Mar 2017 13:00:04 GMT\r\n< Content-Type: text/html; charset=utf-8\r\n< Transfer-Encoding: chunked\r\n< Connection: keep-alive\r\n< Vary: Accept-Language, Cookie\r\n< Location: http://openstack.readthedocs.io/en/latest/\r\n< Content-Language: en\r\n< X-Fallback: True\r\n< X-Served: Django\r\n< X-Deity: web03\r\n```\r\n\r\nI have tried rebuilding the main project, which produced no change.\r\n\r\nThis seems potentially related to #1602 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDoc serving from Python.\n\nIn production there are two modes,\n* Serving from public symlinks in nginx (readthedocs.org & readthedocs.com)\n* Serving from private symlinks in Python (readthedocs.com only)\n\nIn development, we have two modes:\n* Serving from public symlinks in Python\n* Serving from private symlinks in Python\n\nThis means we should only serve from public symlinks in dev,\nand generally default to serving from private symlinks in Python only.\n\nPrivacy\n-------\n\nThese views will take into account the version 
privacy level.\n\nSettings\n--------\n\nPYTHON_MEDIA (False) - Set this to True to serve docs & media from Python\nSERVE_DOCS (['private']) - The list of ['private', 'public'] docs to serve.\n\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport logging\nimport mimetypes\nimport os\nfrom functools import wraps\n\nfrom django.conf import settings\nfrom django.http import Http404, HttpResponse, HttpResponseRedirect\nfrom django.shortcuts import render\nfrom django.views.static import serve\n\nfrom readthedocs.builds.models import Version\nfrom readthedocs.core.permissions import AdminPermission\nfrom readthedocs.core.resolver import resolve, resolve_path\nfrom readthedocs.core.symlink import PrivateSymlink, PublicSymlink\nfrom readthedocs.projects import constants\nfrom readthedocs.projects.models import Project, ProjectRelationship\n\nlog = logging.getLogger(__name__)\n\n\ndef map_subproject_slug(view_func):\n \"\"\"\n A decorator that maps a ``subproject_slug`` URL param into a Project.\n\n :raises: Http404 if the Project doesn't exist\n\n .. warning:: Does not take into account any kind of privacy settings.\n \"\"\"\n @wraps(view_func)\n def inner_view(\n request, subproject=None, subproject_slug=None, *args, **kwargs):\n if subproject is None and subproject_slug:\n try:\n subproject = Project.objects.get(slug=subproject_slug)\n except Project.DoesNotExist:\n try:\n # Depends on a project passed into kwargs\n rel = ProjectRelationship.objects.get(\n parent=kwargs['project'],\n alias=subproject_slug,\n )\n subproject = rel.child\n except (ProjectRelationship.DoesNotExist, KeyError):\n raise Http404\n return view_func(request, subproject=subproject, *args, **kwargs)\n\n return inner_view\n\n\ndef map_project_slug(view_func):\n \"\"\"\n A decorator that maps a ``project_slug`` URL param into a Project.\n\n :raises: Http404 if the Project doesn't exist\n\n .. 
warning:: Does not take into account any kind of privacy settings.\n \"\"\"\n @wraps(view_func)\n def inner_view(request, project=None, project_slug=None, *args, **kwargs):\n if project is None:\n if not project_slug:\n project_slug = request.slug\n try:\n project = Project.objects.get(slug=project_slug)\n except Project.DoesNotExist:\n raise Http404('Project does not exist.')\n return view_func(request, project=project, *args, **kwargs)\n\n return inner_view\n\n\n@map_project_slug\n@map_subproject_slug\ndef redirect_project_slug(request, project, subproject): # pylint: disable=unused-argument\n \"\"\"Handle / -> /en/latest/ directs on subdomains.\"\"\"\n return HttpResponseRedirect(resolve(subproject or project))\n\n\n@map_project_slug\n@map_subproject_slug\ndef redirect_page_with_filename(request, project, subproject, filename): # pylint: disable=unused-argument # noqa\n \"\"\"Redirect /page/file.html to /en/latest/file.html.\"\"\"\n return HttpResponseRedirect(\n resolve(subproject or project, filename=filename))\n\n\ndef _serve_401(request, project):\n res = render(request, '401.html')\n res.status_code = 401\n log.error('Unauthorized access to {0} documentation'.format(project.slug))\n return res\n\n\ndef _serve_file(request, filename, basepath):\n # Serve the file from the proper location\n if settings.DEBUG or getattr(settings, 'PYTHON_MEDIA', False):\n # Serve from Python\n return serve(request, filename, basepath)\n else:\n # Serve from Nginx\n content_type, encoding = mimetypes.guess_type(\n os.path.join(basepath, filename))\n content_type = content_type or 'application/octet-stream'\n response = HttpResponse(content_type=content_type)\n if encoding:\n response['Content-Encoding'] = encoding\n try:\n response['X-Accel-Redirect'] = os.path.join(\n basepath[len(settings.SITE_ROOT):],\n filename,\n )\n except UnicodeEncodeError:\n raise Http404\n\n return response\n\n\n@map_project_slug\n@map_subproject_slug\ndef serve_docs(\n request, project, subproject, lang_slug=None, version_slug=None,\n filename=''):\n \"\"\"Exists to map existing proj, lang, version, filename views to the file format.\"\"\"\n if not version_slug:\n version_slug = project.get_default_version()\n try:\n version = project.versions.public(request.user).get(slug=version_slug)\n except Version.DoesNotExist:\n # Properly raise a 404 if the version doesn't exist & a 401 if it does\n if project.versions.filter(slug=version_slug).exists():\n return _serve_401(request, project)\n raise Http404('Version does not exist.')\n filename = resolve_path(\n subproject or project, # Resolve the subproject if it exists\n version_slug=version_slug,\n language=lang_slug,\n filename=filename,\n subdomain=True, # subdomain will make it a \"full\" path without a URL prefix\n )\n if (version.privacy_level == constants.PRIVATE and\n not AdminPermission.is_member(user=request.user, obj=project)):\n return _serve_401(request, project)\n return _serve_symlink_docs(\n request,\n filename=filename,\n project=project,\n privacy_level=version.privacy_level,\n )\n\n\n@map_project_slug\ndef _serve_symlink_docs(request, project, privacy_level, filename=''):\n \"\"\"Serve a file by symlink, or a 404 if not found.\"\"\"\n # Handle indexes\n if filename == '' or filename[-1] == '/':\n filename += 'index.html'\n\n # This breaks path joining, by ignoring the root when given an \"absolute\" path\n if filename[0] == '/':\n filename = filename[1:]\n\n log.info('Serving %s for %s', filename, project)\n\n files_tried = []\n\n serve_docs = getattr(settings, 
'SERVE_DOCS', [constants.PRIVATE])\n\n if (settings.DEBUG or constants.PUBLIC in serve_docs) and privacy_level != constants.PRIVATE: # yapf: disable # noqa\n public_symlink = PublicSymlink(project)\n basepath = public_symlink.project_root\n if os.path.exists(os.path.join(basepath, filename)):\n return _serve_file(request, filename, basepath)\n else:\n files_tried.append(os.path.join(basepath, filename))\n\n if (settings.DEBUG or constants.PRIVATE in serve_docs) and privacy_level == constants.PRIVATE: # yapf: disable # noqa\n # Handle private\n private_symlink = PrivateSymlink(project)\n basepath = private_symlink.project_root\n\n if os.path.exists(os.path.join(basepath, filename)):\n return _serve_file(request, filename, basepath)\n else:\n files_tried.append(os.path.join(basepath, filename))\n\n raise Http404(\n 'File not found. Tried these files: %s' % ','.join(files_tried))\n", "path": "readthedocs/core/views/serve.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDoc serving from Python.\n\nIn production there are two modes,\n* Serving from public symlinks in nginx (readthedocs.org & readthedocs.com)\n* Serving from private symlinks in Python (readthedocs.com only)\n\nIn development, we have two modes:\n* Serving from public symlinks in Python\n* Serving from private symlinks in Python\n\nThis means we should only serve from public symlinks in dev,\nand generally default to serving from private symlinks in Python only.\n\nPrivacy\n-------\n\nThese views will take into account the version privacy level.\n\nSettings\n--------\n\nPYTHON_MEDIA (False) - Set this to True to serve docs & media from Python\nSERVE_DOCS (['private']) - The list of ['private', 'public'] docs to serve.\n\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport logging\nimport mimetypes\nimport os\nfrom functools import wraps\n\nfrom django.conf import settings\nfrom django.http import HttpResponse, HttpResponseRedirect, Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.shortcuts import render\nfrom django.views.static import serve\n\nfrom readthedocs.builds.models import Version\nfrom readthedocs.core.permissions import AdminPermission\nfrom readthedocs.core.resolver import resolve, resolve_path\nfrom readthedocs.core.symlink import PrivateSymlink, PublicSymlink\nfrom readthedocs.projects import constants\nfrom readthedocs.projects.models import Project, ProjectRelationship\n\nlog = logging.getLogger(__name__)\n\n\ndef map_subproject_slug(view_func):\n \"\"\"\n A decorator that maps a ``subproject_slug`` URL param into a Project.\n\n :raises: Http404 if the Project doesn't exist\n\n .. 
warning:: Does not take into account any kind of privacy settings.\n \"\"\"\n @wraps(view_func)\n def inner_view(\n request, subproject=None, subproject_slug=None, *args, **kwargs):\n if subproject is None and subproject_slug:\n # Try to fetch by subproject alias first, otherwise we might end up\n # redirected to an unrelated project.\n try:\n # Depends on a project passed into kwargs\n rel = ProjectRelationship.objects.get(\n parent=kwargs['project'],\n alias=subproject_slug,\n )\n subproject = rel.child\n except (ProjectRelationship.DoesNotExist, KeyError):\n get_object_or_404(Project, slug=subproject_slug)\n return view_func(request, subproject=subproject, *args, **kwargs)\n\n return inner_view\n\n\ndef map_project_slug(view_func):\n \"\"\"\n A decorator that maps a ``project_slug`` URL param into a Project.\n\n :raises: Http404 if the Project doesn't exist\n\n .. warning:: Does not take into account any kind of privacy settings.\n \"\"\"\n @wraps(view_func)\n def inner_view(request, project=None, project_slug=None, *args, **kwargs):\n if project is None:\n if not project_slug:\n project_slug = request.slug\n try:\n project = Project.objects.get(slug=project_slug)\n except Project.DoesNotExist:\n raise Http404('Project does not exist.')\n return view_func(request, project=project, *args, **kwargs)\n\n return inner_view\n\n\n@map_project_slug\n@map_subproject_slug\ndef redirect_project_slug(request, project, subproject): # pylint: disable=unused-argument\n \"\"\"Handle / -> /en/latest/ directs on subdomains.\"\"\"\n return HttpResponseRedirect(resolve(subproject or project))\n\n\n@map_project_slug\n@map_subproject_slug\ndef redirect_page_with_filename(request, project, subproject, filename): # pylint: disable=unused-argument # noqa\n \"\"\"Redirect /page/file.html to /en/latest/file.html.\"\"\"\n return HttpResponseRedirect(\n resolve(subproject or project, filename=filename))\n\n\ndef _serve_401(request, project):\n res = render(request, '401.html')\n res.status_code = 401\n log.error('Unauthorized access to {0} documentation'.format(project.slug))\n return res\n\n\ndef _serve_file(request, filename, basepath):\n # Serve the file from the proper location\n if settings.DEBUG or getattr(settings, 'PYTHON_MEDIA', False):\n # Serve from Python\n return serve(request, filename, basepath)\n else:\n # Serve from Nginx\n content_type, encoding = mimetypes.guess_type(\n os.path.join(basepath, filename))\n content_type = content_type or 'application/octet-stream'\n response = HttpResponse(content_type=content_type)\n if encoding:\n response['Content-Encoding'] = encoding\n try:\n response['X-Accel-Redirect'] = os.path.join(\n basepath[len(settings.SITE_ROOT):],\n filename,\n )\n except UnicodeEncodeError:\n raise Http404\n\n return response\n\n\n@map_project_slug\n@map_subproject_slug\ndef serve_docs(\n request, project, subproject, lang_slug=None, version_slug=None,\n filename=''):\n \"\"\"Exists to map existing proj, lang, version, filename views to the file format.\"\"\"\n if not version_slug:\n version_slug = project.get_default_version()\n try:\n version = project.versions.public(request.user).get(slug=version_slug)\n except Version.DoesNotExist:\n # Properly raise a 404 if the version doesn't exist & a 401 if it does\n if project.versions.filter(slug=version_slug).exists():\n return _serve_401(request, project)\n raise Http404('Version does not exist.')\n filename = resolve_path(\n subproject or project, # Resolve the subproject if it exists\n version_slug=version_slug,\n 
language=lang_slug,\n filename=filename,\n subdomain=True, # subdomain will make it a \"full\" path without a URL prefix\n )\n if (version.privacy_level == constants.PRIVATE and\n not AdminPermission.is_member(user=request.user, obj=project)):\n return _serve_401(request, project)\n return _serve_symlink_docs(\n request,\n filename=filename,\n project=project,\n privacy_level=version.privacy_level,\n )\n\n\n@map_project_slug\ndef _serve_symlink_docs(request, project, privacy_level, filename=''):\n \"\"\"Serve a file by symlink, or a 404 if not found.\"\"\"\n # Handle indexes\n if filename == '' or filename[-1] == '/':\n filename += 'index.html'\n\n # This breaks path joining, by ignoring the root when given an \"absolute\" path\n if filename[0] == '/':\n filename = filename[1:]\n\n log.info('Serving %s for %s', filename, project)\n\n files_tried = []\n\n serve_docs = getattr(settings, 'SERVE_DOCS', [constants.PRIVATE])\n\n if (settings.DEBUG or constants.PUBLIC in serve_docs) and privacy_level != constants.PRIVATE: # yapf: disable # noqa\n public_symlink = PublicSymlink(project)\n basepath = public_symlink.project_root\n if os.path.exists(os.path.join(basepath, filename)):\n return _serve_file(request, filename, basepath)\n else:\n files_tried.append(os.path.join(basepath, filename))\n\n if (settings.DEBUG or constants.PRIVATE in serve_docs) and privacy_level == constants.PRIVATE: # yapf: disable # noqa\n # Handle private\n private_symlink = PrivateSymlink(project)\n basepath = private_symlink.project_root\n\n if os.path.exists(os.path.join(basepath, filename)):\n return _serve_file(request, filename, basepath)\n else:\n files_tried.append(os.path.join(basepath, filename))\n\n raise Http404(\n 'File not found. Tried these files: %s' % ','.join(files_tried))\n", "path": "readthedocs/core/views/serve.py"}]}
num_tokens: 2,890
num_tokens_diff: 380
problem_id: gh_patches_debug_62942
source: rasdani/github-patches
task_type: git_diff
in_source_id: great-expectations__great_expectations-3803
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Use cleaner solution for non-truncating division in python 2 Prefer `from __future__ import division` to `1.*x/y` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `great_expectations/core/usage_statistics/anonymizers/anonymizer.py` Content: ``` 1 import logging 2 from hashlib import md5 3 from typing import Optional 4 5 from great_expectations.util import load_class 6 7 logger = logging.getLogger(__name__) 8 9 10 class Anonymizer: 11 """Anonymize string names in an optionally-consistent way.""" 12 13 def __init__(self, salt=None): 14 if salt is not None and not isinstance(salt, str): 15 logger.error("invalid salt: must provide a string. Setting a random salt.") 16 salt = None 17 if salt is None: 18 import secrets 19 20 self._salt = secrets.token_hex(8) 21 else: 22 self._salt = salt 23 24 @property 25 def salt(self): 26 return self._salt 27 28 def anonymize(self, string_): 29 if string_ is None: 30 return None 31 32 if not isinstance(string_, str): 33 raise TypeError( 34 f"""The type of the "string_" argument must be a string (Python "str"). The type given is 35 "{str(type(string_))}", which is illegal. 36 """ 37 ) 38 salted = self._salt + string_ 39 return md5(salted.encode("utf-8")).hexdigest() 40 41 def anonymize_object_info( 42 self, 43 anonymized_info_dict, 44 ge_classes, 45 object_=None, 46 object_class=None, 47 object_config=None, 48 runtime_environment=None, 49 ) -> dict: 50 assert ( 51 object_ or object_class or object_config 52 ), "Must pass either object_ or object_class or object_config." 53 54 if runtime_environment is None: 55 runtime_environment = {} 56 57 object_class_name: Optional[str] = None 58 try: 59 if object_class is None and object_ is not None: 60 object_class = object_.__class__ 61 elif object_class is None and object_config is not None: 62 object_class_name = object_config.get("class_name") 63 object_module_name = object_config.get( 64 "module_name" 65 ) or runtime_environment.get("module_name") 66 object_class = load_class(object_class_name, object_module_name) 67 object_class_name = object_class.__name__ 68 69 for ge_class in ge_classes: 70 if issubclass(object_class, ge_class): 71 anonymized_info_dict["parent_class"] = ge_class.__name__ 72 if not object_class == ge_class: 73 anonymized_info_dict["anonymized_class"] = self.anonymize( 74 object_class_name 75 ) 76 break 77 78 if not anonymized_info_dict.get("parent_class"): 79 anonymized_info_dict["parent_class"] = "__not_recognized__" 80 anonymized_info_dict["anonymized_class"] = self.anonymize( 81 object_class_name 82 ) 83 except AttributeError: 84 anonymized_info_dict["parent_class"] = "__not_recognized__" 85 anonymized_info_dict["anonymized_class"] = self.anonymize(object_class_name) 86 87 return anonymized_info_dict 88 89 @staticmethod 90 def _is_parent_class_recognized( 91 classes_to_check, 92 object_=None, 93 object_class=None, 94 object_config=None, 95 ) -> Optional[str]: 96 """ 97 Check if the parent class is a subclass of any core GE class. 98 This private method is intended to be used by anonymizers in a public `is_parent_class_recognized()` method. These anonymizers define and provide the core GE classes_to_check. 
99 Returns: 100 The name of the parent class found, or None if no parent class was found 101 """ 102 assert ( 103 object_ or object_class or object_config 104 ), "Must pass either object_ or object_class or object_config." 105 try: 106 if object_class is None and object_ is not None: 107 object_class = object_.__class__ 108 elif object_class is None and object_config is not None: 109 object_class_name = object_config.get("class_name") 110 object_module_name = object_config.get("module_name") 111 object_class = load_class(object_class_name, object_module_name) 112 113 for class_to_check in classes_to_check: 114 if issubclass(object_class, class_to_check): 115 return class_to_check.__name__ 116 117 return None 118 119 except AttributeError: 120 return None 121 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/great_expectations/core/usage_statistics/anonymizers/anonymizer.py b/great_expectations/core/usage_statistics/anonymizers/anonymizer.py --- a/great_expectations/core/usage_statistics/anonymizers/anonymizer.py +++ b/great_expectations/core/usage_statistics/anonymizers/anonymizer.py @@ -35,6 +35,7 @@ "{str(type(string_))}", which is illegal. """ ) + salted = self._salt + string_ return md5(salted.encode("utf-8")).hexdigest()
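Editor's note: the division idiom the issue text contrasts, made concrete. Under Python 2, `/` between two integers truncates unless `from __future__ import division` is in effect, which is why `1.*x/y` was used as a coercion workaround. The snippet below (variable names are ours) runs identically on Python 2 with the import and on Python 3, where true division is already the default.

```python
from __future__ import division  # no-op on Python 3

x, y = 7, 2
assert x / y == 3.5   # true division; no need for the 1.*x/y coercion
assert x // y == 3    # explicit floor division when truncation is intended
```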
{"golden_diff": "diff --git a/great_expectations/core/usage_statistics/anonymizers/anonymizer.py b/great_expectations/core/usage_statistics/anonymizers/anonymizer.py\n--- a/great_expectations/core/usage_statistics/anonymizers/anonymizer.py\n+++ b/great_expectations/core/usage_statistics/anonymizers/anonymizer.py\n@@ -35,6 +35,7 @@\n \"{str(type(string_))}\", which is illegal.\n \"\"\"\n )\n+\n salted = self._salt + string_\n return md5(salted.encode(\"utf-8\")).hexdigest()\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import logging\nfrom hashlib import md5\nfrom typing import Optional\n\nfrom great_expectations.util import load_class\n\nlogger = logging.getLogger(__name__)\n\n\nclass Anonymizer:\n \"\"\"Anonymize string names in an optionally-consistent way.\"\"\"\n\n def __init__(self, salt=None):\n if salt is not None and not isinstance(salt, str):\n logger.error(\"invalid salt: must provide a string. Setting a random salt.\")\n salt = None\n if salt is None:\n import secrets\n\n self._salt = secrets.token_hex(8)\n else:\n self._salt = salt\n\n @property\n def salt(self):\n return self._salt\n\n def anonymize(self, string_):\n if string_ is None:\n return None\n\n if not isinstance(string_, str):\n raise TypeError(\n f\"\"\"The type of the \"string_\" argument must be a string (Python \"str\"). The type given is\n\"{str(type(string_))}\", which is illegal.\n \"\"\"\n )\n salted = self._salt + string_\n return md5(salted.encode(\"utf-8\")).hexdigest()\n\n def anonymize_object_info(\n self,\n anonymized_info_dict,\n ge_classes,\n object_=None,\n object_class=None,\n object_config=None,\n runtime_environment=None,\n ) -> dict:\n assert (\n object_ or object_class or object_config\n ), \"Must pass either object_ or object_class or object_config.\"\n\n if runtime_environment is None:\n runtime_environment = {}\n\n object_class_name: Optional[str] = None\n try:\n if object_class is None and object_ is not None:\n object_class = object_.__class__\n elif object_class is None and object_config is not None:\n object_class_name = object_config.get(\"class_name\")\n object_module_name = object_config.get(\n \"module_name\"\n ) or runtime_environment.get(\"module_name\")\n object_class = load_class(object_class_name, object_module_name)\n object_class_name = object_class.__name__\n\n for ge_class in ge_classes:\n if issubclass(object_class, ge_class):\n anonymized_info_dict[\"parent_class\"] = ge_class.__name__\n if not object_class == ge_class:\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(\n object_class_name\n )\n break\n\n if not anonymized_info_dict.get(\"parent_class\"):\n anonymized_info_dict[\"parent_class\"] = \"__not_recognized__\"\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(\n object_class_name\n )\n except AttributeError:\n anonymized_info_dict[\"parent_class\"] = \"__not_recognized__\"\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(object_class_name)\n\n return anonymized_info_dict\n\n @staticmethod\n def _is_parent_class_recognized(\n classes_to_check,\n object_=None,\n object_class=None,\n object_config=None,\n ) -> Optional[str]:\n \"\"\"\n Check if the parent class is a subclass of any core GE class.\n This private method is intended to be used by anonymizers in a public `is_parent_class_recognized()` method. 
These anonymizers define and provide the core GE classes_to_check.\n Returns:\n The name of the parent class found, or None if no parent class was found\n \"\"\"\n assert (\n object_ or object_class or object_config\n ), \"Must pass either object_ or object_class or object_config.\"\n try:\n if object_class is None and object_ is not None:\n object_class = object_.__class__\n elif object_class is None and object_config is not None:\n object_class_name = object_config.get(\"class_name\")\n object_module_name = object_config.get(\"module_name\")\n object_class = load_class(object_class_name, object_module_name)\n\n for class_to_check in classes_to_check:\n if issubclass(object_class, class_to_check):\n return class_to_check.__name__\n\n return None\n\n except AttributeError:\n return None\n", "path": "great_expectations/core/usage_statistics/anonymizers/anonymizer.py"}], "after_files": [{"content": "import logging\nfrom hashlib import md5\nfrom typing import Optional\n\nfrom great_expectations.util import load_class\n\nlogger = logging.getLogger(__name__)\n\n\nclass Anonymizer:\n \"\"\"Anonymize string names in an optionally-consistent way.\"\"\"\n\n def __init__(self, salt=None):\n if salt is not None and not isinstance(salt, str):\n logger.error(\"invalid salt: must provide a string. Setting a random salt.\")\n salt = None\n if salt is None:\n import secrets\n\n self._salt = secrets.token_hex(8)\n else:\n self._salt = salt\n\n @property\n def salt(self):\n return self._salt\n\n def anonymize(self, string_):\n if string_ is None:\n return None\n\n if not isinstance(string_, str):\n raise TypeError(\n f\"\"\"The type of the \"string_\" argument must be a string (Python \"str\"). The type given is\n\"{str(type(string_))}\", which is illegal.\n \"\"\"\n )\n\n salted = self._salt + string_\n return md5(salted.encode(\"utf-8\")).hexdigest()\n\n def anonymize_object_info(\n self,\n anonymized_info_dict,\n ge_classes,\n object_=None,\n object_class=None,\n object_config=None,\n runtime_environment=None,\n ) -> dict:\n assert (\n object_ or object_class or object_config\n ), \"Must pass either object_ or object_class or object_config.\"\n\n if runtime_environment is None:\n runtime_environment = {}\n\n object_class_name: Optional[str] = None\n try:\n if object_class is None and object_ is not None:\n object_class = object_.__class__\n elif object_class is None and object_config is not None:\n object_class_name = object_config.get(\"class_name\")\n object_module_name = object_config.get(\n \"module_name\"\n ) or runtime_environment.get(\"module_name\")\n object_class = load_class(object_class_name, object_module_name)\n object_class_name = object_class.__name__\n\n for ge_class in ge_classes:\n if issubclass(object_class, ge_class):\n anonymized_info_dict[\"parent_class\"] = ge_class.__name__\n if not object_class == ge_class:\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(\n object_class_name\n )\n break\n\n if not anonymized_info_dict.get(\"parent_class\"):\n anonymized_info_dict[\"parent_class\"] = \"__not_recognized__\"\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(\n object_class_name\n )\n except AttributeError:\n anonymized_info_dict[\"parent_class\"] = \"__not_recognized__\"\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(object_class_name)\n\n return anonymized_info_dict\n\n @staticmethod\n def _is_parent_class_recognized(\n classes_to_check,\n object_=None,\n object_class=None,\n object_config=None,\n ) -> Optional[str]:\n \"\"\"\n Check if the 
parent class is a subclass of any core GE class.\n This private method is intended to be used by anonymizers in a public `is_parent_class_recognized()` method. These anonymizers define and provide the core GE classes_to_check.\n Returns:\n The name of the parent class found, or None if no parent class was found\n \"\"\"\n assert (\n object_ or object_class or object_config\n ), \"Must pass either object_ or object_class or object_config.\"\n try:\n if object_class is None and object_ is not None:\n object_class = object_.__class__\n elif object_class is None and object_config is not None:\n object_class_name = object_config.get(\"class_name\")\n object_module_name = object_config.get(\"module_name\")\n object_class = load_class(object_class_name, object_module_name)\n\n for class_to_check in classes_to_check:\n if issubclass(object_class, class_to_check):\n return class_to_check.__name__\n\n return None\n\n except AttributeError:\n return None\n", "path": "great_expectations/core/usage_statistics/anonymizers/anonymizer.py"}]}
num_tokens: 1,438
num_tokens_diff: 124
problem_id: gh_patches_debug_32257
source: rasdani/github-patches
task_type: git_diff
in_source_id: aio-libs__aiohttp-3694
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- sdist build gets crashed under pip>=19 in dev mode ## Long story short We have `cython` as an optional dependency. That's why we install it as a pre-requisite in the CI, as a separate step. New pip creates a separate build virtualenv which doesn't have access to the place with cython installed which causes it to crash. ## Expected behaviour It succeeds ## Actual behaviour It tracebacks ## Steps to reproduce * https://travis-ci.com/aio-libs/aiohttp/jobs/172249543#L198-L219 * https://ci.appveyor.com/project/aio-libs/aiohttp/build/job/uppd0qqw2sbisqtn#L46 ## Your environment Travis CI (doesn't really matter actually) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 import codecs 2 import pathlib 3 import re 4 import sys 5 from distutils.command.build_ext import build_ext 6 from distutils.errors import (CCompilerError, DistutilsExecError, 7 DistutilsPlatformError) 8 9 from setuptools import Extension, setup 10 11 12 if sys.version_info < (3, 5, 3): 13 raise RuntimeError("aiohttp 3.x requires Python 3.5.3+") 14 15 here = pathlib.Path(__file__).parent 16 17 try: 18 from Cython.Build import cythonize 19 USE_CYTHON = True 20 except ImportError: 21 USE_CYTHON = False 22 23 if (here / '.git').exists() and not USE_CYTHON: 24 print("Install cython when building from git clone", file=sys.stderr) 25 print("Hint:", file=sys.stderr) 26 print(" pip install cython", file=sys.stderr) 27 sys.exit(1) 28 29 30 if (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'): 31 print("Install submodules when building from git clone", file=sys.stderr) 32 print("Hint:", file=sys.stderr) 33 print(" git submodule update --init", file=sys.stderr) 34 sys.exit(2) 35 36 37 ext = '.pyx' if USE_CYTHON else '.c' 38 39 40 extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket' + ext]), 41 Extension('aiohttp._http_parser', 42 ['aiohttp/_http_parser' + ext, 43 'vendor/http-parser/http_parser.c', 44 'aiohttp/_find_header.c'], 45 define_macros=[('HTTP_PARSER_STRICT', 0)], 46 ), 47 Extension('aiohttp._frozenlist', 48 ['aiohttp/_frozenlist' + ext]), 49 Extension('aiohttp._helpers', 50 ['aiohttp/_helpers' + ext]), 51 Extension('aiohttp._http_writer', 52 ['aiohttp/_http_writer' + ext])] 53 54 55 if USE_CYTHON: 56 extensions = cythonize(extensions) 57 58 59 class BuildFailed(Exception): 60 pass 61 62 63 class ve_build_ext(build_ext): 64 # This class allows C extension building to fail. 
65 66 def run(self): 67 try: 68 build_ext.run(self) 69 except (DistutilsPlatformError, FileNotFoundError): 70 raise BuildFailed() 71 72 def build_extension(self, ext): 73 try: 74 build_ext.build_extension(self, ext) 75 except (CCompilerError, DistutilsExecError, 76 DistutilsPlatformError, ValueError): 77 raise BuildFailed() 78 79 80 81 txt = (here / 'aiohttp' / '__init__.py').read_text('utf-8') 82 try: 83 version = re.findall(r"^__version__ = '([^']+)'\r?$", 84 txt, re.M)[0] 85 except IndexError: 86 raise RuntimeError('Unable to determine version.') 87 88 install_requires = [ 89 'attrs>=17.3.0', 90 'chardet>=2.0,<4.0', 91 'multidict>=4.0,<5.0', 92 'async_timeout>=3.0,<4.0', 93 'yarl>=1.0,<2.0', 94 'idna-ssl>=1.0; python_version<"3.7"', 95 'typing_extensions>=3.6.5; python_version<"3.7"', 96 ] 97 98 99 def read(f): 100 return (here / f).read_text('utf-8').strip() 101 102 103 NEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv) 104 pytest_runner = ['pytest-runner'] if NEEDS_PYTEST else [] 105 106 tests_require = [ 107 'pytest', 'gunicorn', 108 'pytest-timeout', 'async-generator', 109 'pytest-xdist', 110 ] 111 112 113 args = dict( 114 name='aiohttp', 115 version=version, 116 description='Async http client/server framework (asyncio)', 117 long_description='\n\n'.join((read('README.rst'), read('CHANGES.rst'))), 118 classifiers=[ 119 'License :: OSI Approved :: Apache Software License', 120 'Intended Audience :: Developers', 121 'Programming Language :: Python', 122 'Programming Language :: Python :: 3', 123 'Programming Language :: Python :: 3.5', 124 'Programming Language :: Python :: 3.6', 125 'Programming Language :: Python :: 3.7', 126 'Development Status :: 5 - Production/Stable', 127 'Operating System :: POSIX', 128 'Operating System :: MacOS :: MacOS X', 129 'Operating System :: Microsoft :: Windows', 130 'Topic :: Internet :: WWW/HTTP', 131 'Framework :: AsyncIO', 132 ], 133 author='Nikolay Kim', 134 author_email='[email protected]', 135 maintainer=', '.join(('Nikolay Kim <[email protected]>', 136 'Andrew Svetlov <[email protected]>')), 137 maintainer_email='[email protected]', 138 url='https://github.com/aio-libs/aiohttp', 139 project_urls={ 140 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby', 141 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp', 142 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp', 143 'CI: Shippable': 'https://app.shippable.com/github/aio-libs/aiohttp', 144 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp', 145 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp', 146 'Docs: RTD': 'https://docs.aiohttp.org', 147 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues', 148 'GitHub: repo': 'https://github.com/aio-libs/aiohttp', 149 }, 150 license='Apache 2', 151 packages=['aiohttp'], 152 python_requires='>=3.5.3', 153 install_requires=install_requires, 154 extras_require={ 155 'speedups': [ 156 'aiodns', 157 'brotlipy', 158 'cchardet', 159 ], 160 }, 161 tests_require=tests_require, 162 setup_requires=pytest_runner, 163 include_package_data=True, 164 ext_modules=extensions, 165 cmdclass=dict(build_ext=ve_build_ext), 166 ) 167 168 try: 169 setup(**args) 170 except BuildFailed: 171 print("************************************************************") 172 print("Cannot compile C accelerator module, use pure python version") 173 print("************************************************************") 174 del args['ext_modules'] 175 del args['cmdclass'] 176 setup(**args) 177 ``` --- END FILES --- Please first localize 
the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -14,18 +14,6 @@ here = pathlib.Path(__file__).parent -try: - from Cython.Build import cythonize - USE_CYTHON = True -except ImportError: - USE_CYTHON = False - -if (here / '.git').exists() and not USE_CYTHON: - print("Install cython when building from git clone", file=sys.stderr) - print("Hint:", file=sys.stderr) - print(" pip install cython", file=sys.stderr) - sys.exit(1) - if (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'): print("Install submodules when building from git clone", file=sys.stderr) @@ -34,26 +22,21 @@ sys.exit(2) -ext = '.pyx' if USE_CYTHON else '.c' +# NOTE: makefile cythonizes all Cython modules - -extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket' + ext]), +extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']), Extension('aiohttp._http_parser', - ['aiohttp/_http_parser' + ext, + ['aiohttp/_http_parser.c', 'vendor/http-parser/http_parser.c', 'aiohttp/_find_header.c'], define_macros=[('HTTP_PARSER_STRICT', 0)], ), Extension('aiohttp._frozenlist', - ['aiohttp/_frozenlist' + ext]), + ['aiohttp/_frozenlist.c']), Extension('aiohttp._helpers', - ['aiohttp/_helpers' + ext]), + ['aiohttp/_helpers.c']), Extension('aiohttp._http_writer', - ['aiohttp/_http_writer' + ext])] - - -if USE_CYTHON: - extensions = cythonize(extensions) + ['aiohttp/_http_writer.c'])] class BuildFailed(Exception):
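Editor's note: the golden diff's `# NOTE: makefile cythonizes all Cython modules` comment is the crux — regenerating the `.c` files is moved out of `setup.py` entirely, so pip>=19's isolated build environment never needs Cython at install time. A rough Python equivalent of that out-of-band step is sketched below; the file list comes from the extensions above, but treating it as a standalone script (and the `language_level` choice) is our assumption, not the project's actual Makefile rule.

```python
# Run once before building the sdist, outside of `pip install`/`setup.py`,
# so the distribution already ships pre-generated C sources.
from Cython.Build import cythonize

cythonize(
    [
        "aiohttp/_websocket.pyx",
        "aiohttp/_http_parser.pyx",
        "aiohttp/_frozenlist.pyx",
        "aiohttp/_helpers.pyx",
        "aiohttp/_http_writer.pyx",
    ],
    language_level=3,
)
```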
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,18 +14,6 @@\n \n here = pathlib.Path(__file__).parent\n \n-try:\n- from Cython.Build import cythonize\n- USE_CYTHON = True\n-except ImportError:\n- USE_CYTHON = False\n-\n-if (here / '.git').exists() and not USE_CYTHON:\n- print(\"Install cython when building from git clone\", file=sys.stderr)\n- print(\"Hint:\", file=sys.stderr)\n- print(\" pip install cython\", file=sys.stderr)\n- sys.exit(1)\n-\n \n if (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n@@ -34,26 +22,21 @@\n sys.exit(2)\n \n \n-ext = '.pyx' if USE_CYTHON else '.c'\n+# NOTE: makefile cythonizes all Cython modules\n \n-\n-extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket' + ext]),\n+extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),\n Extension('aiohttp._http_parser',\n- ['aiohttp/_http_parser' + ext,\n+ ['aiohttp/_http_parser.c',\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n- ['aiohttp/_frozenlist' + ext]),\n+ ['aiohttp/_frozenlist.c']),\n Extension('aiohttp._helpers',\n- ['aiohttp/_helpers' + ext]),\n+ ['aiohttp/_helpers.c']),\n Extension('aiohttp._http_writer',\n- ['aiohttp/_http_writer' + ext])]\n-\n-\n-if USE_CYTHON:\n- extensions = cythonize(extensions)\n+ ['aiohttp/_http_writer.c'])]\n \n \n class BuildFailed(Exception):\n", "issue": "sdist build gets crashed under pip>=19 in dev mode\n## Long story short\r\n\r\nWe have `cython` as an optional dependency. That's why we install it as a pre-requisite in the CI, as a separate step.\r\nNew pip creates a separate build virtualenv which doesn't have access to the place with cython installed which causes it to crash.\r\n\r\n## Expected behaviour\r\n\r\nIt succeeds\r\n\r\n## Actual behaviour\r\n\r\nIt tracebacks\r\n\r\n## Steps to reproduce\r\n\r\n* https://travis-ci.com/aio-libs/aiohttp/jobs/172249543#L198-L219\r\n* https://ci.appveyor.com/project/aio-libs/aiohttp/build/job/uppd0qqw2sbisqtn#L46\r\n\r\n## Your environment\r\n\r\nTravis CI (doesn't really matter actually)\n", "before_files": [{"content": "import codecs\nimport pathlib\nimport re\nimport sys\nfrom distutils.command.build_ext import build_ext\nfrom distutils.errors import (CCompilerError, DistutilsExecError,\n DistutilsPlatformError)\n\nfrom setuptools import Extension, setup\n\n\nif sys.version_info < (3, 5, 3):\n raise RuntimeError(\"aiohttp 3.x requires Python 3.5.3+\")\n\nhere = pathlib.Path(__file__).parent\n\ntry:\n from Cython.Build import cythonize\n USE_CYTHON = True\nexcept ImportError:\n USE_CYTHON = False\n\nif (here / '.git').exists() and not USE_CYTHON:\n print(\"Install cython when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" pip install cython\", file=sys.stderr)\n sys.exit(1)\n\n\nif (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\next = '.pyx' if USE_CYTHON else '.c'\n\n\nextensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket' + ext]),\n Extension('aiohttp._http_parser',\n ['aiohttp/_http_parser' + ext,\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n 
define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n ['aiohttp/_frozenlist' + ext]),\n Extension('aiohttp._helpers',\n ['aiohttp/_helpers' + ext]),\n Extension('aiohttp._http_writer',\n ['aiohttp/_http_writer' + ext])]\n\n\nif USE_CYTHON:\n extensions = cythonize(extensions)\n\n\nclass BuildFailed(Exception):\n pass\n\n\nclass ve_build_ext(build_ext):\n # This class allows C extension building to fail.\n\n def run(self):\n try:\n build_ext.run(self)\n except (DistutilsPlatformError, FileNotFoundError):\n raise BuildFailed()\n\n def build_extension(self, ext):\n try:\n build_ext.build_extension(self, ext)\n except (CCompilerError, DistutilsExecError,\n DistutilsPlatformError, ValueError):\n raise BuildFailed()\n\n\n\ntxt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')\ntry:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError('Unable to determine version.')\n\ninstall_requires = [\n 'attrs>=17.3.0',\n 'chardet>=2.0,<4.0',\n 'multidict>=4.0,<5.0',\n 'async_timeout>=3.0,<4.0',\n 'yarl>=1.0,<2.0',\n 'idna-ssl>=1.0; python_version<\"3.7\"',\n 'typing_extensions>=3.6.5; python_version<\"3.7\"',\n]\n\n\ndef read(f):\n return (here / f).read_text('utf-8').strip()\n\n\nNEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv)\npytest_runner = ['pytest-runner'] if NEEDS_PYTEST else []\n\ntests_require = [\n 'pytest', 'gunicorn',\n 'pytest-timeout', 'async-generator',\n 'pytest-xdist',\n]\n\n\nargs = dict(\n name='aiohttp',\n version=version,\n description='Async http client/server framework (asyncio)',\n long_description='\\n\\n'.join((read('README.rst'), read('CHANGES.rst'))),\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: POSIX',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Internet :: WWW/HTTP',\n 'Framework :: AsyncIO',\n ],\n author='Nikolay Kim',\n author_email='[email protected]',\n maintainer=', '.join(('Nikolay Kim <[email protected]>',\n 'Andrew Svetlov <[email protected]>')),\n maintainer_email='[email protected]',\n url='https://github.com/aio-libs/aiohttp',\n project_urls={\n 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',\n 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',\n 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',\n 'CI: Shippable': 'https://app.shippable.com/github/aio-libs/aiohttp',\n 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',\n 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',\n 'Docs: RTD': 'https://docs.aiohttp.org',\n 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',\n 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',\n },\n license='Apache 2',\n packages=['aiohttp'],\n python_requires='>=3.5.3',\n install_requires=install_requires,\n extras_require={\n 'speedups': [\n 'aiodns',\n 'brotlipy',\n 'cchardet',\n ],\n },\n tests_require=tests_require,\n setup_requires=pytest_runner,\n include_package_data=True,\n ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n)\n\ntry:\n setup(**args)\nexcept BuildFailed:\n print(\"************************************************************\")\n 
print(\"Cannot compile C accelerator module, use pure python version\")\n print(\"************************************************************\")\n del args['ext_modules']\n del args['cmdclass']\n setup(**args)\n", "path": "setup.py"}], "after_files": [{"content": "import codecs\nimport pathlib\nimport re\nimport sys\nfrom distutils.command.build_ext import build_ext\nfrom distutils.errors import (CCompilerError, DistutilsExecError,\n DistutilsPlatformError)\n\nfrom setuptools import Extension, setup\n\n\nif sys.version_info < (3, 5, 3):\n raise RuntimeError(\"aiohttp 3.x requires Python 3.5.3+\")\n\nhere = pathlib.Path(__file__).parent\n\n\nif (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\n# NOTE: makefile cythonizes all Cython modules\n\nextensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),\n Extension('aiohttp._http_parser',\n ['aiohttp/_http_parser.c',\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n ['aiohttp/_frozenlist.c']),\n Extension('aiohttp._helpers',\n ['aiohttp/_helpers.c']),\n Extension('aiohttp._http_writer',\n ['aiohttp/_http_writer.c'])]\n\n\nclass BuildFailed(Exception):\n pass\n\n\nclass ve_build_ext(build_ext):\n # This class allows C extension building to fail.\n\n def run(self):\n try:\n build_ext.run(self)\n except (DistutilsPlatformError, FileNotFoundError):\n raise BuildFailed()\n\n def build_extension(self, ext):\n try:\n build_ext.build_extension(self, ext)\n except (CCompilerError, DistutilsExecError,\n DistutilsPlatformError, ValueError):\n raise BuildFailed()\n\n\n\ntxt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')\ntry:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError('Unable to determine version.')\n\ninstall_requires = [\n 'attrs>=17.3.0',\n 'chardet>=2.0,<4.0',\n 'multidict>=4.0,<5.0',\n 'async_timeout>=3.0,<4.0',\n 'yarl>=1.0,<2.0',\n 'idna-ssl>=1.0; python_version<\"3.7\"',\n 'typing_extensions>=3.6.5; python_version<\"3.7\"',\n]\n\n\ndef read(f):\n return (here / f).read_text('utf-8').strip()\n\n\nNEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv)\npytest_runner = ['pytest-runner'] if NEEDS_PYTEST else []\n\ntests_require = [\n 'pytest', 'gunicorn',\n 'pytest-timeout', 'async-generator',\n 'pytest-xdist',\n]\n\n\nargs = dict(\n name='aiohttp',\n version=version,\n description='Async http client/server framework (asyncio)',\n long_description='\\n\\n'.join((read('README.rst'), read('CHANGES.rst'))),\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: POSIX',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Internet :: WWW/HTTP',\n 'Framework :: AsyncIO',\n ],\n author='Nikolay Kim',\n author_email='[email protected]',\n maintainer=', '.join(('Nikolay Kim <[email protected]>',\n 'Andrew Svetlov <[email protected]>')),\n maintainer_email='[email 
protected]',\n url='https://github.com/aio-libs/aiohttp',\n project_urls={\n 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',\n 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',\n 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',\n 'CI: Shippable': 'https://app.shippable.com/github/aio-libs/aiohttp',\n 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',\n 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',\n 'Docs: RTD': 'https://docs.aiohttp.org',\n 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',\n 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',\n },\n license='Apache 2',\n packages=['aiohttp'],\n python_requires='>=3.5.3',\n install_requires=install_requires,\n extras_require={\n 'speedups': [\n 'aiodns',\n 'brotlipy',\n 'cchardet',\n ],\n },\n tests_require=tests_require,\n setup_requires=pytest_runner,\n include_package_data=True,\n ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n)\n\ntry:\n setup(**args)\nexcept BuildFailed:\n print(\"************************************************************\")\n print(\"Cannot compile C accelerator module, use pure python version\")\n print(\"************************************************************\")\n del args['ext_modules']\n del args['cmdclass']\n setup(**args)\n", "path": "setup.py"}]}
num_tokens: 2,283
num_tokens_diff: 465
problem_id: gh_patches_debug_23074
source: rasdani/github-patches
task_type: git_diff
in_source_id: speechbrain__speechbrain-187
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Train Logger use average as the default summary function Right now users have to specify a summary function for each statistic, however average is the function to use in the vast majority of cases (the exception is error rates). Why not make it default? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `speechbrain/utils/train_logger.py` Content: ``` 1 """ 2 Loggers for experiment monitoring 3 4 Authors 5 * Peter Plantinga 2020 6 """ 7 import logging 8 from speechbrain.utils.edit_distance import wer_summary 9 10 logger = logging.getLogger(__name__) 11 12 13 class TrainLogger: 14 """Abstract class defining an interface for training loggers.""" 15 16 def log_stats( 17 self, 18 stats_meta, 19 train_stats=None, 20 valid_stats=None, 21 test_stats=None, 22 verbose=False, 23 ): 24 """Log the stats for one epoch. 25 26 Arguments 27 --------- 28 stats_meta : dict of str:scalar pairs 29 Meta information about the stats (e.g. epoch, learning-rate, etc.) 30 train_stats : dict of str:list pairs 31 Each loss type is represented with a str : list pair including 32 all the values for the training pass. 33 valid_stats : dict of str:list pairs 34 Each loss type is represented with a str : list pair including 35 all the values for the validation pass. 36 test_stats : dict of str:list pairs 37 Each loss type is represented with a str : list pair including 38 all the values for the test pass. 39 verbose : bool 40 Whether to also put logging information to the standard logger. 41 """ 42 raise NotImplementedError 43 44 45 class FileTrainLogger(TrainLogger): 46 """Text logger of training information 47 48 Arguments 49 --------- 50 save_file : str 51 The file to use for logging train information. 52 summary_fns : dict of str:function pairs 53 Each summary function should take a list produced as output 54 from a training/validation pass and summarize it to a single scalar. 55 """ 56 57 def __init__(self, save_file, summary_fns): 58 self.save_file = save_file 59 self.summary_fns = summary_fns 60 61 def _item_to_string(self, key, value, dataset=None): 62 """Convert one item to string, handling floats""" 63 if isinstance(value, float) and 0.01 < value < 100.0: 64 value = f"{value:.2f}" 65 elif isinstance(value, float): 66 value = f"{value:.2e}" 67 if dataset is not None: 68 key = f"{dataset} {key}" 69 return f"{key}: {value}" 70 71 def _stats_to_string(self, stats, dataset=None): 72 """Convert all stats to a single string summary""" 73 return ", ".join( 74 [self._item_to_string(k, v, dataset) for k, v in stats.items()] 75 ) 76 77 def log_stats( 78 self, 79 stats_meta, 80 train_stats=None, 81 valid_stats=None, 82 test_stats=None, 83 verbose=True, 84 ): 85 """See TrainLogger.log_stats()""" 86 string_summary = self._stats_to_string(stats_meta) 87 for dataset, stats in [ 88 ("train", train_stats), 89 ("valid", valid_stats), 90 ("test", test_stats), 91 ]: 92 if stats is None: 93 continue 94 summary = {} 95 for stat, value_list in stats.items(): 96 summary[stat] = self.summary_fns[stat](value_list) 97 string_summary += " - " + self._stats_to_string(summary, dataset) 98 99 with open(self.save_file, "a") as fout: 100 print(string_summary, file=fout) 101 if verbose: 102 logger.info(string_summary) 103 104 105 class TensorboardLogger(TrainLogger): 106 """Logs training information in the format required by Tensorboard. 
107 108 Arguments 109 --------- 110 save_dir : str 111 A directory for storing all the relevant logs 112 113 Raises 114 ------ 115 ImportError if Tensorboard is not installed. 116 """ 117 118 def __init__(self, save_dir): 119 self.save_dir = save_dir 120 121 # Raises ImportError if TensorBoard is not installed 122 from torch.utils.tensorboard import SummaryWriter 123 124 self.writer = SummaryWriter(self.save_dir) 125 self.global_step = {"train": {}, "valid": {}, "meta": 0} 126 127 def log_stats( 128 self, 129 stats_meta, 130 train_stats=None, 131 valid_stats=None, 132 test_stats=None, 133 verbose=False, 134 ): 135 """See TrainLogger.log_stats()""" 136 self.global_step["meta"] += 1 137 for name, value in stats_meta.items(): 138 self.writer.add_scalar(name, value, self.global_step["meta"]) 139 140 for dataset, stats in [ 141 ("train", train_stats), 142 ("valid", valid_stats), 143 ("test", test_stats), 144 ]: 145 if stats is None: 146 continue 147 for stat, value_list in stats.items(): 148 if stat not in self.global_step[dataset]: 149 self.global_step[dataset][stat] = 0 150 tag = f"{stat}/{dataset}" 151 for value in value_list: 152 new_global_step = self.global_step[dataset][stat] + 1 153 self.writer.add_scalar(tag, value, new_global_step) 154 self.global_step[dataset][stat] = new_global_step 155 156 157 def summarize_average(stat_list): 158 return float(sum(stat_list) / len(stat_list)) 159 160 161 def summarize_error_rate(stat_list): 162 summary = wer_summary(stat_list) 163 return summary["WER"] 164 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py --- a/speechbrain/utils/train_logger.py +++ b/speechbrain/utils/train_logger.py @@ -54,9 +54,9 @@ from a training/validation pass and summarize it to a single scalar. """ - def __init__(self, save_file, summary_fns): + def __init__(self, save_file, summary_fns=None): self.save_file = save_file - self.summary_fns = summary_fns + self.summary_fns = summary_fns or {} def _item_to_string(self, key, value, dataset=None): """Convert one item to string, handling floats""" @@ -93,7 +93,10 @@ continue summary = {} for stat, value_list in stats.items(): - summary[stat] = self.summary_fns[stat](value_list) + if stat in self.summary_fns: + summary[stat] = self.summary_fns[stat](value_list) + else: + summary[stat] = summarize_average(value_list) string_summary += " - " + self._stats_to_string(summary, dataset) with open(self.save_file, "a") as fout:
{"golden_diff": "diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py\n--- a/speechbrain/utils/train_logger.py\n+++ b/speechbrain/utils/train_logger.py\n@@ -54,9 +54,9 @@\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n \n- def __init__(self, save_file, summary_fns):\n+ def __init__(self, save_file, summary_fns=None):\n self.save_file = save_file\n- self.summary_fns = summary_fns\n+ self.summary_fns = summary_fns or {}\n \n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n@@ -93,7 +93,10 @@\n continue\n summary = {}\n for stat, value_list in stats.items():\n- summary[stat] = self.summary_fns[stat](value_list)\n+ if stat in self.summary_fns:\n+ summary[stat] = self.summary_fns[stat](value_list)\n+ else:\n+ summary[stat] = summarize_average(value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n \n with open(self.save_file, \"a\") as fout:\n", "issue": "Train Logger use average as the default summary function\nRight now users have to specify a summary function for each statistic, however average is the function to use in the vast majority of cases (the exception is error rates). Why not make it default?\n", "before_files": [{"content": "\"\"\"\nLoggers for experiment monitoring\n\nAuthors\n * Peter Plantinga 2020\n\"\"\"\nimport logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. 
epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, summary_fns):\n self.save_file = save_file\n self.summary_fns = summary_fns\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float) and 0.01 < value < 100.0:\n value = f\"{value:.2f}\"\n elif isinstance(value, float):\n value = f\"{value:.2e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n summary[stat] = self.summary_fns[stat](value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def __init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, 
new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}], "after_files": [{"content": "\"\"\"\nLoggers for experiment monitoring\n\nAuthors\n * Peter Plantinga 2020\n\"\"\"\nimport logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, summary_fns=None):\n self.save_file = save_file\n self.summary_fns = summary_fns or {}\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float) and 0.01 < value < 100.0:\n value = f\"{value:.2f}\"\n elif isinstance(value, float):\n value = f\"{value:.2e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n if stat in self.summary_fns:\n summary[stat] = self.summary_fns[stat](value_list)\n else:\n summary[stat] = summarize_average(value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def 
__init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}]}
1,792
279
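The golden diff in the record above boils down to one dispatch pattern: look up a per-statistic summary function if the caller registered one, otherwise fall back to averaging. A minimal, self-contained sketch of that pattern follows; the class and function names are illustrative stand-ins, not the SpeechBrain API.

```python
def summarize_average(values):
    # Default reducer: plain arithmetic mean of a list of numbers.
    return float(sum(values) / len(values))

class StatsSummarizer:
    def __init__(self, summary_fns=None):
        # `summary_fns or {}` keeps the attribute a dict even when the
        # caller passes nothing, so the lookup below never touches None.
        self.summary_fns = summary_fns or {}

    def summarize(self, stats):
        # A caller-supplied reducer wins; averaging is the fallback.
        return {
            stat: self.summary_fns.get(stat, summarize_average)(values)
            for stat, values in stats.items()
        }

# Only error rates need a custom reducer; everything else just averages.
s = StatsSummarizer(summary_fns={"error_rate": max})
print(s.summarize({"loss": [0.5, 0.3], "error_rate": [0.1, 0.2]}))
# -> {'loss': 0.4, 'error_rate': 0.2}
```

Using `dict.get` with a function default collapses the diff's `if stat in self.summary_fns` branch into one expression while behaving identically.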
gh_patches_debug_35438
rasdani/github-patches
git_diff
alltheplaces__alltheplaces-1881
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Spider public_storage is broken During the global build at 2021-05-26-14-42-23, spider **public_storage** failed with **0 features** and **0 errors**. Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/public_storage.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/public_storage.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/public_storage.geojson)) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `locations/spiders/public_storage.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 import scrapy 3 import json 4 5 from locations.items import GeojsonPointItem 6 7 8 class PublicStorageSpider(scrapy.Spider): 9 name = "public_storage" 10 item_attributes = { 'brand': "Public Storage" } 11 allowed_domains = ["www.publicstorage.com"] 12 start_urls = ( 13 'https://www.publicstorage.com/handlers/searchcoordinates.ashx?north=90.0&east=180.0&south=-90.0&west=-180.0', 14 ) 15 16 def parse(self, response): 17 data = json.loads(response.body_as_unicode()) 18 19 for store in data['response']['properties']['property']: 20 lat, lon = map(float, store['lat_long'].split(', ')) 21 properties = { 22 "ref": store.get('property_id'), 23 "opening_hours": '; '.join(response.xpath('//time[@itemprop="openingHours"]/@datetime').extract()), 24 "addr_full": store.get('address'), 25 "city": store.get('city'), 26 "state": store.get('state'), 27 "postcode": store.get('zip'), 28 "lat": lat, 29 "lon": lon, 30 } 31 32 yield GeojsonPointItem(**properties) 33 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/locations/spiders/public_storage.py b/locations/spiders/public_storage.py --- a/locations/spiders/public_storage.py +++ b/locations/spiders/public_storage.py @@ -3,6 +3,7 @@ import json from locations.items import GeojsonPointItem +from locations.hours import OpeningHours class PublicStorageSpider(scrapy.Spider): @@ -10,23 +11,45 @@ item_attributes = { 'brand': "Public Storage" } allowed_domains = ["www.publicstorage.com"] start_urls = ( - 'https://www.publicstorage.com/handlers/searchcoordinates.ashx?north=90.0&east=180.0&south=-90.0&west=-180.0', + 'https://www.publicstorage.com/sitemap_plp.xml', ) def parse(self, response): - data = json.loads(response.body_as_unicode()) - - for store in data['response']['properties']['property']: - lat, lon = map(float, store['lat_long'].split(', ')) - properties = { - "ref": store.get('property_id'), - "opening_hours": '; '.join(response.xpath('//time[@itemprop="openingHours"]/@datetime').extract()), - "addr_full": store.get('address'), - "city": store.get('city'), - "state": store.get('state'), - "postcode": store.get('zip'), - "lat": lat, - "lon": lon, - } - - yield GeojsonPointItem(**properties) + response.selector.remove_namespaces() + city_urls = response.xpath('//url/loc/text()').extract() + for path in city_urls: + yield scrapy.Request( + path.strip(), + callback=self.parse_store, + ) + + def parse_hours(self, hours): + opening_hours = OpeningHours() + + for hour in hours: + for day in hour['dayOfWeek']: + opening_hours.add_range( + day=day[:2], + open_time=hour["opens"], + close_time=hour["closes"], + ) + + return opening_hours.as_opening_hours() + + def parse_store(self, response): + data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first()) + data = data['@graph'][0] + + properties = { + "ref": data['@id'], + "opening_hours": self.parse_hours(data['openingHoursSpecification']), + "addr_full": data['address']['streetAddress'], + "city": data['address']['addressLocality'], + "state": data['address']['addressRegion'], + "postcode": data['address']['postalCode'], + "phone": data['telephone'], + "lat": data['geo']['latitude'], + "lon": data['geo']['longitude'], + } + + yield GeojsonPointItem(**properties)
{"golden_diff": "diff --git a/locations/spiders/public_storage.py b/locations/spiders/public_storage.py\n--- a/locations/spiders/public_storage.py\n+++ b/locations/spiders/public_storage.py\n@@ -3,6 +3,7 @@\n import json\n \n from locations.items import GeojsonPointItem\n+from locations.hours import OpeningHours\n \n \n class PublicStorageSpider(scrapy.Spider):\n@@ -10,23 +11,45 @@\n item_attributes = { 'brand': \"Public Storage\" }\n allowed_domains = [\"www.publicstorage.com\"]\n start_urls = (\n- 'https://www.publicstorage.com/handlers/searchcoordinates.ashx?north=90.0&east=180.0&south=-90.0&west=-180.0',\n+ 'https://www.publicstorage.com/sitemap_plp.xml',\n )\n \n def parse(self, response):\n- data = json.loads(response.body_as_unicode())\n-\n- for store in data['response']['properties']['property']:\n- lat, lon = map(float, store['lat_long'].split(', '))\n- properties = {\n- \"ref\": store.get('property_id'),\n- \"opening_hours\": '; '.join(response.xpath('//time[@itemprop=\"openingHours\"]/@datetime').extract()),\n- \"addr_full\": store.get('address'),\n- \"city\": store.get('city'),\n- \"state\": store.get('state'),\n- \"postcode\": store.get('zip'),\n- \"lat\": lat,\n- \"lon\": lon,\n- }\n-\n- yield GeojsonPointItem(**properties)\n+ response.selector.remove_namespaces()\n+ city_urls = response.xpath('//url/loc/text()').extract()\n+ for path in city_urls:\n+ yield scrapy.Request(\n+ path.strip(),\n+ callback=self.parse_store,\n+ )\n+\n+ def parse_hours(self, hours):\n+ opening_hours = OpeningHours()\n+\n+ for hour in hours:\n+ for day in hour['dayOfWeek']:\n+ opening_hours.add_range(\n+ day=day[:2],\n+ open_time=hour[\"opens\"],\n+ close_time=hour[\"closes\"],\n+ )\n+\n+ return opening_hours.as_opening_hours()\n+\n+ def parse_store(self, response):\n+ data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n+ data = data['@graph'][0]\n+\n+ properties = {\n+ \"ref\": data['@id'],\n+ \"opening_hours\": self.parse_hours(data['openingHoursSpecification']),\n+ \"addr_full\": data['address']['streetAddress'],\n+ \"city\": data['address']['addressLocality'],\n+ \"state\": data['address']['addressRegion'],\n+ \"postcode\": data['address']['postalCode'],\n+ \"phone\": data['telephone'],\n+ \"lat\": data['geo']['latitude'],\n+ \"lon\": data['geo']['longitude'],\n+ }\n+\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider public_storage is broken\nDuring the global build at 2021-05-26-14-42-23, spider **public_storage** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/public_storage.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/public_storage.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/public_storage.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PublicStorageSpider(scrapy.Spider):\n name = \"public_storage\"\n item_attributes = { 'brand': \"Public Storage\" }\n allowed_domains = [\"www.publicstorage.com\"]\n start_urls = (\n 'https://www.publicstorage.com/handlers/searchcoordinates.ashx?north=90.0&east=180.0&south=-90.0&west=-180.0',\n )\n\n def parse(self, response):\n data = json.loads(response.body_as_unicode())\n\n for store in data['response']['properties']['property']:\n lat, lon = map(float, store['lat_long'].split(', '))\n properties 
= {\n \"ref\": store.get('property_id'),\n \"opening_hours\": '; '.join(response.xpath('//time[@itemprop=\"openingHours\"]/@datetime').extract()),\n \"addr_full\": store.get('address'),\n \"city\": store.get('city'),\n \"state\": store.get('state'),\n \"postcode\": store.get('zip'),\n \"lat\": lat,\n \"lon\": lon,\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/public_storage.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass PublicStorageSpider(scrapy.Spider):\n name = \"public_storage\"\n item_attributes = { 'brand': \"Public Storage\" }\n allowed_domains = [\"www.publicstorage.com\"]\n start_urls = (\n 'https://www.publicstorage.com/sitemap_plp.xml',\n )\n\n def parse(self, response):\n response.selector.remove_namespaces()\n city_urls = response.xpath('//url/loc/text()').extract()\n for path in city_urls:\n yield scrapy.Request(\n path.strip(),\n callback=self.parse_store,\n )\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for hour in hours:\n for day in hour['dayOfWeek']:\n opening_hours.add_range(\n day=day[:2],\n open_time=hour[\"opens\"],\n close_time=hour[\"closes\"],\n )\n\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n data = data['@graph'][0]\n\n properties = {\n \"ref\": data['@id'],\n \"opening_hours\": self.parse_hours(data['openingHoursSpecification']),\n \"addr_full\": data['address']['streetAddress'],\n \"city\": data['address']['addressLocality'],\n \"state\": data['address']['addressRegion'],\n \"postcode\": data['address']['postalCode'],\n \"phone\": data['telephone'],\n \"lat\": data['geo']['latitude'],\n \"lon\": data['geo']['longitude'],\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/public_storage.py"}]}
762
656
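The replacement spider above works in two stages: it walks `sitemap_plp.xml` for store URLs, then reads each page's schema.org JSON-LD block instead of the retired search endpoint. The sitemap half needs a live Scrapy run, but the JSON-LD mapping can be sketched standalone; the sample payload below is invented for illustration and only mirrors the fields the spider touches.

```python
import json

# Hypothetical JSON-LD blob of the kind a store page embeds in a
# <script type="application/ld+json"> tag (fields follow schema.org).
SAMPLE_LD = """{"@graph": [{
  "@id": "https://www.publicstorage.com/store-123",
  "address": {"streetAddress": "1 Main St", "addressLocality": "Austin",
              "addressRegion": "TX", "postalCode": "78701"},
  "geo": {"latitude": 30.27, "longitude": -97.74},
  "telephone": "+1-555-0100"
}]}"""

def parse_store(ld_text):
    # The fix takes the first node of the @graph array and maps its
    # schema.org fields onto the GeojsonPointItem properties.
    data = json.loads(ld_text)["@graph"][0]
    return {
        "ref": data["@id"],
        "addr_full": data["address"]["streetAddress"],
        "city": data["address"]["addressLocality"],
        "state": data["address"]["addressRegion"],
        "postcode": data["address"]["postalCode"],
        "phone": data["telephone"],
        "lat": data["geo"]["latitude"],
        "lon": data["geo"]["longitude"],
    }

print(parse_store(SAMPLE_LD))
```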
gh_patches_debug_26560
rasdani/github-patches
git_diff
fidals__shopelectro-1005
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Products rss for Google Merchant Google Merchant has some semihidden and strange subservice looking like google adwords for the search. It couldn't integrate with an existing gm.yml file, but requires rss. It has no open documentation and/or validator and we have just one from seo guys [Trello task](https://trello.com/c/39zr3xox/21-9-14k-%D0%B4%D0%B5%D0%BB%D0%B0%D0%B9-%D1%84%D0%B8%D0%B4-%D0%BF%D0%BE-%D0%BC%D0%B5%D1%80%D1%87%D0%B0%D0%BD%D1%82-%D1%86%D0%B5%D0%BD%D1%82%D1%80) contains details --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `shopelectro/management/commands/price.py` Content: ``` 1 """ 2 Django command to generate yml price files for market-places. 3 4 `utm` or `target` defines particular market-place. 5 See `settings.UTM_PRICE_MAP` to explore current list of supported market-places. 6 """ 7 8 import logging 9 import os 10 import typing 11 from collections import defaultdict 12 13 from django.conf import settings 14 from django.core.management.base import BaseCommand 15 from django.db.models import QuerySet 16 from django.template.loader import render_to_string 17 18 from catalog import context 19 from shopelectro import models 20 21 logger = logging.getLogger(__name__) 22 23 24 # --- files processing --- 25 class File: 26 def __init__(self, path: str, context: dict): 27 self.path = path 28 self.context = context 29 30 def create(self): 31 with open(self.path, 'w', encoding='utf-8') as file: 32 file.write(render_to_string('prices/price.yml', self.context).strip()) 33 logger.info(f'{self.path} generated.') 34 35 36 class Files: 37 def __init__(self, files: typing.List[File]): 38 self.files = files 39 40 def create(self): 41 for file in self.files: 42 file.create() 43 44 45 class Context(context.Context): 46 """DB data, extracted for price file.""" 47 48 def __init__(self, target: str): 49 self.target = target 50 51 def context(self) -> dict: 52 categories = CategoriesFilter(self.target).qs() 53 products = ProductsPatch( 54 self.target, 55 products=ProductsFilter(self.target, categories).qs() 56 ).products() 57 58 return { 59 'base_url': settings.BASE_URL, 60 'categories': categories, 61 'products': products, 62 'shop': settings.SHOP, 63 'utm': self.target, 64 } 65 66 67 class CategoriesFilter: 68 """Categories list for particular market place.""" 69 70 @property 71 def ignored(self) -> typing.List[str]: 72 return ( 73 settings.PRICE_IGNORED_CATEGORIES_MAP['default'] 74 + settings.PRICE_IGNORED_CATEGORIES_MAP[self.target] 75 ) 76 77 def __init__(self, target: str): 78 assert target in settings.UTM_PRICE_MAP 79 self.target = target 80 81 def qs(self) -> models.SECategoryQuerySet: 82 if self.target == 'SE78': 83 return models.Category.objects.all() 84 85 result_categories = ( 86 models.Category.objects 87 .exclude( 88 id__in=( 89 models.Category.objects 90 .filter(name__in=self.ignored) 91 .get_descendants(include_self=True) 92 ) 93 ) 94 ) 95 96 if self.target == 'YM': 97 """ 98 Yandex Market feed requires items in some categories to have pictures. 99 To simplify filtering we are excluding all categories 100 which don't contain at least one product with picture. 101 """ 102 # @todo #715:30m Try to rm ancestors filter in YM price filter. 103 # Exclude only categories with no pictures, without their ancestors. 
104 result_categories = result_categories.get_categories_tree_with_pictures() 105 106 return result_categories 107 108 109 class ProductsFilter: 110 """Filter offers with individual price requirements.""" 111 112 @property 113 def ignored(self) -> typing.List[str]: 114 return settings.PRICE_IGNORED_PRODUCTS_MAP[self.target] 115 116 FILTERS = defaultdict( 117 lambda: (lambda qs: qs), 118 # Yandex Market feed requires picture for every offer 119 YM=lambda qs: ( 120 qs 121 .filter(page__images__isnull=False) 122 .distinct() 123 ), 124 # Google Merchant feed should not contain offers cheaper then CONST 125 GM=lambda qs: ( 126 qs 127 .filter(price__gt=settings.PRICE_GM_LOWER_BOUND) 128 ) 129 ) 130 131 def __init__(self, target: str, categories: models.SECategoryQuerySet): 132 assert target in settings.UTM_PRICE_MAP 133 self.target = target 134 self.categories = categories 135 136 def qs(self) -> QuerySet: 137 return self.FILTERS[self.target]( 138 models.Product.objects.active() 139 .filter(category__in=self.categories, price__gt=0) 140 .exclude(vendor_code__in=self.ignored) 141 ) 142 143 144 class ProductsPatch: 145 146 UTM_MEDIUM_DATA = defaultdict( 147 lambda: 'cpc', 148 {'YM': 'cpc-market'} 149 ) 150 151 def __init__(self, target: str, products: QuerySet): 152 assert target in settings.UTM_PRICE_MAP 153 self.target = target 154 self._products = products 155 156 def put_params(self, product): 157 product.prepared_params = [ 158 (group, tags[0].name) 159 for (group, tags) in filter( 160 lambda x: x[0].name != 'Производитель', 161 product.get_params().items() 162 ) if tags 163 ] 164 return product 165 166 def put_utm(self, product): 167 """Put UTM attribute to product.""" 168 utm_marks = [ 169 ('utm_source', self.target), 170 ('utm_medium', self.UTM_MEDIUM_DATA[self.target]), 171 ('utm_content', product.get_root_category().page.slug), 172 ('utm_term', str(product.vendor_code)), 173 ] 174 175 utm_mark_query = '&'.join(f'{k}={v}' for k, v in utm_marks) 176 product.utm_url = f'{settings.BASE_URL}{product.url}?{utm_mark_query}' 177 178 return product 179 180 def put_crumbs(self, product): # Ignore PyDocStyleBear 181 """Crumbs for google merchant. https://goo.gl/b0UJQp""" 182 product.crumbs = ' > '.join( 183 product.page.get_ancestors_fields('h1', include_self=False)[1:] 184 ) 185 return product 186 187 def put_brand(self, product, brands): 188 product.brand = brands.get(product) 189 return product 190 191 def products(self) -> typing.List[models.Product]: 192 """Path every product with additional fields.""" 193 brands = models.Tag.objects.get_brands(self._products) 194 return [ 195 self.put_brand( 196 product=self.put_params(self.put_crumbs(self.put_utm(product))), 197 brands=brands 198 ) 199 for product in self._products 200 ] 201 202 203 # --- command block --- 204 class Command(BaseCommand): 205 """Generate yml file for a given vendor (YM or price.ru).""" 206 207 # price files will be stored at this dir 208 BASE_DIR = settings.ASSETS_DIR 209 210 def handle(self, *args, **options): 211 Files( 212 [File( 213 path=os.path.join(self.BASE_DIR, filename), 214 context=Context(target).context() 215 ) for target, filename in settings.UTM_PRICE_MAP.items()] 216 ).create() 217 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/shopelectro/management/commands/price.py b/shopelectro/management/commands/price.py --- a/shopelectro/management/commands/price.py +++ b/shopelectro/management/commands/price.py @@ -23,13 +23,14 @@ # --- files processing --- class File: - def __init__(self, path: str, context: dict): + def __init__(self, path: str, context: dict, template_path: str): self.path = path self.context = context + self.template_path = template_path def create(self): with open(self.path, 'w', encoding='utf-8') as file: - file.write(render_to_string('prices/price.yml', self.context).strip()) + file.write(render_to_string(self.template_path, self.context).strip()) logger.info(f'{self.path} generated.') @@ -208,9 +209,15 @@ BASE_DIR = settings.ASSETS_DIR def handle(self, *args, **options): - Files( - [File( + Files([ + *[File( path=os.path.join(self.BASE_DIR, filename), - context=Context(target).context() - ) for target, filename in settings.UTM_PRICE_MAP.items()] - ).create() + context=Context(target).context(), + template_path='prices/price.yml', + ) for target, filename in settings.UTM_PRICE_MAP.items()], + File( + path=os.path.join(self.BASE_DIR, 'gm.rss'), + context=Context('GM').context(), + template_path='prices/price.rss', + ) + ]).create()
{"golden_diff": "diff --git a/shopelectro/management/commands/price.py b/shopelectro/management/commands/price.py\n--- a/shopelectro/management/commands/price.py\n+++ b/shopelectro/management/commands/price.py\n@@ -23,13 +23,14 @@\n \n # --- files processing ---\n class File:\n- def __init__(self, path: str, context: dict):\n+ def __init__(self, path: str, context: dict, template_path: str):\n self.path = path\n self.context = context\n+ self.template_path = template_path\n \n def create(self):\n with open(self.path, 'w', encoding='utf-8') as file:\n- file.write(render_to_string('prices/price.yml', self.context).strip())\n+ file.write(render_to_string(self.template_path, self.context).strip())\n logger.info(f'{self.path} generated.')\n \n \n@@ -208,9 +209,15 @@\n BASE_DIR = settings.ASSETS_DIR\n \n def handle(self, *args, **options):\n- Files(\n- [File(\n+ Files([\n+ *[File(\n path=os.path.join(self.BASE_DIR, filename),\n- context=Context(target).context()\n- ) for target, filename in settings.UTM_PRICE_MAP.items()]\n- ).create()\n+ context=Context(target).context(),\n+ template_path='prices/price.yml',\n+ ) for target, filename in settings.UTM_PRICE_MAP.items()],\n+ File(\n+ path=os.path.join(self.BASE_DIR, 'gm.rss'),\n+ context=Context('GM').context(),\n+ template_path='prices/price.rss',\n+ )\n+ ]).create()\n", "issue": "Products rss for Google Merchant\nGoogle Merchant has some semihidden and strange subservice looking like google adwords for the search. It couldn't integrate with an existing gm.yml file, but requires rss. It has no open documentation and/or validator and we have just one from seo guys\r\n\r\n[Trello task](https://trello.com/c/39zr3xox/21-9-14k-%D0%B4%D0%B5%D0%BB%D0%B0%D0%B9-%D1%84%D0%B8%D0%B4-%D0%BF%D0%BE-%D0%BC%D0%B5%D1%80%D1%87%D0%B0%D0%BD%D1%82-%D1%86%D0%B5%D0%BD%D1%82%D1%80) contains details\n", "before_files": [{"content": "\"\"\"\nDjango command to generate yml price files for market-places.\n\n`utm` or `target` defines particular market-place.\nSee `settings.UTM_PRICE_MAP` to explore current list of supported market-places.\n\"\"\"\n\nimport logging\nimport os\nimport typing\nfrom collections import defaultdict\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.db.models import QuerySet\nfrom django.template.loader import render_to_string\n\nfrom catalog import context\nfrom shopelectro import models\n\nlogger = logging.getLogger(__name__)\n\n\n# --- files processing ---\nclass File:\n def __init__(self, path: str, context: dict):\n self.path = path\n self.context = context\n\n def create(self):\n with open(self.path, 'w', encoding='utf-8') as file:\n file.write(render_to_string('prices/price.yml', self.context).strip())\n logger.info(f'{self.path} generated.')\n\n\nclass Files:\n def __init__(self, files: typing.List[File]):\n self.files = files\n\n def create(self):\n for file in self.files:\n file.create()\n\n\nclass Context(context.Context):\n \"\"\"DB data, extracted for price file.\"\"\"\n\n def __init__(self, target: str):\n self.target = target\n\n def context(self) -> dict:\n categories = CategoriesFilter(self.target).qs()\n products = ProductsPatch(\n self.target,\n products=ProductsFilter(self.target, categories).qs()\n ).products()\n\n return {\n 'base_url': settings.BASE_URL,\n 'categories': categories,\n 'products': products,\n 'shop': settings.SHOP,\n 'utm': self.target,\n }\n\n\nclass CategoriesFilter:\n \"\"\"Categories list for particular market place.\"\"\"\n\n @property\n def ignored(self) 
-> typing.List[str]:\n return (\n settings.PRICE_IGNORED_CATEGORIES_MAP['default']\n + settings.PRICE_IGNORED_CATEGORIES_MAP[self.target]\n )\n\n def __init__(self, target: str):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n\n def qs(self) -> models.SECategoryQuerySet:\n if self.target == 'SE78':\n return models.Category.objects.all()\n\n result_categories = (\n models.Category.objects\n .exclude(\n id__in=(\n models.Category.objects\n .filter(name__in=self.ignored)\n .get_descendants(include_self=True)\n )\n )\n )\n\n if self.target == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures.\n To simplify filtering we are excluding all categories\n which don't contain at least one product with picture.\n \"\"\"\n # @todo #715:30m Try to rm ancestors filter in YM price filter.\n # Exclude only categories with no pictures, without their ancestors.\n result_categories = result_categories.get_categories_tree_with_pictures()\n\n return result_categories\n\n\nclass ProductsFilter:\n \"\"\"Filter offers with individual price requirements.\"\"\"\n\n @property\n def ignored(self) -> typing.List[str]:\n return settings.PRICE_IGNORED_PRODUCTS_MAP[self.target]\n\n FILTERS = defaultdict(\n lambda: (lambda qs: qs),\n # Yandex Market feed requires picture for every offer\n YM=lambda qs: (\n qs\n .filter(page__images__isnull=False)\n .distinct()\n ),\n # Google Merchant feed should not contain offers cheaper then CONST\n GM=lambda qs: (\n qs\n .filter(price__gt=settings.PRICE_GM_LOWER_BOUND)\n )\n )\n\n def __init__(self, target: str, categories: models.SECategoryQuerySet):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n self.categories = categories\n\n def qs(self) -> QuerySet:\n return self.FILTERS[self.target](\n models.Product.objects.active()\n .filter(category__in=self.categories, price__gt=0)\n .exclude(vendor_code__in=self.ignored)\n )\n\n\nclass ProductsPatch:\n\n UTM_MEDIUM_DATA = defaultdict(\n lambda: 'cpc',\n {'YM': 'cpc-market'}\n )\n\n def __init__(self, target: str, products: QuerySet):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n self._products = products\n\n def put_params(self, product):\n product.prepared_params = [\n (group, tags[0].name)\n for (group, tags) in filter(\n lambda x: x[0].name != '\u041f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c',\n product.get_params().items()\n ) if tags\n ]\n return product\n\n def put_utm(self, product):\n \"\"\"Put UTM attribute to product.\"\"\"\n utm_marks = [\n ('utm_source', self.target),\n ('utm_medium', self.UTM_MEDIUM_DATA[self.target]),\n ('utm_content', product.get_root_category().page.slug),\n ('utm_term', str(product.vendor_code)),\n ]\n\n utm_mark_query = '&'.join(f'{k}={v}' for k, v in utm_marks)\n product.utm_url = f'{settings.BASE_URL}{product.url}?{utm_mark_query}'\n\n return product\n\n def put_crumbs(self, product): # Ignore PyDocStyleBear\n \"\"\"Crumbs for google merchant. 
https://goo.gl/b0UJQp\"\"\"\n product.crumbs = ' > '.join(\n product.page.get_ancestors_fields('h1', include_self=False)[1:]\n )\n return product\n\n def put_brand(self, product, brands):\n product.brand = brands.get(product)\n return product\n\n def products(self) -> typing.List[models.Product]:\n \"\"\"Path every product with additional fields.\"\"\"\n brands = models.Tag.objects.get_brands(self._products)\n return [\n self.put_brand(\n product=self.put_params(self.put_crumbs(self.put_utm(product))),\n brands=brands\n )\n for product in self._products\n ]\n\n\n# --- command block ---\nclass Command(BaseCommand):\n \"\"\"Generate yml file for a given vendor (YM or price.ru).\"\"\"\n\n # price files will be stored at this dir\n BASE_DIR = settings.ASSETS_DIR\n\n def handle(self, *args, **options):\n Files(\n [File(\n path=os.path.join(self.BASE_DIR, filename),\n context=Context(target).context()\n ) for target, filename in settings.UTM_PRICE_MAP.items()]\n ).create()\n", "path": "shopelectro/management/commands/price.py"}], "after_files": [{"content": "\"\"\"\nDjango command to generate yml price files for market-places.\n\n`utm` or `target` defines particular market-place.\nSee `settings.UTM_PRICE_MAP` to explore current list of supported market-places.\n\"\"\"\n\nimport logging\nimport os\nimport typing\nfrom collections import defaultdict\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.db.models import QuerySet\nfrom django.template.loader import render_to_string\n\nfrom catalog import context\nfrom shopelectro import models\n\nlogger = logging.getLogger(__name__)\n\n\n# --- files processing ---\nclass File:\n def __init__(self, path: str, context: dict, template_path: str):\n self.path = path\n self.context = context\n self.template_path = template_path\n\n def create(self):\n with open(self.path, 'w', encoding='utf-8') as file:\n file.write(render_to_string(self.template_path, self.context).strip())\n logger.info(f'{self.path} generated.')\n\n\nclass Files:\n def __init__(self, files: typing.List[File]):\n self.files = files\n\n def create(self):\n for file in self.files:\n file.create()\n\n\nclass Context(context.Context):\n \"\"\"DB data, extracted for price file.\"\"\"\n\n def __init__(self, target: str):\n self.target = target\n\n def context(self) -> dict:\n categories = CategoriesFilter(self.target).qs()\n products = ProductsPatch(\n self.target,\n products=ProductsFilter(self.target, categories).qs()\n ).products()\n\n return {\n 'base_url': settings.BASE_URL,\n 'categories': categories,\n 'products': products,\n 'shop': settings.SHOP,\n 'utm': self.target,\n }\n\n\nclass CategoriesFilter:\n \"\"\"Categories list for particular market place.\"\"\"\n\n @property\n def ignored(self) -> typing.List[str]:\n return (\n settings.PRICE_IGNORED_CATEGORIES_MAP['default']\n + settings.PRICE_IGNORED_CATEGORIES_MAP[self.target]\n )\n\n def __init__(self, target: str):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n\n def qs(self) -> models.SECategoryQuerySet:\n if self.target == 'SE78':\n return models.Category.objects.all()\n\n result_categories = (\n models.Category.objects\n .exclude(\n id__in=(\n models.Category.objects\n .filter(name__in=self.ignored)\n .get_descendants(include_self=True)\n )\n )\n )\n\n if self.target == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures.\n To simplify filtering we are excluding all categories\n which don't contain at least one product with 
picture.\n \"\"\"\n # @todo #715:30m Try to rm ancestors filter in YM price filter.\n # Exclude only categories with no pictures, without their ancestors.\n result_categories = result_categories.get_categories_tree_with_pictures()\n\n return result_categories\n\n\nclass ProductsFilter:\n \"\"\"Filter offers with individual price requirements.\"\"\"\n\n @property\n def ignored(self) -> typing.List[str]:\n return settings.PRICE_IGNORED_PRODUCTS_MAP[self.target]\n\n FILTERS = defaultdict(\n lambda: (lambda qs: qs),\n # Yandex Market feed requires picture for every offer\n YM=lambda qs: (\n qs\n .filter(page__images__isnull=False)\n .distinct()\n ),\n # Google Merchant feed should not contain offers cheaper then CONST\n GM=lambda qs: (\n qs\n .filter(price__gt=settings.PRICE_GM_LOWER_BOUND)\n )\n )\n\n def __init__(self, target: str, categories: models.SECategoryQuerySet):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n self.categories = categories\n\n def qs(self) -> QuerySet:\n return self.FILTERS[self.target](\n models.Product.objects.active()\n .filter(category__in=self.categories, price__gt=0)\n .exclude(vendor_code__in=self.ignored)\n )\n\n\nclass ProductsPatch:\n\n UTM_MEDIUM_DATA = defaultdict(\n lambda: 'cpc',\n {'YM': 'cpc-market'}\n )\n\n def __init__(self, target: str, products: QuerySet):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n self._products = products\n\n def put_params(self, product):\n product.prepared_params = [\n (group, tags[0].name)\n for (group, tags) in filter(\n lambda x: x[0].name != '\u041f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c',\n product.get_params().items()\n ) if tags\n ]\n return product\n\n def put_utm(self, product):\n \"\"\"Put UTM attribute to product.\"\"\"\n utm_marks = [\n ('utm_source', self.target),\n ('utm_medium', self.UTM_MEDIUM_DATA[self.target]),\n ('utm_content', product.get_root_category().page.slug),\n ('utm_term', str(product.vendor_code)),\n ]\n\n utm_mark_query = '&'.join(f'{k}={v}' for k, v in utm_marks)\n product.utm_url = f'{settings.BASE_URL}{product.url}?{utm_mark_query}'\n\n return product\n\n def put_crumbs(self, product): # Ignore PyDocStyleBear\n \"\"\"Crumbs for google merchant. https://goo.gl/b0UJQp\"\"\"\n product.crumbs = ' > '.join(\n product.page.get_ancestors_fields('h1', include_self=False)[1:]\n )\n return product\n\n def put_brand(self, product, brands):\n product.brand = brands.get(product)\n return product\n\n def products(self) -> typing.List[models.Product]:\n \"\"\"Path every product with additional fields.\"\"\"\n brands = models.Tag.objects.get_brands(self._products)\n return [\n self.put_brand(\n product=self.put_params(self.put_crumbs(self.put_utm(product))),\n brands=brands\n )\n for product in self._products\n ]\n\n\n# --- command block ---\nclass Command(BaseCommand):\n \"\"\"Generate yml file for a given vendor (YM or price.ru).\"\"\"\n\n # price files will be stored at this dir\n BASE_DIR = settings.ASSETS_DIR\n\n def handle(self, *args, **options):\n Files([\n *[File(\n path=os.path.join(self.BASE_DIR, filename),\n context=Context(target).context(),\n template_path='prices/price.yml',\n ) for target, filename in settings.UTM_PRICE_MAP.items()],\n File(\n path=os.path.join(self.BASE_DIR, 'gm.rss'),\n context=Context('GM').context(),\n template_path='prices/price.rss',\n )\n ]).create()\n", "path": "shopelectro/management/commands/price.py"}]}
2,421
383
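The decisive change in the diff above is that `File` takes a `template_path` instead of hard-coding `prices/price.yml`, which is what lets the command append a Google Merchant RSS output alongside the existing YML feeds. A stripped-down sketch of that parameterization; `render_to_string` is stubbed here (in the project it is Django's template renderer), and the paths are examples.

```python
def render_to_string(template_path, context):
    # Stand-in for django.template.loader.render_to_string.
    return f"rendered {template_path} for utm={context['utm']}\n"

class File:
    def __init__(self, path, context, template_path):
        self.path = path
        self.context = context
        self.template_path = template_path  # per-file, no longer implicit

    def create(self):
        with open(self.path, "w", encoding="utf-8") as fh:
            fh.write(render_to_string(self.template_path, self.context))

# The same machinery now emits both feed formats side by side.
outputs = [
    File("YM.yml", {"utm": "YM"}, "prices/price.yml"),
    File("gm.rss", {"utm": "GM"}, "prices/price.rss"),
]
for f in outputs:
    f.create()
    print("wrote", f.path)
```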
gh_patches_debug_28889
rasdani/github-patches
git_diff
piskvorky__gensim-968
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Lsi distributed fail Hi, I've got a problem with the lsi distributed. When i executed the example: https://radimrehurek.com/gensim/dist_lsi.html First configure the server (enviroment variables), then i run the server, worker and dispatcher. And all without errros. But when i executed the code. I have this fail: ![image](https://cloud.githubusercontent.com/assets/10063469/17175207/799856c8-5406-11e6-8713-ca7ad342baf8.png) Why does this happens? How can i solve? Thank you in advance. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `gensim/models/lsi_worker.py` Content: ``` 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 # 4 # Copyright (C) 2010 Radim Rehurek <[email protected]> 5 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html 6 7 """ 8 USAGE: %(program)s 9 10 Worker ("slave") process used in computing distributed LSI. Run this script \ 11 on every node in your cluster. If you wish, you may even run it multiple times \ 12 on a single machine, to make better use of multiple cores (just beware that \ 13 memory footprint increases accordingly). 14 15 Example: python -m gensim.models.lsi_worker 16 """ 17 18 19 from __future__ import with_statement 20 import os, sys, logging 21 import threading 22 import tempfile 23 try: 24 import Queue 25 except ImportError: 26 import queue as Queue 27 import Pyro4 28 from gensim.models import lsimodel 29 from gensim import utils 30 31 logger = logging.getLogger('gensim.models.lsi_worker') 32 33 34 SAVE_DEBUG = 0 # save intermediate models after every SAVE_DEBUG updates (0 for never) 35 36 37 38 class Worker(object): 39 def __init__(self): 40 self.model = None 41 42 43 def initialize(self, myid, dispatcher, **model_params): 44 self.lock_update = threading.Lock() 45 self.jobsdone = 0 # how many jobs has this worker completed? 46 self.myid = myid # id of this worker in the dispatcher; just a convenience var for easy access/logging TODO remove? 47 self.dispatcher = dispatcher 48 self.finished = False 49 logger.info("initializing worker #%s" % myid) 50 self.model = lsimodel.LsiModel(**model_params) 51 52 53 @Pyro4.oneway 54 def requestjob(self): 55 """ 56 Request jobs from the dispatcher, in a perpetual loop until `getstate()` is called. 
57 """ 58 if self.model is None: 59 raise RuntimeError("worker must be initialized before receiving jobs") 60 61 job = None 62 while job is None and not self.finished: 63 try: 64 job = self.dispatcher.getjob(self.myid) 65 except Queue.Empty: 66 # no new job: try again, unless we're finished with all work 67 continue 68 if job is not None: 69 logger.info("worker #%s received job #%i" % (self.myid, self.jobsdone)) 70 self.processjob(job) 71 self.dispatcher.jobdone(self.myid) 72 else: 73 logger.info("worker #%i stopping asking for jobs" % self.myid) 74 75 76 @utils.synchronous('lock_update') 77 def processjob(self, job): 78 self.model.add_documents(job) 79 self.jobsdone += 1 80 if SAVE_DEBUG and self.jobsdone % SAVE_DEBUG == 0: 81 fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl') 82 self.model.save(fname) 83 84 85 @utils.synchronous('lock_update') 86 def getstate(self): 87 logger.info("worker #%i returning its state after %s jobs" % 88 (self.myid, self.jobsdone)) 89 assert isinstance(self.model.projection, lsimodel.Projection) 90 self.finished = True 91 return self.model.projection 92 93 94 @utils.synchronous('lock_update') 95 def reset(self): 96 logger.info("resetting worker #%i" % self.myid) 97 self.model.projection = self.model.projection.empty_like() 98 self.finished = False 99 100 101 @Pyro4.oneway 102 def exit(self): 103 logger.info("terminating worker #%i" % self.myid) 104 os._exit(0) 105 #endclass Worker 106 107 108 109 def main(): 110 logging.basicConfig(format = '%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) 111 logger.info("running %s" % " ".join(sys.argv)) 112 113 program = os.path.basename(sys.argv[0]) 114 # make sure we have enough cmd line parameters 115 if len(sys.argv) < 1: 116 print(globals()["__doc__"] % locals()) 117 sys.exit(1) 118 119 utils.pyro_daemon('gensim.lsi_worker', Worker(), random_suffix=True) 120 121 logger.info("finished running %s" % program) 122 123 124 125 if __name__ == '__main__': 126 main() 127 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/gensim/models/lsi_worker.py b/gensim/models/lsi_worker.py --- a/gensim/models/lsi_worker.py +++ b/gensim/models/lsi_worker.py @@ -39,7 +39,7 @@ def __init__(self): self.model = None - + @Pyro4.expose def initialize(self, myid, dispatcher, **model_params): self.lock_update = threading.Lock() self.jobsdone = 0 # how many jobs has this worker completed? @@ -49,7 +49,7 @@ logger.info("initializing worker #%s" % myid) self.model = lsimodel.LsiModel(**model_params) - + @Pyro4.expose @Pyro4.oneway def requestjob(self): """ @@ -81,7 +81,7 @@ fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl') self.model.save(fname) - + @Pyro4.expose @utils.synchronous('lock_update') def getstate(self): logger.info("worker #%i returning its state after %s jobs" % @@ -90,7 +90,7 @@ self.finished = True return self.model.projection - + @Pyro4.expose @utils.synchronous('lock_update') def reset(self): logger.info("resetting worker #%i" % self.myid)
{"golden_diff": "diff --git a/gensim/models/lsi_worker.py b/gensim/models/lsi_worker.py\n--- a/gensim/models/lsi_worker.py\n+++ b/gensim/models/lsi_worker.py\n@@ -39,7 +39,7 @@\n def __init__(self):\n self.model = None\n \n-\n+ @Pyro4.expose\n def initialize(self, myid, dispatcher, **model_params):\n self.lock_update = threading.Lock()\n self.jobsdone = 0 # how many jobs has this worker completed?\n@@ -49,7 +49,7 @@\n logger.info(\"initializing worker #%s\" % myid)\n self.model = lsimodel.LsiModel(**model_params)\n \n-\n+ @Pyro4.expose\n @Pyro4.oneway\n def requestjob(self):\n \"\"\"\n@@ -81,7 +81,7 @@\n fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl')\n self.model.save(fname)\n \n-\n+ @Pyro4.expose\n @utils.synchronous('lock_update')\n def getstate(self):\n logger.info(\"worker #%i returning its state after %s jobs\" %\n@@ -90,7 +90,7 @@\n self.finished = True\n return self.model.projection\n \n-\n+ @Pyro4.expose\n @utils.synchronous('lock_update')\n def reset(self):\n logger.info(\"resetting worker #%i\" % self.myid)\n", "issue": "Lsi distributed fail\nHi, \nI've got a problem with the lsi distributed. When i executed the example:\n\nhttps://radimrehurek.com/gensim/dist_lsi.html\n\nFirst configure the server (enviroment variables), then i run the server, worker and dispatcher.\n\nAnd all without errros. But when i executed the code. I have this fail:\n![image](https://cloud.githubusercontent.com/assets/10063469/17175207/799856c8-5406-11e6-8713-ca7ad342baf8.png)\n\nWhy does this happens? How can i solve?\n\nThank you in advance.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2010 Radim Rehurek <[email protected]>\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"\nUSAGE: %(program)s\n\n Worker (\"slave\") process used in computing distributed LSI. Run this script \\\non every node in your cluster. 
If you wish, you may even run it multiple times \\\non a single machine, to make better use of multiple cores (just beware that \\\nmemory footprint increases accordingly).\n\nExample: python -m gensim.models.lsi_worker\n\"\"\"\n\n\nfrom __future__ import with_statement\nimport os, sys, logging\nimport threading\nimport tempfile\ntry:\n import Queue\nexcept ImportError:\n import queue as Queue\nimport Pyro4\nfrom gensim.models import lsimodel\nfrom gensim import utils\n\nlogger = logging.getLogger('gensim.models.lsi_worker')\n\n\nSAVE_DEBUG = 0 # save intermediate models after every SAVE_DEBUG updates (0 for never)\n\n\n\nclass Worker(object):\n def __init__(self):\n self.model = None\n\n\n def initialize(self, myid, dispatcher, **model_params):\n self.lock_update = threading.Lock()\n self.jobsdone = 0 # how many jobs has this worker completed?\n self.myid = myid # id of this worker in the dispatcher; just a convenience var for easy access/logging TODO remove?\n self.dispatcher = dispatcher\n self.finished = False\n logger.info(\"initializing worker #%s\" % myid)\n self.model = lsimodel.LsiModel(**model_params)\n\n\n @Pyro4.oneway\n def requestjob(self):\n \"\"\"\n Request jobs from the dispatcher, in a perpetual loop until `getstate()` is called.\n \"\"\"\n if self.model is None:\n raise RuntimeError(\"worker must be initialized before receiving jobs\")\n\n job = None\n while job is None and not self.finished:\n try:\n job = self.dispatcher.getjob(self.myid)\n except Queue.Empty:\n # no new job: try again, unless we're finished with all work\n continue\n if job is not None:\n logger.info(\"worker #%s received job #%i\" % (self.myid, self.jobsdone))\n self.processjob(job)\n self.dispatcher.jobdone(self.myid)\n else:\n logger.info(\"worker #%i stopping asking for jobs\" % self.myid)\n\n\n @utils.synchronous('lock_update')\n def processjob(self, job):\n self.model.add_documents(job)\n self.jobsdone += 1\n if SAVE_DEBUG and self.jobsdone % SAVE_DEBUG == 0:\n fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl')\n self.model.save(fname)\n\n\n @utils.synchronous('lock_update')\n def getstate(self):\n logger.info(\"worker #%i returning its state after %s jobs\" %\n (self.myid, self.jobsdone))\n assert isinstance(self.model.projection, lsimodel.Projection)\n self.finished = True\n return self.model.projection\n\n\n @utils.synchronous('lock_update')\n def reset(self):\n logger.info(\"resetting worker #%i\" % self.myid)\n self.model.projection = self.model.projection.empty_like()\n self.finished = False\n\n\n @Pyro4.oneway\n def exit(self):\n logger.info(\"terminating worker #%i\" % self.myid)\n os._exit(0)\n#endclass Worker\n\n\n\ndef main():\n logging.basicConfig(format = '%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\n logger.info(\"running %s\" % \" \".join(sys.argv))\n\n program = os.path.basename(sys.argv[0])\n # make sure we have enough cmd line parameters\n if len(sys.argv) < 1:\n print(globals()[\"__doc__\"] % locals())\n sys.exit(1)\n\n utils.pyro_daemon('gensim.lsi_worker', Worker(), random_suffix=True)\n\n logger.info(\"finished running %s\" % program)\n\n\n\nif __name__ == '__main__':\n main()\n", "path": "gensim/models/lsi_worker.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2010 Radim Rehurek <[email protected]>\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"\nUSAGE: %(program)s\n\n Worker (\"slave\") process used in computing distributed LSI. 
Run this script \\\non every node in your cluster. If you wish, you may even run it multiple times \\\non a single machine, to make better use of multiple cores (just beware that \\\nmemory footprint increases accordingly).\n\nExample: python -m gensim.models.lsi_worker\n\"\"\"\n\n\nfrom __future__ import with_statement\nimport os, sys, logging\nimport threading\nimport tempfile\ntry:\n import Queue\nexcept ImportError:\n import queue as Queue\nimport Pyro4\nfrom gensim.models import lsimodel\nfrom gensim import utils\n\nlogger = logging.getLogger('gensim.models.lsi_worker')\n\n\nSAVE_DEBUG = 0 # save intermediate models after every SAVE_DEBUG updates (0 for never)\n\n\n\nclass Worker(object):\n def __init__(self):\n self.model = None\n\n @Pyro4.expose\n def initialize(self, myid, dispatcher, **model_params):\n self.lock_update = threading.Lock()\n self.jobsdone = 0 # how many jobs has this worker completed?\n self.myid = myid # id of this worker in the dispatcher; just a convenience var for easy access/logging TODO remove?\n self.dispatcher = dispatcher\n self.finished = False\n logger.info(\"initializing worker #%s\" % myid)\n self.model = lsimodel.LsiModel(**model_params)\n\n @Pyro4.expose\n @Pyro4.oneway\n def requestjob(self):\n \"\"\"\n Request jobs from the dispatcher, in a perpetual loop until `getstate()` is called.\n \"\"\"\n if self.model is None:\n raise RuntimeError(\"worker must be initialized before receiving jobs\")\n\n job = None\n while job is None and not self.finished:\n try:\n job = self.dispatcher.getjob(self.myid)\n except Queue.Empty:\n # no new job: try again, unless we're finished with all work\n continue\n if job is not None:\n logger.info(\"worker #%s received job #%i\" % (self.myid, self.jobsdone))\n self.processjob(job)\n self.dispatcher.jobdone(self.myid)\n else:\n logger.info(\"worker #%i stopping asking for jobs\" % self.myid)\n\n\n @utils.synchronous('lock_update')\n def processjob(self, job):\n self.model.add_documents(job)\n self.jobsdone += 1\n if SAVE_DEBUG and self.jobsdone % SAVE_DEBUG == 0:\n fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl')\n self.model.save(fname)\n\n @Pyro4.expose\n @utils.synchronous('lock_update')\n def getstate(self):\n logger.info(\"worker #%i returning its state after %s jobs\" %\n (self.myid, self.jobsdone))\n assert isinstance(self.model.projection, lsimodel.Projection)\n self.finished = True\n return self.model.projection\n\n @Pyro4.expose\n @utils.synchronous('lock_update')\n def reset(self):\n logger.info(\"resetting worker #%i\" % self.myid)\n self.model.projection = self.model.projection.empty_like()\n self.finished = False\n\n\n @Pyro4.oneway\n def exit(self):\n logger.info(\"terminating worker #%i\" % self.myid)\n os._exit(0)\n#endclass Worker\n\n\n\ndef main():\n logging.basicConfig(format = '%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\n logger.info(\"running %s\" % \" \".join(sys.argv))\n\n program = os.path.basename(sys.argv[0])\n # make sure we have enough cmd line parameters\n if len(sys.argv) < 1:\n print(globals()[\"__doc__\"] % locals())\n sys.exit(1)\n\n utils.pyro_daemon('gensim.lsi_worker', Worker(), random_suffix=True)\n\n logger.info(\"finished running %s\" % program)\n\n\n\nif __name__ == '__main__':\n main()\n", "path": "gensim/models/lsi_worker.py"}]}
1,607
325
gh_patches_debug_20398
rasdani/github-patches
git_diff
Mailu__Mailu-2158
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Login attempt on roundcube triggers error 500 on /sso/login endpoint (ZeroDivisionError: division by zero)
Hi everybody!

Thanks in advance for your help!

## Before you open your issue
- [x] Check if no issue or pull-request for this already exists.
- [x] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- [x] You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- [x] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.

## Environment & Versions
### Environment
 - [x] docker-compose
 - [ ] kubernetes
 - [ ] docker swarm

### Versions
1.9

## Description
A user encountered an error 500 while trying to log into an account on roundcube. The 500 comes from a ZeroDivisionError in one of the jinja templates (see logs below).

## Replication Steps
Unfortunately I could not reproduce it so far. Apparently it happened on the first login attempt, even though I suspect that it had to do with rate limiting, since there was a message about rate limiting right before the error (see below). Apparently the number of fields on the sso form is zero?

The user also reports that it is now working (also when rapidly logging out and logging in again).

## Expected behaviour
No error 500 and successful login.

## Logs

````markdown
```
admin_1 | [2022-01-10 14:59:49,322] WARNING in limiter: Authentication attempt from <REDACTED IP OF USER> for <REDACTED MAIL ACCOUNT> has been rate-limited.
admin_1 | [2022-01-10 14:59:49,334] ERROR in app: Exception on /sso/login [POST]
admin_1 | Traceback (most recent call last):
admin_1 | File "/usr/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
admin_1 | response = self.full_dispatch_request()
admin_1 | File "/usr/lib/python3.9/site-packages/flask/app.py", line 1518, in full_dispatch_request
admin_1 | rv = self.handle_user_exception(e)
admin_1 | File "/usr/lib/python3.9/site-packages/flask/app.py", line 1516, in full_dispatch_request
admin_1 | rv = self.dispatch_request()
admin_1 | File "/usr/lib/python3.9/site-packages/flask/app.py", line 1502, in dispatch_request
admin_1 | return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
admin_1 | File "/app/mailu/sso/views/base.py", line 36, in login
admin_1 | return flask.render_template('login.html', form=form)
admin_1 | File "/usr/lib/python3.9/site-packages/flask/templating.py", line 147, in render_template
admin_1 | return _render(
admin_1 | File "/usr/lib/python3.9/site-packages/flask/templating.py", line 128, in _render
admin_1 | rv = template.render(context)
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/environment.py", line 1304, in render
admin_1 | self.environment.handle_exception()
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/environment.py", line 925, in handle_exception
admin_1 | raise rewrite_traceback_stack(source=source)
admin_1 | File "/app/mailu/sso/templates/login.html", line 1, in top-level template code
admin_1 | {%- extends "form_sso.html" %}
admin_1 | File "/app/mailu/sso/templates/form_sso.html", line 1, in top-level template code
admin_1 | {%- extends "base_sso.html" %}
admin_1 | File "/app/mailu/sso/templates/base_sso.html", line 70, in top-level template code
admin_1 | {%- block content %}{%- endblock %}
admin_1 | File "/app/mailu/sso/templates/form_sso.html", line 4, in block 'content'
admin_1 | {%- call macros.card() %}
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/runtime.py", line 828, in _invoke
admin_1 | rv = self._func(*arguments)
admin_1 | File "/app/mailu/ui/templates/macros.html", line 84, in template
admin_1 | {{- caller() }}
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/runtime.py", line 828, in _invoke
admin_1 | rv = self._func(*arguments)
admin_1 | File "/app/mailu/sso/templates/form_sso.html", line 8, in template
admin_1 | {{ macros.form_fields(fields, label=False, class="btn btn-default") }}
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/runtime.py", line 828, in _invoke
admin_1 | rv = self._func(*arguments)
admin_1 | File "/app/mailu/ui/templates/macros.html", line 22, in template
admin_1 | {%- set width = (12 / fields|length)|int %}
admin_1 | ZeroDivisionError: division by zero
```
````
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `core/admin/mailu/sso/views/base.py`
Content:
```
1 from werkzeug.utils import redirect
2 from mailu import models, utils
3 from mailu.sso import sso, forms
4 from mailu.ui import access
5 
6 from flask import current_app as app
7 import flask
8 import flask_login
9 
10 @sso.route('/login', methods=['GET', 'POST'])
11 def login():
12     client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)
13     form = forms.LoginForm()
14     form.submitAdmin.label.text = form.submitAdmin.label.text + ' Admin'
15     form.submitWebmail.label.text = form.submitWebmail.label.text + ' Webmail'
16 
17     fields = []
18     if str(app.config["WEBMAIL"]).upper() != "NONE":
19         fields.append(form.submitWebmail)
20     if str(app.config["ADMIN"]).upper() != "FALSE":
21         fields.append(form.submitAdmin)
22     fields = [fields]
23 
24     if form.validate_on_submit():
25         if form.submitAdmin.data:
26             destination = app.config['WEB_ADMIN']
27         elif form.submitWebmail.data:
28             destination = app.config['WEB_WEBMAIL']
29         device_cookie, device_cookie_username = utils.limiter.parse_device_cookie(flask.request.cookies.get('rate_limit'))
30         username = form.email.data
31         if username != device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip):
32             flask.flash('Too many attempts from your IP (rate-limit)', 'error')
33             return flask.render_template('login.html', form=form)
34         if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username):
35             flask.flash('Too many attempts for this user (rate-limit)', 'error')
36             return flask.render_template('login.html', form=form)
37         user = models.User.login(username, form.pw.data)
38         if user:
39             flask.session.regenerate()
40             flask_login.login_user(user)
41             response = flask.redirect(destination)
42             response.set_cookie('rate_limit', utils.limiter.device_cookie(username), max_age=31536000, path=flask.url_for('sso.login'), secure=app.config['SESSION_COOKIE_SECURE'], httponly=True)
43             flask.current_app.logger.info(f'Login succeeded for {username} from {client_ip}.')
44             return response
45         else:
46             utils.limiter.rate_limit_user(username, client_ip, device_cookie, device_cookie_username) if models.User.get(username) else utils.limiter.rate_limit_ip(client_ip)
47             flask.current_app.logger.warn(f'Login failed for {username} from {client_ip}.')
48             flask.flash('Wrong e-mail or password', 'error')
49     return flask.render_template('login.html', form=form, fields=fields)
50 
51 @sso.route('/logout', methods=['GET'])
52 @access.authenticated
53 def logout():
54     flask_login.logout_user()
55     flask.session.destroy()
56     return flask.redirect(flask.url_for('.login'))
57 
58 
```

--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 
 
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 
 
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/core/admin/mailu/sso/views/base.py b/core/admin/mailu/sso/views/base.py --- a/core/admin/mailu/sso/views/base.py +++ b/core/admin/mailu/sso/views/base.py @@ -30,10 +30,10 @@ username = form.email.data if username != device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip): flask.flash('Too many attempts from your IP (rate-limit)', 'error') - return flask.render_template('login.html', form=form) + return flask.render_template('login.html', form=form, fields=fields) if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username): flask.flash('Too many attempts for this user (rate-limit)', 'error') - return flask.render_template('login.html', form=form) + return flask.render_template('login.html', form=form, fields=fields) user = models.User.login(username, form.pw.data) if user: flask.session.regenerate()
{"golden_diff": "diff --git a/core/admin/mailu/sso/views/base.py b/core/admin/mailu/sso/views/base.py\n--- a/core/admin/mailu/sso/views/base.py\n+++ b/core/admin/mailu/sso/views/base.py\n@@ -30,10 +30,10 @@\n username = form.email.data\n if username != device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip):\n flask.flash('Too many attempts from your IP (rate-limit)', 'error')\n- return flask.render_template('login.html', form=form)\n+ return flask.render_template('login.html', form=form, fields=fields)\n if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username):\n flask.flash('Too many attempts for this user (rate-limit)', 'error')\n- return flask.render_template('login.html', form=form)\n+ return flask.render_template('login.html', form=form, fields=fields)\n user = models.User.login(username, form.pw.data)\n if user:\n flask.session.regenerate()\n", "issue": "Login attempt on roundcube triggers error 500 on /sso/login endpoint (ZeroDivisionError: division by zero)\nHi everybody!\r\n\r\nThanks in advance for your help!\r\n\r\n## Before you open your issue\r\n- [x] Check if no issue or pull-request for this already exists.\r\n- [x] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- [x] You understand `Mailu` is made by volunteers in their **free time** \u2014 be conscise, civil and accept that delays can occur.\r\n- [x] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\n## Environment & Versions\r\n### Environment\r\n - [x] docker-compose\r\n - [ ] kubernetes\r\n - [ ] docker swarm\r\n\r\n### Versions\r\n1.9\r\n\r\n## Description\r\nA user encountered an error 500 while trying to log into an account on roundcube. The 500 comes from a ZeroDivisionError in one of the jinja templates (see logs below).\r\n\r\n## Replication Steps\r\nUnfortunately I could not reproduce it so far. Apparently it happened on the first login attempt, even though I suspect that it had to do with rate limiting, since there was a message about rate limiting right before the error (see below). 
Apparently the number of fields on the sso form is zero?\r\n\r\nThe user also reports, that it is now working (also when rapidly logging out and logging in again).\r\n\r\n## Expected behaviour\r\nNo error 500 and succesful login.\r\n\r\n## Logs\r\n\r\n````markdown\r\n```\r\nadmin_1 | [2022-01-10 14:59:49,322] WARNING in limiter: Authentication attempt from <REDACTED IP OF USER> for <REDACTED MAIL ACCOUNT> has been rate-limited.\r\nadmin_1 | [2022-01-10 14:59:49,334] ERROR in app: Exception on /sso/login [POST]\r\nadmin_1 | Traceback (most recent call last):\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/app.py\", line 2073, in wsgi_app\r\nadmin_1 | response = self.full_dispatch_request()\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/app.py\", line 1518, in full_dispatch_request\r\nadmin_1 | rv = self.handle_user_exception(e)\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/app.py\", line 1516, in full_dispatch_request\r\nadmin_1 | rv = self.dispatch_request()\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/app.py\", line 1502, in dispatch_request\r\nadmin_1 | return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)\r\nadmin_1 | File \"/app/mailu/sso/views/base.py\", line 36, in login\r\nadmin_1 | return flask.render_template('login.html', form=form)\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/templating.py\", line 147, in render_template\r\nadmin_1 | return _render(\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/templating.py\", line 128, in _render\r\nadmin_1 | rv = template.render(context)\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/environment.py\", line 1304, in render\r\nadmin_1 | self.environment.handle_exception()\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/environment.py\", line 925, in handle_exception\r\nadmin_1 | raise rewrite_traceback_stack(source=source)\r\nadmin_1 | File \"/app/mailu/sso/templates/login.html\", line 1, in top-level template code\r\nadmin_1 | {%- extends \"form_sso.html\" %}\r\nadmin_1 | File \"/app/mailu/sso/templates/form_sso.html\", line 1, in top-level template code\r\nadmin_1 | {%- extends \"base_sso.html\" %}\r\nadmin_1 | File \"/app/mailu/sso/templates/base_sso.html\", line 70, in top-level template code\r\nadmin_1 | {%- block content %}{%- endblock %}\r\nadmin_1 | File \"/app/mailu/sso/templates/form_sso.html\", line 4, in block 'content'\r\nadmin_1 | {%- call macros.card() %}\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/runtime.py\", line 828, in _invoke\r\nadmin_1 | rv = self._func(*arguments)\r\nadmin_1 | File \"/app/mailu/ui/templates/macros.html\", line 84, in template\r\nadmin_1 | {{- caller() }}\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/runtime.py\", line 828, in _invoke\r\nadmin_1 | rv = self._func(*arguments)\r\nadmin_1 | File \"/app/mailu/sso/templates/form_sso.html\", line 8, in template\r\nadmin_1 | {{ macros.form_fields(fields, label=False, class=\"btn btn-default\") }}\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/runtime.py\", line 828, in _invoke\r\nadmin_1 | rv = self._func(*arguments)\r\nadmin_1 | File \"/app/mailu/ui/templates/macros.html\", line 22, in template\r\nadmin_1 | {%- set width = (12 / fields|length)|int %}\r\nadmin_1 | ZeroDivisionError: division by zero\r\n\r\n```\r\n````\r\n\n", "before_files": [{"content": "from werkzeug.utils import redirect\nfrom mailu import models, utils\nfrom mailu.sso import sso, forms\nfrom mailu.ui import access\n\nfrom 
flask import current_app as app\nimport flask\nimport flask_login\n\[email protected]('/login', methods=['GET', 'POST'])\ndef login():\n client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)\n form = forms.LoginForm()\n form.submitAdmin.label.text = form.submitAdmin.label.text + ' Admin'\n form.submitWebmail.label.text = form.submitWebmail.label.text + ' Webmail'\n\n fields = []\n if str(app.config[\"WEBMAIL\"]).upper() != \"NONE\":\n fields.append(form.submitWebmail)\n if str(app.config[\"ADMIN\"]).upper() != \"FALSE\":\n fields.append(form.submitAdmin)\n fields = [fields]\n\n if form.validate_on_submit():\n if form.submitAdmin.data:\n destination = app.config['WEB_ADMIN']\n elif form.submitWebmail.data:\n destination = app.config['WEB_WEBMAIL']\n device_cookie, device_cookie_username = utils.limiter.parse_device_cookie(flask.request.cookies.get('rate_limit'))\n username = form.email.data\n if username != device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip):\n flask.flash('Too many attempts from your IP (rate-limit)', 'error')\n return flask.render_template('login.html', form=form)\n if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username):\n flask.flash('Too many attempts for this user (rate-limit)', 'error')\n return flask.render_template('login.html', form=form)\n user = models.User.login(username, form.pw.data)\n if user:\n flask.session.regenerate()\n flask_login.login_user(user)\n response = flask.redirect(destination)\n response.set_cookie('rate_limit', utils.limiter.device_cookie(username), max_age=31536000, path=flask.url_for('sso.login'), secure=app.config['SESSION_COOKIE_SECURE'], httponly=True)\n flask.current_app.logger.info(f'Login succeeded for {username} from {client_ip}.')\n return response\n else:\n utils.limiter.rate_limit_user(username, client_ip, device_cookie, device_cookie_username) if models.User.get(username) else utils.limiter.rate_limit_ip(client_ip)\n flask.current_app.logger.warn(f'Login failed for {username} from {client_ip}.')\n flask.flash('Wrong e-mail or password', 'error')\n return flask.render_template('login.html', form=form, fields=fields)\n\[email protected]('/logout', methods=['GET'])\[email protected]\ndef logout():\n flask_login.logout_user()\n flask.session.destroy()\n return flask.redirect(flask.url_for('.login'))\n\n", "path": "core/admin/mailu/sso/views/base.py"}], "after_files": [{"content": "from werkzeug.utils import redirect\nfrom mailu import models, utils\nfrom mailu.sso import sso, forms\nfrom mailu.ui import access\n\nfrom flask import current_app as app\nimport flask\nimport flask_login\n\[email protected]('/login', methods=['GET', 'POST'])\ndef login():\n client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)\n form = forms.LoginForm()\n form.submitAdmin.label.text = form.submitAdmin.label.text + ' Admin'\n form.submitWebmail.label.text = form.submitWebmail.label.text + ' Webmail'\n\n fields = []\n if str(app.config[\"WEBMAIL\"]).upper() != \"NONE\":\n fields.append(form.submitWebmail)\n if str(app.config[\"ADMIN\"]).upper() != \"FALSE\":\n fields.append(form.submitAdmin)\n fields = [fields]\n\n if form.validate_on_submit():\n if form.submitAdmin.data:\n destination = app.config['WEB_ADMIN']\n elif form.submitWebmail.data:\n destination = app.config['WEB_WEBMAIL']\n device_cookie, device_cookie_username = utils.limiter.parse_device_cookie(flask.request.cookies.get('rate_limit'))\n username = form.email.data\n if username != 
device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip):\n flask.flash('Too many attempts from your IP (rate-limit)', 'error')\n return flask.render_template('login.html', form=form, fields=fields)\n if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username):\n flask.flash('Too many attempts for this user (rate-limit)', 'error')\n return flask.render_template('login.html', form=form, fields=fields)\n user = models.User.login(username, form.pw.data)\n if user:\n flask.session.regenerate()\n flask_login.login_user(user)\n response = flask.redirect(destination)\n response.set_cookie('rate_limit', utils.limiter.device_cookie(username), max_age=31536000, path=flask.url_for('sso.login'), secure=app.config['SESSION_COOKIE_SECURE'], httponly=True)\n flask.current_app.logger.info(f'Login succeeded for {username} from {client_ip}.')\n return response\n else:\n utils.limiter.rate_limit_user(username, client_ip, device_cookie, device_cookie_username) if models.User.get(username) else utils.limiter.rate_limit_ip(client_ip)\n flask.current_app.logger.warn(f'Login failed for {username} from {client_ip}.')\n flask.flash('Wrong e-mail or password', 'error')\n return flask.render_template('login.html', form=form, fields=fields)\n\[email protected]('/logout', methods=['GET'])\[email protected]\ndef logout():\n flask_login.logout_user()\n flask.session.destroy()\n return flask.redirect(flask.url_for('.login'))\n\n", "path": "core/admin/mailu/sso/views/base.py"}]}
2,324
225
gh_patches_debug_20388
rasdani/github-patches
git_diff
vnpy__vnpy-1500
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- ubuntu  ctp导入问题 ## 环境 * 操作系统: Ubuntu 18.04 * Anaconda版本: Python 3.7 64位 * vn.py版本: DEV-2.0.1 branch 20190313(下载日期) ## Issue类型 三选一:Bug ## 预期程序行为 ``` from vnpy.gateway.ctp import ctp_gateway导入成功 ## 实际程序行为 '''from vnpy.gateway.ctp.ctp_gateway import CtpGateWay Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/vnpy/vnpy/vnpy/gateway/ctp/__init__.py", line 1, in <module> from .ctp_gateway import CtpGateway File "/home/vnpy/vnpy/vnpy/gateway/ctp/ctp_gateway.py", line 6, in <module> from vnpy.api.ctp import ( File "/home/vnpy/vnpy/vnpy/api/ctp/__init__.py", line 1, in <module> from .vnctpmd import MdApi ModuleNotFoundError: No module named 'vnpy.api.ctp.vnctpmd' ``` ## 重现步骤 ``` 删除setup下面的oes安装模块 git clone -b v2.0.1-DEV https://github.com/vnpy/vnpy cd vnpy vim setup.py #具体删除删除相关代码即可 chmod +x install.sh && ./install.sh # 安装会正常进行 ``` 针对Bug类型Issue,请提供具体重现步骤以及报错截图 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 import ast 2 import platform 3 import re 4 5 from setuptools import Extension, find_packages, setup 6 7 with open("vnpy/__init__.py", "rb") as f: 8 version_line = re.search( 9 r"__version__\s+=\s+(.*)", f.read().decode("utf-8") 10 ).group(1) 11 version = str(ast.literal_eval(version_line)) 12 13 if platform.uname().system == "Windows": 14 compiler_flags = ["/MP", "/std:c++17", # standard 15 "/O2", "/Ob2", "/Oi", "/Ot", "/Oy", "/GL", # Optimization 16 "/wd4819" # 936 code page 17 ] 18 extra_link_args = [] 19 else: 20 compiler_flags = ["-std=c++17", 21 "-Wno-delete-incomplete", "-Wno-sign-compare", 22 ] 23 extra_link_args = ["-lstdc++"] 24 25 vnctpmd = Extension("vnpy.api.ctp.vnctpmd", 26 [ 27 "vnpy/api/ctp/vnctp/vnctpmd/vnctpmd.cpp", 28 ], 29 include_dirs=["vnpy/api/ctp/include", "vnpy/api/ctp/vnctp", ], 30 define_macros=[], 31 undef_macros=[], 32 library_dirs=["vnpy/api/ctp/libs", "vnpy/api/ctp"], 33 libraries=["thostmduserapi", "thosttraderapi", ], 34 extra_compile_args=compiler_flags, 35 extra_link_args=extra_link_args, 36 depends=[], 37 runtime_library_dirs=["vnpy/api/ctp"], 38 language="cpp", 39 ) 40 vnctptd = Extension("vnpy.api.ctp.vnctptd", 41 [ 42 "vnpy/api/ctp/vnctp/vnctptd/vnctptd.cpp", 43 ], 44 include_dirs=["vnpy/api/ctp/include", "vnpy/api/ctp/vnctp", ], 45 define_macros=[], 46 undef_macros=[], 47 library_dirs=["vnpy/api/ctp/libs", "vnpy/api/ctp"], 48 libraries=["thostmduserapi", "thosttraderapi", ], 49 extra_compile_args=compiler_flags, 50 extra_link_args=extra_link_args, 51 runtime_library_dirs=["vnpy/api/ctp"], 52 depends=[], 53 language="cpp", 54 ) 55 vnoes = Extension("vnpy.api.oes.vnoes", 56 [ 57 "vnpy/api/oes/vnoes/generated_files/classes_1.cpp", 58 "vnpy/api/oes/vnoes/generated_files/classes_2.cpp", 59 "vnpy/api/oes/vnoes/generated_files/module.cpp", 60 ], 61 include_dirs=["vnpy/api/oes/include", "vnpy/api/oes/vnoes", ], 62 define_macros=[("BRIGAND_NO_BOOST_SUPPORT", "1")], 63 undef_macros=[], 64 library_dirs=["vnpy/api/oes/libs"], 65 libraries=["oes_api"], 66 extra_compile_args=compiler_flags, 67 extra_link_args=extra_link_args, 68 depends=[], 69 language="cpp", 70 ) 71 72 if platform.uname().system == "Windows": 73 # use pre-built pyd for windows ( support python 3.7 only ) 74 ext_modules = [] 75 else: 76 ext_modules = [vnctptd, vnctpmd, vnoes] 77 78 pkgs = find_packages() 79 80 setup( 81 name="vnpy", 82 
version=version, 83 include_package_data=True, 84 packages=pkgs, 85 package_data={"": [ 86 "*.json", "*.md", "*.ico", "*.ini", 87 "*.dll", "*.so", "*.pyd" 88 ]}, 89 install_requires=[], 90 ext_modules=ext_modules 91 ) 92 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -34,7 +34,7 @@ extra_compile_args=compiler_flags, extra_link_args=extra_link_args, depends=[], - runtime_library_dirs=["vnpy/api/ctp"], + runtime_library_dirs=["$ORIGIN"], language="cpp", ) vnctptd = Extension("vnpy.api.ctp.vnctptd", @@ -48,7 +48,7 @@ libraries=["thostmduserapi", "thosttraderapi", ], extra_compile_args=compiler_flags, extra_link_args=extra_link_args, - runtime_library_dirs=["vnpy/api/ctp"], + runtime_library_dirs=["$ORIGIN"], depends=[], language="cpp", )
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,7 +34,7 @@\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n depends=[],\n- runtime_library_dirs=[\"vnpy/api/ctp\"],\n+ runtime_library_dirs=[\"$ORIGIN\"],\n language=\"cpp\",\n )\n vnctptd = Extension(\"vnpy.api.ctp.vnctptd\",\n@@ -48,7 +48,7 @@\n libraries=[\"thostmduserapi\", \"thosttraderapi\", ],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n- runtime_library_dirs=[\"vnpy/api/ctp\"],\n+ runtime_library_dirs=[\"$ORIGIN\"],\n depends=[],\n language=\"cpp\",\n )\n", "issue": "ubuntu\u3000 ctp\u5bfc\u5165\u95ee\u9898\n## \u73af\u5883\r\n\r\n* \u64cd\u4f5c\u7cfb\u7edf: Ubuntu 18.04\r\n* Anaconda\u7248\u672c: Python 3.7 64\u4f4d\r\n* vn.py\u7248\u672c: DEV-2.0.1 branch 20190313\uff08\u4e0b\u8f7d\u65e5\u671f\uff09\r\n\r\n## Issue\u7c7b\u578b\r\n\u4e09\u9009\u4e00\uff1aBu\uff47\r\n\r\n## \u9884\u671f\u7a0b\u5e8f\u884c\u4e3a\r\n```\r\nfrom vnpy.gateway.ctp import ctp_gateway\u5bfc\u5165\u6210\u529f\r\n## \u5b9e\u9645\u7a0b\u5e8f\u884c\u4e3a\r\n'''from vnpy.gateway.ctp.ctp_gateway import CtpGateWay\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vnpy/vnpy/vnpy/gateway/ctp/__init__.py\", line 1, in <module>\r\n from .ctp_gateway import CtpGateway\r\n File \"/home/vnpy/vnpy/vnpy/gateway/ctp/ctp_gateway.py\", line 6, in <module>\r\n from vnpy.api.ctp import (\r\n File \"/home/vnpy/vnpy/vnpy/api/ctp/__init__.py\", line 1, in <module>\r\n from .vnctpmd import MdApi\r\nModuleNotFoundError: No module named 'vnpy.api.ctp.vnctpmd'\r\n```\r\n\r\n## \u91cd\u73b0\u6b65\u9aa4\r\n\r\n```\r\n\u5220\u9664setup\u4e0b\u9762\u7684oes\u5b89\u88c5\u6a21\u5757 \r\ngit clone -b v2.0.1-DEV https://github.com/vnpy/vnpy\r\ncd vnpy\r\nvim setup.py #\u5177\u4f53\u5220\u9664\u5220\u9664\u76f8\u5173\u4ee3\u7801\u5373\u53ef \r\nchmod +x install.sh && ./install.sh \r\n# \u5b89\u88c5\u4f1a\u6b63\u5e38\u8fdb\u884c \r\n```\r\n\r\n\u9488\u5bf9Bug\u7c7b\u578bIssue\uff0c\u8bf7\u63d0\u4f9b\u5177\u4f53\u91cd\u73b0\u6b65\u9aa4\u4ee5\u53ca\u62a5\u9519\u622a\u56fe\r\n\r\n\n", "before_files": [{"content": "import ast\nimport platform\nimport re\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"vnpy/__init__.py\", \"rb\") as f:\n version_line = re.search(\n r\"__version__\\s+=\\s+(.*)\", f.read().decode(\"utf-8\")\n ).group(1)\n version = str(ast.literal_eval(version_line))\n\nif platform.uname().system == \"Windows\":\n compiler_flags = [\"/MP\", \"/std:c++17\", # standard\n \"/O2\", \"/Ob2\", \"/Oi\", \"/Ot\", \"/Oy\", \"/GL\", # Optimization\n \"/wd4819\" # 936 code page\n ]\n extra_link_args = []\nelse:\n compiler_flags = [\"-std=c++17\",\n \"-Wno-delete-incomplete\", \"-Wno-sign-compare\",\n ]\n extra_link_args = [\"-lstdc++\"]\n\nvnctpmd = Extension(\"vnpy.api.ctp.vnctpmd\",\n [\n \"vnpy/api/ctp/vnctp/vnctpmd/vnctpmd.cpp\",\n ],\n include_dirs=[\"vnpy/api/ctp/include\", \"vnpy/api/ctp/vnctp\", ],\n define_macros=[],\n undef_macros=[],\n library_dirs=[\"vnpy/api/ctp/libs\", \"vnpy/api/ctp\"],\n libraries=[\"thostmduserapi\", \"thosttraderapi\", ],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n depends=[],\n runtime_library_dirs=[\"vnpy/api/ctp\"],\n language=\"cpp\",\n )\nvnctptd = Extension(\"vnpy.api.ctp.vnctptd\",\n [\n \"vnpy/api/ctp/vnctp/vnctptd/vnctptd.cpp\",\n ],\n include_dirs=[\"vnpy/api/ctp/include\", \"vnpy/api/ctp/vnctp\", ],\n define_macros=[],\n undef_macros=[],\n 
library_dirs=[\"vnpy/api/ctp/libs\", \"vnpy/api/ctp\"],\n libraries=[\"thostmduserapi\", \"thosttraderapi\", ],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n runtime_library_dirs=[\"vnpy/api/ctp\"],\n depends=[],\n language=\"cpp\",\n )\nvnoes = Extension(\"vnpy.api.oes.vnoes\",\n [\n \"vnpy/api/oes/vnoes/generated_files/classes_1.cpp\",\n \"vnpy/api/oes/vnoes/generated_files/classes_2.cpp\",\n \"vnpy/api/oes/vnoes/generated_files/module.cpp\",\n ],\n include_dirs=[\"vnpy/api/oes/include\", \"vnpy/api/oes/vnoes\", ],\n define_macros=[(\"BRIGAND_NO_BOOST_SUPPORT\", \"1\")],\n undef_macros=[],\n library_dirs=[\"vnpy/api/oes/libs\"],\n libraries=[\"oes_api\"],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n depends=[],\n language=\"cpp\",\n )\n\nif platform.uname().system == \"Windows\":\n # use pre-built pyd for windows ( support python 3.7 only )\n ext_modules = []\nelse:\n ext_modules = [vnctptd, vnctpmd, vnoes]\n\npkgs = find_packages()\n\nsetup(\n name=\"vnpy\",\n version=version,\n include_package_data=True,\n packages=pkgs,\n package_data={\"\": [\n \"*.json\", \"*.md\", \"*.ico\", \"*.ini\",\n \"*.dll\", \"*.so\", \"*.pyd\"\n ]},\n install_requires=[],\n ext_modules=ext_modules\n)\n", "path": "setup.py"}], "after_files": [{"content": "import ast\nimport platform\nimport re\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"vnpy/__init__.py\", \"rb\") as f:\n version_line = re.search(\n r\"__version__\\s+=\\s+(.*)\", f.read().decode(\"utf-8\")\n ).group(1)\n version = str(ast.literal_eval(version_line))\n\nif platform.uname().system == \"Windows\":\n compiler_flags = [\"/MP\", \"/std:c++17\", # standard\n \"/O2\", \"/Ob2\", \"/Oi\", \"/Ot\", \"/Oy\", \"/GL\", # Optimization\n \"/wd4819\" # 936 code page\n ]\n extra_link_args = []\nelse:\n compiler_flags = [\"-std=c++17\",\n \"-Wno-delete-incomplete\", \"-Wno-sign-compare\",\n ]\n extra_link_args = [\"-lstdc++\"]\n\nvnctpmd = Extension(\"vnpy.api.ctp.vnctpmd\",\n [\n \"vnpy/api/ctp/vnctp/vnctpmd/vnctpmd.cpp\",\n ],\n include_dirs=[\"vnpy/api/ctp/include\", \"vnpy/api/ctp/vnctp\", ],\n define_macros=[],\n undef_macros=[],\n library_dirs=[\"vnpy/api/ctp/libs\", \"vnpy/api/ctp\"],\n libraries=[\"thostmduserapi\", \"thosttraderapi\", ],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n depends=[],\n runtime_library_dirs=[\"$ORIGIN\"],\n language=\"cpp\",\n )\nvnctptd = Extension(\"vnpy.api.ctp.vnctptd\",\n [\n \"vnpy/api/ctp/vnctp/vnctptd/vnctptd.cpp\",\n ],\n include_dirs=[\"vnpy/api/ctp/include\", \"vnpy/api/ctp/vnctp\", ],\n define_macros=[],\n undef_macros=[],\n library_dirs=[\"vnpy/api/ctp/libs\", \"vnpy/api/ctp\"],\n libraries=[\"thostmduserapi\", \"thosttraderapi\", ],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n runtime_library_dirs=[\"$ORIGIN\"],\n depends=[],\n language=\"cpp\",\n )\nvnoes = Extension(\"vnpy.api.oes.vnoes\",\n [\n \"vnpy/api/oes/vnoes/generated_files/classes_1.cpp\",\n \"vnpy/api/oes/vnoes/generated_files/classes_2.cpp\",\n \"vnpy/api/oes/vnoes/generated_files/module.cpp\",\n ],\n include_dirs=[\"vnpy/api/oes/include\", \"vnpy/api/oes/vnoes\", ],\n define_macros=[(\"BRIGAND_NO_BOOST_SUPPORT\", \"1\")],\n undef_macros=[],\n library_dirs=[\"vnpy/api/oes/libs\"],\n libraries=[\"oes_api\"],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n depends=[],\n language=\"cpp\",\n )\n\nif platform.uname().system == \"Windows\":\n # use pre-built pyd for windows ( support 
python 3.7 only )\n ext_modules = []\nelse:\n ext_modules = [vnctptd, vnctpmd, vnoes]\n\npkgs = find_packages()\n\nsetup(\n name=\"vnpy\",\n version=version,\n include_package_data=True,\n packages=pkgs,\n package_data={\"\": [\n \"*.json\", \"*.md\", \"*.ico\", \"*.ini\",\n \"*.dll\", \"*.so\", \"*.pyd\"\n ]},\n install_requires=[],\n ext_modules=ext_modules\n)\n", "path": "setup.py"}]}
1,594
178
gh_patches_debug_37915
rasdani/github-patches
git_diff
facebookresearch__ParlAI-3457
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Issue terminal chat service Hello, First of all thank you for this amazing framework. I was trying to run the terminal chat example (code from today without changes except port). It seems that there are only 'max_worker' clients possible to connect in total. For example when I set max_worker=1 in the config.yml, I can connect with the client one time successfully. But when I stop the client ('[DONE]'), start it again and it gets stuck right at the beginning. How can I prevent the server from getting stuck once >max_workers clients have been connected? I already tried removing the agent from memory, however it seems that the issue is that the thread is just not ending. ![image](https://user-images.githubusercontent.com/32954413/107663358-94f0e000-6c8b-11eb-8cd3-97fd1806197a.png) I use Python 3.7.5 and Ubuntu 18.04 LTS. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `parlai/chat_service/services/terminal_chat/client.py` Content: ``` 1 #!/usr/bin/env python3 2 3 # Copyright (c) Facebook, Inc. and its affiliates. 4 # This source code is licensed under the MIT license found in the 5 # LICENSE file in the root directory of this source tree. 6 7 import json 8 import uuid 9 import websocket 10 import time 11 import threading 12 from parlai.core.params import ParlaiParser 13 14 15 def _get_rand_id(): 16 """ 17 :return: The string of a random id using uuid4 18 """ 19 return str(uuid.uuid4()) 20 21 22 def _prBlueBG(text): 23 """ 24 Print given in text with a blue background. 25 26 :param text: The text to be printed 27 """ 28 print("\033[44m{}\033[0m".format(text), sep="") 29 30 31 def on_message(ws, message): 32 """ 33 Prints the incoming message from the server. 34 35 :param ws: a WebSocketApp 36 :param message: json with 'text' field to be printed 37 """ 38 incoming_message = json.loads(message) 39 print("\033[0m\n") 40 print("Bot: " + incoming_message['text']) 41 quick_replies = incoming_message.get('quick_replies') 42 if quick_replies is not None and len(quick_replies) > 0: 43 print(f"\nOptions: [{'|'.join(quick_replies)}]") 44 print("\033[44m\n") 45 46 47 def on_error(ws, error): 48 """ 49 Prints an error, if occurs. 50 51 :param ws: WebSocketApp 52 :param error: An error 53 """ 54 print(error) 55 56 57 def on_close(ws): 58 """ 59 Cleanup before closing connection. 60 61 :param ws: WebSocketApp 62 """ 63 # Reset color formatting if necessary 64 print("\033[0m") 65 print("Connection closed") 66 67 68 def _run(ws, id): 69 """ 70 Takes user input and sends it to a websocket. 71 72 :param ws: websocket.WebSocketApp 73 """ 74 while True: 75 x = input("\033[44m Me: ") 76 print("\033[0m", end="") 77 data = {} 78 data['id'] = id 79 data['text'] = x 80 json_data = json.dumps(data) 81 ws.send(json_data) 82 time.sleep(1) 83 if x == "[DONE]": 84 break 85 ws.close() 86 87 88 def on_open(ws): 89 """ 90 Starts a new thread that loops, taking user input and sending it to the websocket. 91 92 :param ws: websocket.WebSocketApp that sends messages to a terminal_manager 93 """ 94 id = _get_rand_id() 95 threading.Thread(target=_run, args=(ws, id)).start() 96 97 98 def setup_args(): 99 """ 100 Set up args, specifically for the port number. 101 102 :return: A parser that parses the port from commandline arguments. 
103 """ 104 parser = ParlaiParser(False, False) 105 parser_grp = parser.add_argument_group('Terminal Chat') 106 parser_grp.add_argument( 107 '--port', default=35496, type=int, help='Port to run the terminal chat server' 108 ) 109 return parser.parse_args() 110 111 112 if __name__ == "__main__": 113 opt = setup_args() 114 port = opt.get('port', 34596) 115 print("Connecting to port: ", port) 116 ws = websocket.WebSocketApp( 117 "ws://localhost:{}/websocket".format(port), 118 on_message=on_message, 119 on_error=on_error, 120 on_close=on_close, 121 ) 122 ws.on_open = on_open 123 ws.run_forever() 124 ``` Path: `parlai/chat_service/tasks/chatbot/worlds.py` Content: ``` 1 #!/usr/bin/env python3 2 3 # Copyright (c) Facebook, Inc. and its affiliates. 4 # This source code is licensed under the MIT license found in the 5 # LICENSE file in the root directory of this source tree. 6 # 7 # py parlai/chat_service/tasks/overworld_demo/run.py --debug --verbose 8 9 from parlai.core.worlds import World 10 from parlai.chat_service.services.messenger.worlds import OnboardWorld 11 from parlai.core.agents import create_agent_from_shared 12 13 14 # ---------- Chatbot demo ---------- # 15 class MessengerBotChatOnboardWorld(OnboardWorld): 16 """ 17 Example messenger onboarding world for Chatbot Model. 18 """ 19 20 @staticmethod 21 def generate_world(opt, agents): 22 return MessengerBotChatOnboardWorld(opt=opt, agent=agents[0]) 23 24 def parley(self): 25 self.episodeDone = True 26 27 28 class MessengerBotChatTaskWorld(World): 29 """ 30 Example one person world that talks to a provided agent (bot). 31 """ 32 33 MAX_AGENTS = 1 34 MODEL_KEY = 'blender_90M' 35 36 def __init__(self, opt, agent, bot): 37 self.agent = agent 38 self.episodeDone = False 39 self.model = bot 40 self.first_time = True 41 42 @staticmethod 43 def generate_world(opt, agents): 44 if opt['models'] is None: 45 raise RuntimeError("Model must be specified") 46 return MessengerBotChatTaskWorld( 47 opt, 48 agents[0], 49 create_agent_from_shared( 50 opt['shared_bot_params'][MessengerBotChatTaskWorld.MODEL_KEY] 51 ), 52 ) 53 54 @staticmethod 55 def assign_roles(agents): 56 agents[0].disp_id = 'ChatbotAgent' 57 58 def parley(self): 59 if self.first_time: 60 self.agent.observe( 61 { 62 'id': 'World', 63 'text': 'Welcome to the ParlAI Chatbot demo. ' 64 'You are now paired with a bot - feel free to send a message.' 65 'Type [DONE] to finish the chat, or [RESET] to reset the dialogue history.', 66 } 67 ) 68 self.first_time = False 69 a = self.agent.act() 70 if a is not None: 71 if '[DONE]' in a['text']: 72 self.episodeDone = True 73 elif '[RESET]' in a['text']: 74 self.model.reset() 75 self.agent.observe({"text": "[History Cleared]", "episode_done": False}) 76 else: 77 print("===act====") 78 print(a) 79 print("~~~~~~~~~~~") 80 self.model.observe(a) 81 response = self.model.act() 82 print("===response====") 83 print(response) 84 print("~~~~~~~~~~~") 85 self.agent.observe(response) 86 87 def episode_done(self): 88 return self.episodeDone 89 90 def shutdown(self): 91 self.agent.shutdown() 92 93 94 # ---------- Overworld -------- # 95 class MessengerOverworld(World): 96 """ 97 World to handle moving agents to their proper places. 
98 """ 99 100 def __init__(self, opt, agent): 101 self.agent = agent 102 self.opt = opt 103 self.first_time = True 104 self.episodeDone = False 105 106 @staticmethod 107 def generate_world(opt, agents): 108 return MessengerOverworld(opt, agents[0]) 109 110 @staticmethod 111 def assign_roles(agents): 112 for a in agents: 113 a.disp_id = 'Agent' 114 115 def episode_done(self): 116 return self.episodeDone 117 118 def parley(self): 119 if self.first_time: 120 self.agent.observe( 121 { 122 'id': 'Overworld', 123 'text': 'Welcome to the overworld for the ParlAI messenger ' 124 'chatbot demo. Please type "begin" to start.', 125 'quick_replies': ['begin'], 126 } 127 ) 128 self.first_time = False 129 a = self.agent.act() 130 if a is not None and a['text'].lower() == 'begin': 131 self.episodeDone = True 132 return 'default' 133 elif a is not None: 134 self.agent.observe( 135 { 136 'id': 'Overworld', 137 'text': 'Invalid option. Please type "begin".', 138 'quick_replies': ['begin'], 139 } 140 ) 141 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/parlai/chat_service/services/terminal_chat/client.py b/parlai/chat_service/services/terminal_chat/client.py --- a/parlai/chat_service/services/terminal_chat/client.py +++ b/parlai/chat_service/services/terminal_chat/client.py @@ -11,6 +11,13 @@ import threading from parlai.core.params import ParlaiParser +# the socket callback functions operate asynchronously. +# upon exit of a chat, we do not want the user to view any additional messages from the server. +# alas, it is necessary to send two messages ([DONE], and EXIT) in order to fully exist the world pool +# to prevent receiving a message after sending [DONE], we track the user's state with +# this global variable. +RUNNING = True + def _get_rand_id(): """ @@ -35,6 +42,8 @@ :param ws: a WebSocketApp :param message: json with 'text' field to be printed """ + if not RUNNING: + return incoming_message = json.loads(message) print("\033[0m\n") print("Bot: " + incoming_message['text']) @@ -71,16 +80,22 @@ :param ws: websocket.WebSocketApp """ + global RUNNING while True: x = input("\033[44m Me: ") print("\033[0m", end="") data = {} data['id'] = id data['text'] = x + if x == "[DONE]": + RUNNING = False json_data = json.dumps(data) ws.send(json_data) time.sleep(1) if x == "[DONE]": + time.sleep(1) + data['text'] = 'EXIT' + ws.send(json.dumps(data)) break ws.close() diff --git a/parlai/chat_service/tasks/chatbot/worlds.py b/parlai/chat_service/tasks/chatbot/worlds.py --- a/parlai/chat_service/tasks/chatbot/worlds.py +++ b/parlai/chat_service/tasks/chatbot/worlds.py @@ -121,12 +121,15 @@ { 'id': 'Overworld', 'text': 'Welcome to the overworld for the ParlAI messenger ' - 'chatbot demo. Please type "begin" to start.', - 'quick_replies': ['begin'], + 'chatbot demo. Please type "begin" to start, or "exit" to exit', + 'quick_replies': ['begin', 'exit'], } ) self.first_time = False a = self.agent.act() + if a is not None and a['text'].lower() == 'exit': + self.episode_done = True + return 'EXIT' if a is not None and a['text'].lower() == 'begin': self.episodeDone = True return 'default'
{"golden_diff": "diff --git a/parlai/chat_service/services/terminal_chat/client.py b/parlai/chat_service/services/terminal_chat/client.py\n--- a/parlai/chat_service/services/terminal_chat/client.py\n+++ b/parlai/chat_service/services/terminal_chat/client.py\n@@ -11,6 +11,13 @@\n import threading\n from parlai.core.params import ParlaiParser\n \n+# the socket callback functions operate asynchronously.\n+# upon exit of a chat, we do not want the user to view any additional messages from the server.\n+# alas, it is necessary to send two messages ([DONE], and EXIT) in order to fully exist the world pool\n+# to prevent receiving a message after sending [DONE], we track the user's state with\n+# this global variable.\n+RUNNING = True\n+\n \n def _get_rand_id():\n \"\"\"\n@@ -35,6 +42,8 @@\n :param ws: a WebSocketApp\n :param message: json with 'text' field to be printed\n \"\"\"\n+ if not RUNNING:\n+ return\n incoming_message = json.loads(message)\n print(\"\\033[0m\\n\")\n print(\"Bot: \" + incoming_message['text'])\n@@ -71,16 +80,22 @@\n \n :param ws: websocket.WebSocketApp\n \"\"\"\n+ global RUNNING\n while True:\n x = input(\"\\033[44m Me: \")\n print(\"\\033[0m\", end=\"\")\n data = {}\n data['id'] = id\n data['text'] = x\n+ if x == \"[DONE]\":\n+ RUNNING = False\n json_data = json.dumps(data)\n ws.send(json_data)\n time.sleep(1)\n if x == \"[DONE]\":\n+ time.sleep(1)\n+ data['text'] = 'EXIT'\n+ ws.send(json.dumps(data))\n break\n ws.close()\n \ndiff --git a/parlai/chat_service/tasks/chatbot/worlds.py b/parlai/chat_service/tasks/chatbot/worlds.py\n--- a/parlai/chat_service/tasks/chatbot/worlds.py\n+++ b/parlai/chat_service/tasks/chatbot/worlds.py\n@@ -121,12 +121,15 @@\n {\n 'id': 'Overworld',\n 'text': 'Welcome to the overworld for the ParlAI messenger '\n- 'chatbot demo. Please type \"begin\" to start.',\n- 'quick_replies': ['begin'],\n+ 'chatbot demo. Please type \"begin\" to start, or \"exit\" to exit',\n+ 'quick_replies': ['begin', 'exit'],\n }\n )\n self.first_time = False\n a = self.agent.act()\n+ if a is not None and a['text'].lower() == 'exit':\n+ self.episode_done = True\n+ return 'EXIT'\n if a is not None and a['text'].lower() == 'begin':\n self.episodeDone = True\n return 'default'\n", "issue": "Issue terminal chat service\nHello,\r\n\r\nFirst of all thank you for this amazing framework.\r\nI was trying to run the terminal chat example (code from today without changes except port).\r\n\r\nIt seems that there are only 'max_worker' clients possible to connect in total. For example when I set max_worker=1 in the config.yml, I can connect with the client one time successfully. But when I stop the client ('[DONE]'), start it again and it gets stuck right at the beginning.\r\n\r\nHow can I prevent the server from getting stuck once >max_workers clients have been connected? I already tried removing the agent from memory, however it seems that the issue is that the thread is just not ending.\r\n\r\n![image](https://user-images.githubusercontent.com/32954413/107663358-94f0e000-6c8b-11eb-8cd3-97fd1806197a.png)\r\n\r\nI use Python 3.7.5 and Ubuntu 18.04 LTS.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport json\nimport uuid\nimport websocket\nimport time\nimport threading\nfrom parlai.core.params import ParlaiParser\n\n\ndef _get_rand_id():\n \"\"\"\n :return: The string of a random id using uuid4\n \"\"\"\n return str(uuid.uuid4())\n\n\ndef _prBlueBG(text):\n \"\"\"\n Print given in text with a blue background.\n\n :param text: The text to be printed\n \"\"\"\n print(\"\\033[44m{}\\033[0m\".format(text), sep=\"\")\n\n\ndef on_message(ws, message):\n \"\"\"\n Prints the incoming message from the server.\n\n :param ws: a WebSocketApp\n :param message: json with 'text' field to be printed\n \"\"\"\n incoming_message = json.loads(message)\n print(\"\\033[0m\\n\")\n print(\"Bot: \" + incoming_message['text'])\n quick_replies = incoming_message.get('quick_replies')\n if quick_replies is not None and len(quick_replies) > 0:\n print(f\"\\nOptions: [{'|'.join(quick_replies)}]\")\n print(\"\\033[44m\\n\")\n\n\ndef on_error(ws, error):\n \"\"\"\n Prints an error, if occurs.\n\n :param ws: WebSocketApp\n :param error: An error\n \"\"\"\n print(error)\n\n\ndef on_close(ws):\n \"\"\"\n Cleanup before closing connection.\n\n :param ws: WebSocketApp\n \"\"\"\n # Reset color formatting if necessary\n print(\"\\033[0m\")\n print(\"Connection closed\")\n\n\ndef _run(ws, id):\n \"\"\"\n Takes user input and sends it to a websocket.\n\n :param ws: websocket.WebSocketApp\n \"\"\"\n while True:\n x = input(\"\\033[44m Me: \")\n print(\"\\033[0m\", end=\"\")\n data = {}\n data['id'] = id\n data['text'] = x\n json_data = json.dumps(data)\n ws.send(json_data)\n time.sleep(1)\n if x == \"[DONE]\":\n break\n ws.close()\n\n\ndef on_open(ws):\n \"\"\"\n Starts a new thread that loops, taking user input and sending it to the websocket.\n\n :param ws: websocket.WebSocketApp that sends messages to a terminal_manager\n \"\"\"\n id = _get_rand_id()\n threading.Thread(target=_run, args=(ws, id)).start()\n\n\ndef setup_args():\n \"\"\"\n Set up args, specifically for the port number.\n\n :return: A parser that parses the port from commandline arguments.\n \"\"\"\n parser = ParlaiParser(False, False)\n parser_grp = parser.add_argument_group('Terminal Chat')\n parser_grp.add_argument(\n '--port', default=35496, type=int, help='Port to run the terminal chat server'\n )\n return parser.parse_args()\n\n\nif __name__ == \"__main__\":\n opt = setup_args()\n port = opt.get('port', 34596)\n print(\"Connecting to port: \", port)\n ws = websocket.WebSocketApp(\n \"ws://localhost:{}/websocket\".format(port),\n on_message=on_message,\n on_error=on_error,\n on_close=on_close,\n )\n ws.on_open = on_open\n ws.run_forever()\n", "path": "parlai/chat_service/services/terminal_chat/client.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n#\n# py parlai/chat_service/tasks/overworld_demo/run.py --debug --verbose\n\nfrom parlai.core.worlds import World\nfrom parlai.chat_service.services.messenger.worlds import OnboardWorld\nfrom parlai.core.agents import create_agent_from_shared\n\n\n# ---------- Chatbot demo ---------- #\nclass MessengerBotChatOnboardWorld(OnboardWorld):\n \"\"\"\n Example messenger onboarding world for Chatbot Model.\n \"\"\"\n\n @staticmethod\n def generate_world(opt, agents):\n return MessengerBotChatOnboardWorld(opt=opt, agent=agents[0])\n\n def parley(self):\n self.episodeDone = True\n\n\nclass MessengerBotChatTaskWorld(World):\n \"\"\"\n Example one person world that talks to a provided agent (bot).\n \"\"\"\n\n MAX_AGENTS = 1\n MODEL_KEY = 'blender_90M'\n\n def __init__(self, opt, agent, bot):\n self.agent = agent\n self.episodeDone = False\n self.model = bot\n self.first_time = True\n\n @staticmethod\n def generate_world(opt, agents):\n if opt['models'] is None:\n raise RuntimeError(\"Model must be specified\")\n return MessengerBotChatTaskWorld(\n opt,\n agents[0],\n create_agent_from_shared(\n opt['shared_bot_params'][MessengerBotChatTaskWorld.MODEL_KEY]\n ),\n )\n\n @staticmethod\n def assign_roles(agents):\n agents[0].disp_id = 'ChatbotAgent'\n\n def parley(self):\n if self.first_time:\n self.agent.observe(\n {\n 'id': 'World',\n 'text': 'Welcome to the ParlAI Chatbot demo. '\n 'You are now paired with a bot - feel free to send a message.'\n 'Type [DONE] to finish the chat, or [RESET] to reset the dialogue history.',\n }\n )\n self.first_time = False\n a = self.agent.act()\n if a is not None:\n if '[DONE]' in a['text']:\n self.episodeDone = True\n elif '[RESET]' in a['text']:\n self.model.reset()\n self.agent.observe({\"text\": \"[History Cleared]\", \"episode_done\": False})\n else:\n print(\"===act====\")\n print(a)\n print(\"~~~~~~~~~~~\")\n self.model.observe(a)\n response = self.model.act()\n print(\"===response====\")\n print(response)\n print(\"~~~~~~~~~~~\")\n self.agent.observe(response)\n\n def episode_done(self):\n return self.episodeDone\n\n def shutdown(self):\n self.agent.shutdown()\n\n\n# ---------- Overworld -------- #\nclass MessengerOverworld(World):\n \"\"\"\n World to handle moving agents to their proper places.\n \"\"\"\n\n def __init__(self, opt, agent):\n self.agent = agent\n self.opt = opt\n self.first_time = True\n self.episodeDone = False\n\n @staticmethod\n def generate_world(opt, agents):\n return MessengerOverworld(opt, agents[0])\n\n @staticmethod\n def assign_roles(agents):\n for a in agents:\n a.disp_id = 'Agent'\n\n def episode_done(self):\n return self.episodeDone\n\n def parley(self):\n if self.first_time:\n self.agent.observe(\n {\n 'id': 'Overworld',\n 'text': 'Welcome to the overworld for the ParlAI messenger '\n 'chatbot demo. Please type \"begin\" to start.',\n 'quick_replies': ['begin'],\n }\n )\n self.first_time = False\n a = self.agent.act()\n if a is not None and a['text'].lower() == 'begin':\n self.episodeDone = True\n return 'default'\n elif a is not None:\n self.agent.observe(\n {\n 'id': 'Overworld',\n 'text': 'Invalid option. Please type \"begin\".',\n 'quick_replies': ['begin'],\n }\n )\n", "path": "parlai/chat_service/tasks/chatbot/worlds.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport json\nimport uuid\nimport websocket\nimport time\nimport threading\nfrom parlai.core.params import ParlaiParser\n\n# the socket callback functions operate asynchronously.\n# upon exit of a chat, we do not want the user to view any additional messages from the server.\n# alas, it is necessary to send two messages ([DONE], and EXIT) in order to fully exist the world pool\n# to prevent receiving a message after sending [DONE], we track the user's state with\n# this global variable.\nRUNNING = True\n\n\ndef _get_rand_id():\n \"\"\"\n :return: The string of a random id using uuid4\n \"\"\"\n return str(uuid.uuid4())\n\n\ndef _prBlueBG(text):\n \"\"\"\n Print given in text with a blue background.\n\n :param text: The text to be printed\n \"\"\"\n print(\"\\033[44m{}\\033[0m\".format(text), sep=\"\")\n\n\ndef on_message(ws, message):\n \"\"\"\n Prints the incoming message from the server.\n\n :param ws: a WebSocketApp\n :param message: json with 'text' field to be printed\n \"\"\"\n if not RUNNING:\n return\n incoming_message = json.loads(message)\n print(\"\\033[0m\\n\")\n print(\"Bot: \" + incoming_message['text'])\n quick_replies = incoming_message.get('quick_replies')\n if quick_replies is not None and len(quick_replies) > 0:\n print(f\"\\nOptions: [{'|'.join(quick_replies)}]\")\n print(\"\\033[44m\\n\")\n\n\ndef on_error(ws, error):\n \"\"\"\n Prints an error, if occurs.\n\n :param ws: WebSocketApp\n :param error: An error\n \"\"\"\n print(error)\n\n\ndef on_close(ws):\n \"\"\"\n Cleanup before closing connection.\n\n :param ws: WebSocketApp\n \"\"\"\n # Reset color formatting if necessary\n print(\"\\033[0m\")\n print(\"Connection closed\")\n\n\ndef _run(ws, id):\n \"\"\"\n Takes user input and sends it to a websocket.\n\n :param ws: websocket.WebSocketApp\n \"\"\"\n global RUNNING\n while True:\n x = input(\"\\033[44m Me: \")\n print(\"\\033[0m\", end=\"\")\n data = {}\n data['id'] = id\n data['text'] = x\n if x == \"[DONE]\":\n RUNNING = False\n json_data = json.dumps(data)\n ws.send(json_data)\n time.sleep(1)\n if x == \"[DONE]\":\n time.sleep(1)\n data['text'] = 'EXIT'\n ws.send(json.dumps(data))\n break\n ws.close()\n\n\ndef on_open(ws):\n \"\"\"\n Starts a new thread that loops, taking user input and sending it to the websocket.\n\n :param ws: websocket.WebSocketApp that sends messages to a terminal_manager\n \"\"\"\n id = _get_rand_id()\n threading.Thread(target=_run, args=(ws, id)).start()\n\n\ndef setup_args():\n \"\"\"\n Set up args, specifically for the port number.\n\n :return: A parser that parses the port from commandline arguments.\n \"\"\"\n parser = ParlaiParser(False, False)\n parser_grp = parser.add_argument_group('Terminal Chat')\n parser_grp.add_argument(\n '--port', default=35496, type=int, help='Port to run the terminal chat server'\n )\n return parser.parse_args()\n\n\nif __name__ == \"__main__\":\n opt = setup_args()\n port = opt.get('port', 34596)\n print(\"Connecting to port: \", port)\n ws = websocket.WebSocketApp(\n \"ws://localhost:{}/websocket\".format(port),\n on_message=on_message,\n on_error=on_error,\n on_close=on_close,\n )\n ws.on_open = on_open\n ws.run_forever()\n", "path": "parlai/chat_service/services/terminal_chat/client.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n#\n# py parlai/chat_service/tasks/overworld_demo/run.py --debug --verbose\n\nfrom parlai.core.worlds import World\nfrom parlai.chat_service.services.messenger.worlds import OnboardWorld\nfrom parlai.core.agents import create_agent_from_shared\n\n\n# ---------- Chatbot demo ---------- #\nclass MessengerBotChatOnboardWorld(OnboardWorld):\n \"\"\"\n Example messenger onboarding world for Chatbot Model.\n \"\"\"\n\n @staticmethod\n def generate_world(opt, agents):\n return MessengerBotChatOnboardWorld(opt=opt, agent=agents[0])\n\n def parley(self):\n self.episodeDone = True\n\n\nclass MessengerBotChatTaskWorld(World):\n \"\"\"\n Example one person world that talks to a provided agent (bot).\n \"\"\"\n\n MAX_AGENTS = 1\n MODEL_KEY = 'blender_90M'\n\n def __init__(self, opt, agent, bot):\n self.agent = agent\n self.episodeDone = False\n self.model = bot\n self.first_time = True\n\n @staticmethod\n def generate_world(opt, agents):\n if opt['models'] is None:\n raise RuntimeError(\"Model must be specified\")\n return MessengerBotChatTaskWorld(\n opt,\n agents[0],\n create_agent_from_shared(\n opt['shared_bot_params'][MessengerBotChatTaskWorld.MODEL_KEY]\n ),\n )\n\n @staticmethod\n def assign_roles(agents):\n agents[0].disp_id = 'ChatbotAgent'\n\n def parley(self):\n if self.first_time:\n self.agent.observe(\n {\n 'id': 'World',\n 'text': 'Welcome to the ParlAI Chatbot demo. '\n 'You are now paired with a bot - feel free to send a message.'\n 'Type [DONE] to finish the chat, or [RESET] to reset the dialogue history.',\n }\n )\n self.first_time = False\n a = self.agent.act()\n if a is not None:\n if '[DONE]' in a['text']:\n self.episodeDone = True\n elif '[RESET]' in a['text']:\n self.model.reset()\n self.agent.observe({\"text\": \"[History Cleared]\", \"episode_done\": False})\n else:\n print(\"===act====\")\n print(a)\n print(\"~~~~~~~~~~~\")\n self.model.observe(a)\n response = self.model.act()\n print(\"===response====\")\n print(response)\n print(\"~~~~~~~~~~~\")\n self.agent.observe(response)\n\n def episode_done(self):\n return self.episodeDone\n\n def shutdown(self):\n self.agent.shutdown()\n\n\n# ---------- Overworld -------- #\nclass MessengerOverworld(World):\n \"\"\"\n World to handle moving agents to their proper places.\n \"\"\"\n\n def __init__(self, opt, agent):\n self.agent = agent\n self.opt = opt\n self.first_time = True\n self.episodeDone = False\n\n @staticmethod\n def generate_world(opt, agents):\n return MessengerOverworld(opt, agents[0])\n\n @staticmethod\n def assign_roles(agents):\n for a in agents:\n a.disp_id = 'Agent'\n\n def episode_done(self):\n return self.episodeDone\n\n def parley(self):\n if self.first_time:\n self.agent.observe(\n {\n 'id': 'Overworld',\n 'text': 'Welcome to the overworld for the ParlAI messenger '\n 'chatbot demo. Please type \"begin\" to start, or \"exit\" to exit',\n 'quick_replies': ['begin', 'exit'],\n }\n )\n self.first_time = False\n a = self.agent.act()\n if a is not None and a['text'].lower() == 'exit':\n self.episode_done = True\n return 'EXIT'\n if a is not None and a['text'].lower() == 'begin':\n self.episodeDone = True\n return 'default'\n elif a is not None:\n self.agent.observe(\n {\n 'id': 'Overworld',\n 'text': 'Invalid option. Please type \"begin\".',\n 'quick_replies': ['begin'],\n }\n )\n", "path": "parlai/chat_service/tasks/chatbot/worlds.py"}]}
2785
662
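The record above centers on client.py's module-level RUNNING flag: the socket callbacks fire asynchronously, so each one checks the flag and prints nothing once the user has typed [DONE]. Below is a minimal sketch of the same suppression pattern, assuming no real websocket server: a threading.Event stands in for the global, and server_simulator is an invented helper (not part of ParlAI) that plays the role of the server pushing messages from another thread.

```python
import threading
import time

# threading.Event plays the role of client.py's module-level RUNNING flag;
# callbacks consult it before printing, so nothing is shown after shutdown.
running = threading.Event()
running.set()

def on_message(text):
    # Mirrors on_message in the record: drop the message silently once
    # the client has decided to exit, even if the server keeps sending.
    if not running.is_set():
        return
    print("Bot: " + text)

def server_simulator():
    # Invented stand-in for the websocket server pushing messages
    # asynchronously from another thread.
    for i in range(5):
        on_message("message %d" % i)
        time.sleep(0.1)

t = threading.Thread(target=server_simulator)
t.start()
time.sleep(0.25)
running.clear()  # equivalent to RUNNING = False after sending "[DONE]"
t.join()         # the remaining messages still arrive but are suppressed
```

An Event avoids the `global RUNNING` statement the original needs inside `_run`, while behaving identically for this read-mostly use.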
gh_patches_debug_12716
rasdani/github-patches
git_diff
localstack__localstack-2332
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- s3.upload returns `Location: http://localhost:4566` # Bug report # Detailed description The `AWS.s3.upload()` (official SDK - https://github.com/aws/aws-sdk-js) returns an object with the `Location` key that points to 4566 instead of 4572 (LocalStack S3 port). ## Expected behavior The `Location` should point to the file on S3. Example: ``` Location: http://localhost:4572/path/to/bucket.txt ``` ## Actual behavior The `Location` points to the LocalStack entrypoint. Example: ``` Location: http://localhost:4566/path/to/bucket.txt ``` # Steps to reproduce - Upload a file to S3 using the official AWS SDK (https://github.com/aws/aws-sdk-js). - Check out the `Location` property. ## Client code ```javascript const AWS = require('aws-sdk'); const s3 = new AWS.S3({ region: 'us-west-1', endpoint: 'http://localhost:4566', apiVersion: '2006-03-01', s3ForcePathStyle: true, }); (async () => { await s3 .createBucket({ Bucket: 'my-bucket', ACL: 'private' }) .promise(); const { Location } = await s3 .upload({ Key: 'file.txt', Body: 'test', Bucket: 'my-bucket' }) .promise(); console.assert(Location === 'http://localhost:4572/my-bucket/file.txt'); })(); ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `localstack/services/edge.py` Content: ``` 1 import re 2 import os 3 import sys 4 import json 5 import logging 6 from requests.models import Response 7 from localstack import config 8 from localstack.constants import HEADER_LOCALSTACK_TARGET, HEADER_LOCALSTACK_EDGE_URL, LOCALSTACK_ROOT_FOLDER 9 from localstack.utils.common import run, is_root, TMP_THREADS 10 from localstack.utils.common import safe_requests as requests 11 from localstack.services.generic_proxy import ProxyListener, GenericProxy 12 13 LOG = logging.getLogger(__name__) 14 15 # Header to indicate that the process should kill itself. 
This is required because if 16 # this process is started as root, then we cannot kill it from a non-root process 17 HEADER_KILL_SIGNAL = 'x-localstack-kill' 18 19 20 class ProxyListenerEdge(ProxyListener): 21 22 def forward_request(self, method, path, data, headers): 23 if method == 'OPTIONS': 24 return 200 25 26 # kill the process if we receive this header 27 headers.get(HEADER_KILL_SIGNAL) and os._exit(0) 28 29 target = headers.get('x-amz-target', '') 30 auth_header = headers.get('authorization', '') 31 host = headers.get('host', '') 32 headers[HEADER_LOCALSTACK_EDGE_URL] = 'https://%s' % host 33 34 # extract API details 35 _, port, path, host = get_api_from_headers(headers, path) 36 37 if not port: 38 # detect S3 presigned URLs 39 if 'AWSAccessKeyId=' in path or 'Signature=' in path: 40 port = config.PORT_S3 41 42 if not port: 43 LOG.info('Unable to find forwarding rule for host "%s", path "%s", target header "%s", auth header "%s"' % 44 (host, path, target, auth_header)) 45 response = Response() 46 response.status_code = 404 47 response._content = '{"status": "running"}' 48 return response 49 50 use_ssl = config.USE_SSL 51 52 connect_host = '%s:%s' % (config.HOSTNAME, port) 53 url = 'http%s://%s%s' % ('s' if use_ssl else '', connect_host, path) 54 headers['Host'] = host 55 function = getattr(requests, method.lower()) 56 if isinstance(data, dict): 57 data = json.dumps(data) 58 59 response = function(url, data=data, headers=headers, verify=False) 60 return response 61 62 63 def get_api_from_headers(headers, path=None): 64 target = headers.get('x-amz-target', '') 65 host = headers.get('host', '') 66 auth_header = headers.get('authorization', '') 67 ls_target = headers.get(HEADER_LOCALSTACK_TARGET, '') 68 path = path or '/' 69 70 # initialize result 71 result = '_unknown_', 0 72 73 # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html 74 try: 75 credential_scope = auth_header.split(',')[0].split()[1] 76 _, _, _, service, _ = credential_scope.split('/') 77 result = service, config.service_port(service) 78 except Exception: 79 pass 80 81 # Fallback rules and route customizations applied below 82 83 if host.endswith('cloudfront.net'): 84 path = path or '/' 85 result = 'cloudfront', config.PORT_CLOUDFRONT 86 elif target.startswith('AWSCognitoIdentityProviderService') or 'cognito-idp.' in host: 87 result = 'cognito-idp', config.PORT_COGNITO_IDP 88 elif target.startswith('AWSCognitoIdentityService') or 'cognito-identity.' in host: 89 result = 'cognito-identity', config.PORT_COGNITO_IDENTITY 90 elif result[0] == 's3' or re.match(r'.*s3(\-website)?\.([^\.]+\.)?amazonaws.com', host): 91 host = re.sub(r's3-website\..*\.amazonaws', 's3.amazonaws', host) 92 result = 's3', config.PORT_S3 93 elif result[0] == 'states' in auth_header or host.startswith('states.'): 94 result = 'stepfunctions', config.PORT_STEPFUNCTIONS 95 elif '.execute-api.' 
in host: 96 result = 'apigateway', config.PORT_APIGATEWAY 97 elif target.startswith('DynamoDBStreams') or host.startswith('streams.dynamodb.'): 98 result = 'dynamodbstreams', config.PORT_DYNAMODBSTREAMS 99 elif ls_target == 'web' or path == '/graph': 100 result = 'web', config.PORT_WEB_UI 101 102 return result[0], result[1], path, host 103 104 105 def do_start_edge(port, use_ssl, asynchronous=False): 106 try: 107 # start local DNS server, if present 108 from localstack_ext.services import dns_server 109 dns_server.start_servers() 110 except Exception: 111 pass 112 113 # get port and start Edge 114 print('Starting edge router (http%s port %s)...' % ('s' if use_ssl else '', port)) 115 # use use=True here because our proxy allows both, HTTP and HTTPS traffic 116 proxy = GenericProxy(port, ssl=True, update_listener=ProxyListenerEdge()) 117 proxy.start() 118 if not asynchronous: 119 proxy.join() 120 return proxy 121 122 123 def can_use_sudo(): 124 try: 125 run('echo | sudo -S echo', print_error=False) 126 return True 127 except Exception: 128 return False 129 130 131 def ensure_can_use_sudo(): 132 if not is_root() and not can_use_sudo(): 133 print('Please enter your sudo password (required to configure local network):') 134 run('sudo echo', stdin=True) 135 136 137 def start_edge(port=None, use_ssl=True, asynchronous=False): 138 if not port: 139 port = config.EDGE_PORT 140 if config.EDGE_PORT_HTTP: 141 do_start_edge(config.EDGE_PORT_HTTP, use_ssl=False, asynchronous=True) 142 if port > 1024 or is_root(): 143 return do_start_edge(port, use_ssl, asynchronous=asynchronous) 144 145 # process requires priviledged port but we're not root -> try running as sudo 146 147 class Terminator(object): 148 149 def stop(self, quiet=True): 150 try: 151 url = 'http%s://localhost:%s' % ('s' if use_ssl else '', port) 152 requests.verify_ssl = False 153 requests.post(url, headers={HEADER_KILL_SIGNAL: 'kill'}) 154 except Exception: 155 pass 156 157 # make sure we can run sudo commands 158 ensure_can_use_sudo() 159 160 # register a signal handler to terminate the sudo process later on 161 TMP_THREADS.append(Terminator()) 162 163 # start the process as sudo 164 sudo_cmd = 'sudo ' 165 python_cmd = sys.executable 166 cmd = '%sPYTHONPATH=.:%s %s %s %s' % (sudo_cmd, LOCALSTACK_ROOT_FOLDER, python_cmd, __file__, port) 167 process = run(cmd, asynchronous=asynchronous) 168 return process 169 170 171 if __name__ == '__main__': 172 logging.basicConfig() 173 start_edge(int(sys.argv[1])) 174 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/localstack/services/edge.py b/localstack/services/edge.py --- a/localstack/services/edge.py +++ b/localstack/services/edge.py @@ -38,6 +38,10 @@ # detect S3 presigned URLs if 'AWSAccessKeyId=' in path or 'Signature=' in path: port = config.PORT_S3 + # assume that this is an S3 GET request with URL path `/<bucket>/<key ...>` + # TODO: move S3 public URLs to a separate port/endpoint, OR check ACLs here first + if method == 'GET' and '/' in path.strip('/'): + port = config.PORT_S3 if not port: LOG.info('Unable to find forwarding rule for host "%s", path "%s", target header "%s", auth header "%s"' %
{"golden_diff": "diff --git a/localstack/services/edge.py b/localstack/services/edge.py\n--- a/localstack/services/edge.py\n+++ b/localstack/services/edge.py\n@@ -38,6 +38,10 @@\n # detect S3 presigned URLs\n if 'AWSAccessKeyId=' in path or 'Signature=' in path:\n port = config.PORT_S3\n+ # assume that this is an S3 GET request with URL path `/<bucket>/<key ...>`\n+ # TODO: move S3 public URLs to a separate port/endpoint, OR check ACLs here first\n+ if method == 'GET' and '/' in path.strip('/'):\n+ port = config.PORT_S3\n \n if not port:\n LOG.info('Unable to find forwarding rule for host \"%s\", path \"%s\", target header \"%s\", auth header \"%s\"' %\n", "issue": "s3.upload returns `Location: http://localhost:4566`\n# Bug report\r\n\r\n# Detailed description\r\n\r\nThe `AWS.s3.upload()` (official SDK - https://github.com/aws/aws-sdk-js) returns an object with the `Location` key that points to 4566 instead of 4572 (LocalStack S3 port).\r\n\r\n## Expected behavior\r\n\r\nThe `Location` should point to the file on S3.\r\n\r\nExample:\r\n\r\n```\r\nLocation: http://localhost:4572/path/to/bucket.txt\r\n```\r\n\r\n## Actual behavior\r\n\r\nThe `Location` points to the LocalStack entrypoint.\r\n\r\nExample:\r\n\r\n```\r\nLocation: http://localhost:4566/path/to/bucket.txt\r\n```\r\n\r\n# Steps to reproduce\r\n\r\n- Upload a file to S3 using the official AWS SDK (https://github.com/aws/aws-sdk-js).\r\n- Check out the `Location` property.\r\n\r\n## Client code\r\n\r\n```javascript\r\nconst AWS = require('aws-sdk');\r\nconst s3 = new AWS.S3({\r\n region: 'us-west-1',\r\n endpoint: 'http://localhost:4566',\r\n apiVersion: '2006-03-01',\r\n s3ForcePathStyle: true,\r\n});\r\n\r\n(async () => {\r\n await s3\r\n .createBucket({ Bucket: 'my-bucket', ACL: 'private' })\r\n .promise();\r\n\r\n const { Location } = await s3\r\n .upload({ Key: 'file.txt', Body: 'test', Bucket: 'my-bucket' })\r\n .promise();\r\n\r\n console.assert(Location === 'http://localhost:4572/my-bucket/file.txt');\r\n})();\r\n```\n", "before_files": [{"content": "import re\nimport os\nimport sys\nimport json\nimport logging\nfrom requests.models import Response\nfrom localstack import config\nfrom localstack.constants import HEADER_LOCALSTACK_TARGET, HEADER_LOCALSTACK_EDGE_URL, LOCALSTACK_ROOT_FOLDER\nfrom localstack.utils.common import run, is_root, TMP_THREADS\nfrom localstack.utils.common import safe_requests as requests\nfrom localstack.services.generic_proxy import ProxyListener, GenericProxy\n\nLOG = logging.getLogger(__name__)\n\n# Header to indicate that the process should kill itself. 
This is required because if\n# this process is started as root, then we cannot kill it from a non-root process\nHEADER_KILL_SIGNAL = 'x-localstack-kill'\n\n\nclass ProxyListenerEdge(ProxyListener):\n\n def forward_request(self, method, path, data, headers):\n if method == 'OPTIONS':\n return 200\n\n # kill the process if we receive this header\n headers.get(HEADER_KILL_SIGNAL) and os._exit(0)\n\n target = headers.get('x-amz-target', '')\n auth_header = headers.get('authorization', '')\n host = headers.get('host', '')\n headers[HEADER_LOCALSTACK_EDGE_URL] = 'https://%s' % host\n\n # extract API details\n _, port, path, host = get_api_from_headers(headers, path)\n\n if not port:\n # detect S3 presigned URLs\n if 'AWSAccessKeyId=' in path or 'Signature=' in path:\n port = config.PORT_S3\n\n if not port:\n LOG.info('Unable to find forwarding rule for host \"%s\", path \"%s\", target header \"%s\", auth header \"%s\"' %\n (host, path, target, auth_header))\n response = Response()\n response.status_code = 404\n response._content = '{\"status\": \"running\"}'\n return response\n\n use_ssl = config.USE_SSL\n\n connect_host = '%s:%s' % (config.HOSTNAME, port)\n url = 'http%s://%s%s' % ('s' if use_ssl else '', connect_host, path)\n headers['Host'] = host\n function = getattr(requests, method.lower())\n if isinstance(data, dict):\n data = json.dumps(data)\n\n response = function(url, data=data, headers=headers, verify=False)\n return response\n\n\ndef get_api_from_headers(headers, path=None):\n target = headers.get('x-amz-target', '')\n host = headers.get('host', '')\n auth_header = headers.get('authorization', '')\n ls_target = headers.get(HEADER_LOCALSTACK_TARGET, '')\n path = path or '/'\n\n # initialize result\n result = '_unknown_', 0\n\n # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html\n try:\n credential_scope = auth_header.split(',')[0].split()[1]\n _, _, _, service, _ = credential_scope.split('/')\n result = service, config.service_port(service)\n except Exception:\n pass\n\n # Fallback rules and route customizations applied below\n\n if host.endswith('cloudfront.net'):\n path = path or '/'\n result = 'cloudfront', config.PORT_CLOUDFRONT\n elif target.startswith('AWSCognitoIdentityProviderService') or 'cognito-idp.' in host:\n result = 'cognito-idp', config.PORT_COGNITO_IDP\n elif target.startswith('AWSCognitoIdentityService') or 'cognito-identity.' in host:\n result = 'cognito-identity', config.PORT_COGNITO_IDENTITY\n elif result[0] == 's3' or re.match(r'.*s3(\\-website)?\\.([^\\.]+\\.)?amazonaws.com', host):\n host = re.sub(r's3-website\\..*\\.amazonaws', 's3.amazonaws', host)\n result = 's3', config.PORT_S3\n elif result[0] == 'states' in auth_header or host.startswith('states.'):\n result = 'stepfunctions', config.PORT_STEPFUNCTIONS\n elif '.execute-api.' in host:\n result = 'apigateway', config.PORT_APIGATEWAY\n elif target.startswith('DynamoDBStreams') or host.startswith('streams.dynamodb.'):\n result = 'dynamodbstreams', config.PORT_DYNAMODBSTREAMS\n elif ls_target == 'web' or path == '/graph':\n result = 'web', config.PORT_WEB_UI\n\n return result[0], result[1], path, host\n\n\ndef do_start_edge(port, use_ssl, asynchronous=False):\n try:\n # start local DNS server, if present\n from localstack_ext.services import dns_server\n dns_server.start_servers()\n except Exception:\n pass\n\n # get port and start Edge\n print('Starting edge router (http%s port %s)...' 
% ('s' if use_ssl else '', port))\n # use use=True here because our proxy allows both, HTTP and HTTPS traffic\n proxy = GenericProxy(port, ssl=True, update_listener=ProxyListenerEdge())\n proxy.start()\n if not asynchronous:\n proxy.join()\n return proxy\n\n\ndef can_use_sudo():\n try:\n run('echo | sudo -S echo', print_error=False)\n return True\n except Exception:\n return False\n\n\ndef ensure_can_use_sudo():\n if not is_root() and not can_use_sudo():\n print('Please enter your sudo password (required to configure local network):')\n run('sudo echo', stdin=True)\n\n\ndef start_edge(port=None, use_ssl=True, asynchronous=False):\n if not port:\n port = config.EDGE_PORT\n if config.EDGE_PORT_HTTP:\n do_start_edge(config.EDGE_PORT_HTTP, use_ssl=False, asynchronous=True)\n if port > 1024 or is_root():\n return do_start_edge(port, use_ssl, asynchronous=asynchronous)\n\n # process requires priviledged port but we're not root -> try running as sudo\n\n class Terminator(object):\n\n def stop(self, quiet=True):\n try:\n url = 'http%s://localhost:%s' % ('s' if use_ssl else '', port)\n requests.verify_ssl = False\n requests.post(url, headers={HEADER_KILL_SIGNAL: 'kill'})\n except Exception:\n pass\n\n # make sure we can run sudo commands\n ensure_can_use_sudo()\n\n # register a signal handler to terminate the sudo process later on\n TMP_THREADS.append(Terminator())\n\n # start the process as sudo\n sudo_cmd = 'sudo '\n python_cmd = sys.executable\n cmd = '%sPYTHONPATH=.:%s %s %s %s' % (sudo_cmd, LOCALSTACK_ROOT_FOLDER, python_cmd, __file__, port)\n process = run(cmd, asynchronous=asynchronous)\n return process\n\n\nif __name__ == '__main__':\n logging.basicConfig()\n start_edge(int(sys.argv[1]))\n", "path": "localstack/services/edge.py"}], "after_files": [{"content": "import re\nimport os\nimport sys\nimport json\nimport logging\nfrom requests.models import Response\nfrom localstack import config\nfrom localstack.constants import HEADER_LOCALSTACK_TARGET, HEADER_LOCALSTACK_EDGE_URL, LOCALSTACK_ROOT_FOLDER\nfrom localstack.utils.common import run, is_root, TMP_THREADS\nfrom localstack.utils.common import safe_requests as requests\nfrom localstack.services.generic_proxy import ProxyListener, GenericProxy\n\nLOG = logging.getLogger(__name__)\n\n# Header to indicate that the process should kill itself. 
This is required because if\n# this process is started as root, then we cannot kill it from a non-root process\nHEADER_KILL_SIGNAL = 'x-localstack-kill'\n\n\nclass ProxyListenerEdge(ProxyListener):\n\n def forward_request(self, method, path, data, headers):\n if method == 'OPTIONS':\n return 200\n\n # kill the process if we receive this header\n headers.get(HEADER_KILL_SIGNAL) and os._exit(0)\n\n target = headers.get('x-amz-target', '')\n auth_header = headers.get('authorization', '')\n host = headers.get('host', '')\n headers[HEADER_LOCALSTACK_EDGE_URL] = 'https://%s' % host\n\n # extract API details\n _, port, path, host = get_api_from_headers(headers, path)\n\n if not port:\n # detect S3 presigned URLs\n if 'AWSAccessKeyId=' in path or 'Signature=' in path:\n port = config.PORT_S3\n # assume that this is an S3 GET request with URL path `/<bucket>/<key ...>`\n # TODO: move S3 public URLs to a separate port/endpoint, OR check ACLs here first\n if method == 'GET' and '/' in path.strip('/'):\n port = config.PORT_S3\n\n if not port:\n LOG.info('Unable to find forwarding rule for host \"%s\", path \"%s\", target header \"%s\", auth header \"%s\"' %\n (host, path, target, auth_header))\n response = Response()\n response.status_code = 404\n response._content = '{\"status\": \"running\"}'\n return response\n\n use_ssl = config.USE_SSL\n\n connect_host = '%s:%s' % (config.HOSTNAME, port)\n url = 'http%s://%s%s' % ('s' if use_ssl else '', connect_host, path)\n headers['Host'] = host\n function = getattr(requests, method.lower())\n if isinstance(data, dict):\n data = json.dumps(data)\n\n response = function(url, data=data, headers=headers, verify=False)\n return response\n\n\ndef get_api_from_headers(headers, path=None):\n target = headers.get('x-amz-target', '')\n host = headers.get('host', '')\n auth_header = headers.get('authorization', '')\n ls_target = headers.get(HEADER_LOCALSTACK_TARGET, '')\n path = path or '/'\n\n # initialize result\n result = '_unknown_', 0\n\n # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html\n try:\n credential_scope = auth_header.split(',')[0].split()[1]\n _, _, _, service, _ = credential_scope.split('/')\n result = service, config.service_port(service)\n except Exception:\n pass\n\n # Fallback rules and route customizations applied below\n\n if host.endswith('cloudfront.net'):\n path = path or '/'\n result = 'cloudfront', config.PORT_CLOUDFRONT\n elif target.startswith('AWSCognitoIdentityProviderService') or 'cognito-idp.' in host:\n result = 'cognito-idp', config.PORT_COGNITO_IDP\n elif target.startswith('AWSCognitoIdentityService') or 'cognito-identity.' in host:\n result = 'cognito-identity', config.PORT_COGNITO_IDENTITY\n elif result[0] == 's3' or re.match(r'.*s3(\\-website)?\\.([^\\.]+\\.)?amazonaws.com', host):\n host = re.sub(r's3-website\\..*\\.amazonaws', 's3.amazonaws', host)\n result = 's3', config.PORT_S3\n elif result[0] == 'states' in auth_header or host.startswith('states.'):\n result = 'stepfunctions', config.PORT_STEPFUNCTIONS\n elif '.execute-api.' 
in host:\n result = 'apigateway', config.PORT_APIGATEWAY\n elif target.startswith('DynamoDBStreams') or host.startswith('streams.dynamodb.'):\n result = 'dynamodbstreams', config.PORT_DYNAMODBSTREAMS\n elif ls_target == 'web' or path == '/graph':\n result = 'web', config.PORT_WEB_UI\n\n return result[0], result[1], path, host\n\n\ndef do_start_edge(port, use_ssl, asynchronous=False):\n try:\n # start local DNS server, if present\n from localstack_ext.services import dns_server\n dns_server.start_servers()\n except Exception:\n pass\n\n # get port and start Edge\n print('Starting edge router (http%s port %s)...' % ('s' if use_ssl else '', port))\n # use use=True here because our proxy allows both, HTTP and HTTPS traffic\n proxy = GenericProxy(port, ssl=True, update_listener=ProxyListenerEdge())\n proxy.start()\n if not asynchronous:\n proxy.join()\n return proxy\n\n\ndef can_use_sudo():\n try:\n run('echo | sudo -S echo', print_error=False)\n return True\n except Exception:\n return False\n\n\ndef ensure_can_use_sudo():\n if not is_root() and not can_use_sudo():\n print('Please enter your sudo password (required to configure local network):')\n run('sudo echo', stdin=True)\n\n\ndef start_edge(port=None, use_ssl=True, asynchronous=False):\n if not port:\n port = config.EDGE_PORT\n if config.EDGE_PORT_HTTP:\n do_start_edge(config.EDGE_PORT_HTTP, use_ssl=False, asynchronous=True)\n if port > 1024 or is_root():\n return do_start_edge(port, use_ssl, asynchronous=asynchronous)\n\n # process requires priviledged port but we're not root -> try running as sudo\n\n class Terminator(object):\n\n def stop(self, quiet=True):\n try:\n url = 'http%s://localhost:%s' % ('s' if use_ssl else '', port)\n requests.verify_ssl = False\n requests.post(url, headers={HEADER_KILL_SIGNAL: 'kill'})\n except Exception:\n pass\n\n # make sure we can run sudo commands\n ensure_can_use_sudo()\n\n # register a signal handler to terminate the sudo process later on\n TMP_THREADS.append(Terminator())\n\n # start the process as sudo\n sudo_cmd = 'sudo '\n python_cmd = sys.executable\n cmd = '%sPYTHONPATH=.:%s %s %s %s' % (sudo_cmd, LOCALSTACK_ROOT_FOLDER, python_cmd, __file__, port)\n process = run(cmd, asynchronous=asynchronous)\n return process\n\n\nif __name__ == '__main__':\n logging.basicConfig()\n start_edge(int(sys.argv[1]))\n", "path": "localstack/services/edge.py"}]}
2529
186
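The one-hunk fix in this record is easy to restate in isolation: when header-based service detection fails, a presigned-URL query string, or a GET whose path has more than one segment, is routed to S3. The sketch below reproduces that heuristic as a standalone function; PORT_S3 is hard-coded here as an assumption rather than read from localstack's config, and guess_s3_port is an invented name, not localstack API.

```python
PORT_S3 = 4572  # assumed constant; localstack reads this from its config

def guess_s3_port(method, path, port):
    """Fallback routing mirroring the hunk added by the golden diff."""
    if port:
        return port
    # pre-existing rule: S3 presigned URLs carry these query parameters
    if 'AWSAccessKeyId=' in path or 'Signature=' in path:
        return PORT_S3
    # new rule: assume GET /<bucket>/<key ...> is an S3 object fetch
    if method == 'GET' and '/' in path.strip('/'):
        return PORT_S3
    return None

assert guess_s3_port('GET', '/my-bucket/file.txt', None) == PORT_S3
assert guess_s3_port('GET', '/health', None) is None              # one segment
assert guess_s3_port('PUT', '/my-bucket/file.txt', None) is None  # not a GET
```

As the diff's own TODO notes, this is deliberately a heuristic: any multi-segment GET is assumed to be S3, which is why the comment suggests moving public S3 URLs to a separate endpoint or checking ACLs first.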
gh_patches_debug_35150
rasdani/github-patches
git_diff
alltheplaces__alltheplaces-2973
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Spider ljsilvers is broken During the global build at 2021-06-02-14-42-40, spider **ljsilvers** failed with **0 features** and **0 errors**. Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/ljsilvers.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ljsilvers.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ljsilvers.geojson)) Long John Silver's http://www.ljsilvers.com/ (location search box top right) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `locations/spiders/ljsilvers.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 import scrapy 3 import json 4 import re 5 6 from locations.items import GeojsonPointItem 7 8 9 class LjsilversSpider(scrapy.Spider): 10 name = "ljsilvers" 11 item_attributes = { 'brand': "Long John Silver's", 'brand_wikidata': "Q1535221" } 12 allowed_domains = ["ljsilvers.com"] 13 start_urls = ( 14 'http://www.ljsilvers.com/locator?postalcode=76010', 15 ) 16 17 def parse(self, response): 18 data = response.body_as_unicode() 19 base_data = re.search(r'dataout\s--Array\s\((.*)\)\s\s--><style type="text/css">', data, re.DOTALL).group(1) 20 detail_matches = re.findall(r'\((.*?)\)', base_data, re.DOTALL) 21 22 for detail_match in detail_matches: 23 key_values = re.findall(r'(.*?)\s=>\s(.*)', detail_match) 24 props = {} 25 26 for key_value in key_values: 27 key = key_value[0].strip() 28 value = key_value[1].strip() 29 30 if key == '[storeID]': 31 props['ref'] = value 32 if key == '[address]': 33 props['addr_full'] = value 34 if key == '[city]': 35 props['city'] = value 36 if key == '[state]': 37 props['state'] = value 38 if key == '[zip]': 39 props['postcode'] = value 40 if key == '[phone_number]': 41 props['phone'] = value 42 if key == '[latitude]': 43 props['lat'] = value 44 if key == '[longitude]': 45 props['lon'] = value 46 47 yield GeojsonPointItem(**props) 48 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/locations/spiders/ljsilvers.py b/locations/spiders/ljsilvers.py --- a/locations/spiders/ljsilvers.py +++ b/locations/spiders/ljsilvers.py @@ -1,47 +1,32 @@ # -*- coding: utf-8 -*- import scrapy -import json -import re from locations.items import GeojsonPointItem class LjsilversSpider(scrapy.Spider): name = "ljsilvers" - item_attributes = { 'brand': "Long John Silver's", 'brand_wikidata': "Q1535221" } + item_attributes = {"brand": "Long John Silver's", "brand_wikidata": "Q1535221"} allowed_domains = ["ljsilvers.com"] start_urls = ( - 'http://www.ljsilvers.com/locator?postalcode=76010', + "https://viewer.blipstar.com/searchdbnew?uid=2483677&lat=45&lng=-103&value=10000", ) def parse(self, response): - data = response.body_as_unicode() - base_data = re.search(r'dataout\s--Array\s\((.*)\)\s\s--><style type="text/css">', data, re.DOTALL).group(1) - detail_matches = re.findall(r'\((.*?)\)', base_data, re.DOTALL) - - for detail_match in detail_matches: - key_values = re.findall(r'(.*?)\s=>\s(.*)', detail_match) - props = {} - - for key_value in key_values: - key = key_value[0].strip() - value = key_value[1].strip() - - if key == '[storeID]': - props['ref'] = value - if key == '[address]': - props['addr_full'] = value - if key == '[city]': - props['city'] = value - if key == '[state]': - props['state'] = value - if key == '[zip]': - props['postcode'] = value - if key == '[phone_number]': - props['phone'] = value - if key == '[latitude]': - props['lat'] = value - if key == '[longitude]': - props['lon'] = value - - yield GeojsonPointItem(**props) + for row in response.json(): + if row.keys() == {"fulltotal", "total", "units"}: + continue + addr = scrapy.Selector(text=row["a"]) + properties = { + "name": row["n"], + "ref": row["bpid"], + "lat": row["lat"], + "lon": row["lng"], + "addr_full": addr.xpath("//p/text()").extract_first(), + "city": addr.css(".storecity ::text").extract_first(), + "state": addr.css(".storestate ::text").extract_first(), + "postcode": addr.css(".storepostalcode ::text").extract_first(), + "country": row["c"], + "phone": row.get("p"), + } + yield GeojsonPointItem(**properties)
{"golden_diff": "diff --git a/locations/spiders/ljsilvers.py b/locations/spiders/ljsilvers.py\n--- a/locations/spiders/ljsilvers.py\n+++ b/locations/spiders/ljsilvers.py\n@@ -1,47 +1,32 @@\n # -*- coding: utf-8 -*-\n import scrapy\n-import json\n-import re\n \n from locations.items import GeojsonPointItem\n \n \n class LjsilversSpider(scrapy.Spider):\n name = \"ljsilvers\"\n- item_attributes = { 'brand': \"Long John Silver's\", 'brand_wikidata': \"Q1535221\" }\n+ item_attributes = {\"brand\": \"Long John Silver's\", \"brand_wikidata\": \"Q1535221\"}\n allowed_domains = [\"ljsilvers.com\"]\n start_urls = (\n- 'http://www.ljsilvers.com/locator?postalcode=76010',\n+ \"https://viewer.blipstar.com/searchdbnew?uid=2483677&lat=45&lng=-103&value=10000\",\n )\n \n def parse(self, response):\n- data = response.body_as_unicode()\n- base_data = re.search(r'dataout\\s--Array\\s\\((.*)\\)\\s\\s--><style type=\"text/css\">', data, re.DOTALL).group(1)\n- detail_matches = re.findall(r'\\((.*?)\\)', base_data, re.DOTALL)\n-\n- for detail_match in detail_matches:\n- key_values = re.findall(r'(.*?)\\s=>\\s(.*)', detail_match)\n- props = {}\n-\n- for key_value in key_values:\n- key = key_value[0].strip()\n- value = key_value[1].strip()\n-\n- if key == '[storeID]':\n- props['ref'] = value\n- if key == '[address]':\n- props['addr_full'] = value\n- if key == '[city]':\n- props['city'] = value\n- if key == '[state]':\n- props['state'] = value\n- if key == '[zip]':\n- props['postcode'] = value\n- if key == '[phone_number]':\n- props['phone'] = value\n- if key == '[latitude]':\n- props['lat'] = value\n- if key == '[longitude]':\n- props['lon'] = value\n-\n- yield GeojsonPointItem(**props)\n+ for row in response.json():\n+ if row.keys() == {\"fulltotal\", \"total\", \"units\"}:\n+ continue\n+ addr = scrapy.Selector(text=row[\"a\"])\n+ properties = {\n+ \"name\": row[\"n\"],\n+ \"ref\": row[\"bpid\"],\n+ \"lat\": row[\"lat\"],\n+ \"lon\": row[\"lng\"],\n+ \"addr_full\": addr.xpath(\"//p/text()\").extract_first(),\n+ \"city\": addr.css(\".storecity ::text\").extract_first(),\n+ \"state\": addr.css(\".storestate ::text\").extract_first(),\n+ \"postcode\": addr.css(\".storepostalcode ::text\").extract_first(),\n+ \"country\": row[\"c\"],\n+ \"phone\": row.get(\"p\"),\n+ }\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider ljsilvers is broken\nDuring the global build at 2021-06-02-14-42-40, spider **ljsilvers** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/ljsilvers.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ljsilvers.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ljsilvers.geojson))\nLong John Silver's\nhttp://www.ljsilvers.com/\r\n\r\n(location search box top right)\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass LjsilversSpider(scrapy.Spider):\n name = \"ljsilvers\"\n item_attributes = { 'brand': \"Long John Silver's\", 'brand_wikidata': \"Q1535221\" }\n allowed_domains = [\"ljsilvers.com\"]\n start_urls = (\n 'http://www.ljsilvers.com/locator?postalcode=76010',\n )\n\n def parse(self, response):\n data = response.body_as_unicode()\n base_data = re.search(r'dataout\\s--Array\\s\\((.*)\\)\\s\\s--><style type=\"text/css\">', data, re.DOTALL).group(1)\n detail_matches = re.findall(r'\\((.*?)\\)', base_data, 
re.DOTALL)\n\n for detail_match in detail_matches:\n key_values = re.findall(r'(.*?)\\s=>\\s(.*)', detail_match)\n props = {}\n\n for key_value in key_values:\n key = key_value[0].strip()\n value = key_value[1].strip()\n\n if key == '[storeID]':\n props['ref'] = value\n if key == '[address]':\n props['addr_full'] = value\n if key == '[city]':\n props['city'] = value\n if key == '[state]':\n props['state'] = value\n if key == '[zip]':\n props['postcode'] = value\n if key == '[phone_number]':\n props['phone'] = value\n if key == '[latitude]':\n props['lat'] = value\n if key == '[longitude]':\n props['lon'] = value\n\n yield GeojsonPointItem(**props)\n", "path": "locations/spiders/ljsilvers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass LjsilversSpider(scrapy.Spider):\n name = \"ljsilvers\"\n item_attributes = {\"brand\": \"Long John Silver's\", \"brand_wikidata\": \"Q1535221\"}\n allowed_domains = [\"ljsilvers.com\"]\n start_urls = (\n \"https://viewer.blipstar.com/searchdbnew?uid=2483677&lat=45&lng=-103&value=10000\",\n )\n\n def parse(self, response):\n for row in response.json():\n if row.keys() == {\"fulltotal\", \"total\", \"units\"}:\n continue\n addr = scrapy.Selector(text=row[\"a\"])\n properties = {\n \"name\": row[\"n\"],\n \"ref\": row[\"bpid\"],\n \"lat\": row[\"lat\"],\n \"lon\": row[\"lng\"],\n \"addr_full\": addr.xpath(\"//p/text()\").extract_first(),\n \"city\": addr.css(\".storecity ::text\").extract_first(),\n \"state\": addr.css(\".storestate ::text\").extract_first(),\n \"postcode\": addr.css(\".storepostalcode ::text\").extract_first(),\n \"country\": row[\"c\"],\n \"phone\": row.get(\"p\"),\n }\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/ljsilvers.py"}]}
958
738
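The rewritten spider in this record stops regex-scraping HTML and instead consumes the blipstar JSON endpoint the site queries. A condensed, dependency-free sketch of that row handling follows: the field names (n, bpid, lat, lng, a, c, p) are taken from the golden diff, the sample row itself is invented, and a bare regex stands in for the scrapy.Selector address parsing used in the real fix.

```python
import re

def parse_blipstar_row(row):
    # Rows shaped {"fulltotal", "total", "units"} are result metadata,
    # not stores -- the fixed spider skips them the same way.
    if row.keys() == {"fulltotal", "total", "units"}:
        return None
    # "a" holds an HTML address snippet; the real spider parses it with
    # scrapy.Selector, while a regex keeps this demo dependency-free.
    addr_text = " ".join(re.sub(r"<[^>]+>", " ", row.get("a", "")).split())
    return {
        "name": row["n"],
        "ref": row["bpid"],
        "lat": row["lat"],
        "lon": row["lng"],
        "addr_full": addr_text,
        "country": row["c"],
        "phone": row.get("p"),
    }

sample = {  # invented example row; only the field names follow the diff
    "n": "Long John Silver's", "bpid": "123", "lat": 32.7, "lng": -97.1,
    "a": "<p>700 E Lamar Blvd</p>", "c": "US", "p": "555-0100",
}
print(parse_blipstar_row(sample))
print(parse_blipstar_row({"fulltotal": 1, "total": 1, "units": "mi"}))  # None
```

Fetching one JSON document per bounding query is also what makes the fixed spider robust against the HTML layout changes that broke the original.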
gh_patches_debug_37386
rasdani/github-patches
git_diff
translate__pootle-6010
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- update_stores and sync_stores should produce an error if the project doesn't exist If a non-existent project is passed to `update_stores` or `sync_stores` there is no output. I would expect an error: ``` # pootle update_stores --project=nonexistent-project # pootle sync_stores --project=nonexistent-project # ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pootle/apps/pootle_app/management/commands/set_filetype.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 # 3 # Copyright (C) Pootle contributors. 4 # 5 # This file is a part of the Pootle project. It is distributed under the GPL3 6 # or later license. See the LICENSE file for a copy of the license and the 7 # AUTHORS file for copyright and authorship information. 8 9 import os 10 11 os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings' 12 13 from django.core.management.base import CommandError 14 15 from pootle_format.models import Format 16 from pootle_project.models import Project 17 18 from . import PootleCommand 19 20 21 class Command(PootleCommand): 22 help = "Manage Store formats." 23 24 def add_arguments(self, parser): 25 super(Command, self).add_arguments(parser) 26 parser.add_argument( 27 'filetype', 28 action='store', 29 help="File type to set") 30 parser.add_argument( 31 '--from-filetype', 32 action='store', 33 help="Only convert Stores of this file type") 34 parser.add_argument( 35 '--matching', 36 action='store', 37 help="Glob match Store path excluding extension") 38 39 def get_projects(self): 40 if not self.projects: 41 return Project.objects.all() 42 projects = [] 43 for project in self.projects: 44 # ensure all projects are valid before proceeding 45 try: 46 projects.append(Project.objects.get(code=project)) 47 except Project.DoesNotExist: 48 raise CommandError("Unrecognized project '%s'" % project) 49 return projects 50 51 def get_filetype(self, name): 52 try: 53 return Format.objects.get(name=name) 54 except Format.DoesNotExist: 55 raise CommandError("Unrecognized filetype '%s'" % name) 56 57 def handle_all(self, **options): 58 filetype = self.get_filetype(options["filetype"]) 59 from_filetype = ( 60 options["from_filetype"] 61 and self.get_filetype(options["from_filetype"]) 62 or None) 63 for project in self.get_projects(): 64 # add the filetype to project, and convert the stores 65 project.filetype_tool.add_filetype(filetype) 66 project.filetype_tool.set_filetypes( 67 filetype, 68 from_filetype=from_filetype, 69 matching=options["matching"]) 70 ``` Path: `pootle/apps/pootle_app/management/commands/__init__.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 # 3 # Copyright (C) Pootle contributors. 4 # 5 # This file is a part of the Pootle project. It is distributed under the GPL3 6 # or later license. See the LICENSE file for a copy of the license and the 7 # AUTHORS file for copyright and authorship information. 
8 9 import datetime 10 import logging 11 12 from django.core.management.base import BaseCommand, CommandError 13 14 from pootle.runner import set_sync_mode 15 from pootle_project.models import Project 16 17 18 class SkipChecksMixin(object): 19 def check(self, app_configs=None, tags=None, display_num_errors=False, 20 include_deployment_checks=False): 21 skip_tags = getattr(self, 'skip_system_check_tags', None) 22 if skip_tags is not None: 23 from django.core.checks.registry import registry 24 tags = registry.tags_available() - set(skip_tags) 25 26 super(SkipChecksMixin, self).check( 27 app_configs=app_configs, 28 tags=tags, 29 display_num_errors=display_num_errors, 30 include_deployment_checks=include_deployment_checks) 31 32 33 class PootleCommand(BaseCommand): 34 """Base class for handling recursive pootle store management commands.""" 35 36 process_disabled_projects = False 37 38 def add_arguments(self, parser): 39 parser.add_argument( 40 '--project', 41 action='append', 42 dest='projects', 43 help='Project to refresh', 44 ) 45 parser.add_argument( 46 '--language', 47 action='append', 48 dest='languages', 49 help='Language to refresh', 50 ) 51 parser.add_argument( 52 "--noinput", 53 action="store_true", 54 default=False, 55 help=u"Never prompt for input", 56 ) 57 parser.add_argument( 58 "--no-rq", 59 action="store_true", 60 default=False, 61 help=(u"Run all jobs in a single process, without " 62 "using rq workers"), 63 ) 64 65 def __init__(self, *args, **kwargs): 66 self.languages = [] 67 self.projects = [] 68 super(PootleCommand, self).__init__(*args, **kwargs) 69 70 def do_translation_project(self, tp, **options): 71 if hasattr(self, "handle_translation_project"): 72 logging.info(u"Running %s over %s", self.name, tp) 73 if not self.handle_translation_project(tp, **options): 74 return 75 if hasattr(self, "handle_all_stores"): 76 logging.info(u"Running %s over %s's files", self.name, tp) 77 self.handle_all_stores(tp, **options) 78 elif hasattr(self, "handle_store"): 79 store_query = tp.stores.live() 80 for store in store_query.iterator(): 81 logging.info(u"Running %s over %s", 82 self.name, store.pootle_path) 83 self.handle_store(store, **options) 84 85 def handle(self, **options): 86 # adjust debug level to the verbosity option 87 debug_levels = { 88 0: logging.ERROR, 89 1: logging.WARNING, 90 2: logging.INFO, 91 3: logging.DEBUG 92 } 93 logging.getLogger().setLevel( 94 debug_levels.get(options['verbosity'], logging.DEBUG) 95 ) 96 97 # reduce size of parse pool early on 98 self.name = self.__class__.__module__.split('.')[-1] 99 from pootle_store.fields import TranslationStoreFieldFile 100 TranslationStoreFieldFile._store_cache.maxsize = 2 101 TranslationStoreFieldFile._store_cache.cullsize = 2 102 103 self.projects = options.pop('projects', []) 104 self.languages = options.pop('languages', []) 105 106 # info start 107 start = datetime.datetime.now() 108 logging.info('Start running of %s', self.name) 109 110 try: 111 self.handle_all(**options) 112 except Exception as e: 113 raise CommandError(e) 114 115 # info finish 116 end = datetime.datetime.now() 117 logging.info('All done for %s in %s', self.name, end - start) 118 119 def handle_all(self, **options): 120 if options["no_rq"]: 121 set_sync_mode(options['noinput']) 122 123 if self.process_disabled_projects: 124 project_query = Project.objects.all() 125 else: 126 project_query = Project.objects.enabled() 127 128 if self.projects: 129 project_query = project_query.filter(code__in=self.projects) 130 131 for project in 
project_query.iterator(): 132 tp_query = project.translationproject_set.live() \ 133 .order_by('language__code') 134 135 if self.languages: 136 tp_query = tp_query.filter(language__code__in=self.languages) 137 138 for tp in tp_query.iterator(): 139 self.do_translation_project(tp, **options) 140 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pootle/apps/pootle_app/management/commands/__init__.py b/pootle/apps/pootle_app/management/commands/__init__.py --- a/pootle/apps/pootle_app/management/commands/__init__.py +++ b/pootle/apps/pootle_app/management/commands/__init__.py @@ -12,6 +12,7 @@ from django.core.management.base import BaseCommand, CommandError from pootle.runner import set_sync_mode +from pootle_language.models import Language from pootle_project.models import Project @@ -82,6 +83,26 @@ self.name, store.pootle_path) self.handle_store(store, **options) + def check_projects(self, project_codes): + existing_projects = Project.objects.filter( + code__in=project_codes + ).values_list("code", flat=True) + if len(existing_projects) != len(project_codes): + unrecognized_projects = list(set(project_codes) - + set(existing_projects)) + raise CommandError("Unrecognized projects: %s" % + unrecognized_projects) + + def check_languages(self, language_codes): + existing_languages = Language.objects.filter( + code__in=language_codes + ).values_list("code", flat=True) + if len(existing_languages) != len(language_codes): + unrecognized_languages = list(set(language_codes) - + set(existing_languages)) + raise CommandError("Unrecognized languages: %s" % + unrecognized_languages) + def handle(self, **options): # adjust debug level to the verbosity option debug_levels = { @@ -102,6 +123,10 @@ self.projects = options.pop('projects', []) self.languages = options.pop('languages', []) + if self.projects: + self.check_projects(self.projects) + if self.languages: + self.check_languages(self.languages) # info start start = datetime.datetime.now() diff --git a/pootle/apps/pootle_app/management/commands/set_filetype.py b/pootle/apps/pootle_app/management/commands/set_filetype.py --- a/pootle/apps/pootle_app/management/commands/set_filetype.py +++ b/pootle/apps/pootle_app/management/commands/set_filetype.py @@ -39,14 +39,8 @@ def get_projects(self): if not self.projects: return Project.objects.all() - projects = [] - for project in self.projects: - # ensure all projects are valid before proceeding - try: - projects.append(Project.objects.get(code=project)) - except Project.DoesNotExist: - raise CommandError("Unrecognized project '%s'" % project) - return projects + + return Project.objects.filter(code__in=self.projects) def get_filetype(self, name): try:
{"golden_diff": "diff --git a/pootle/apps/pootle_app/management/commands/__init__.py b/pootle/apps/pootle_app/management/commands/__init__.py\n--- a/pootle/apps/pootle_app/management/commands/__init__.py\n+++ b/pootle/apps/pootle_app/management/commands/__init__.py\n@@ -12,6 +12,7 @@\n from django.core.management.base import BaseCommand, CommandError\n \n from pootle.runner import set_sync_mode\n+from pootle_language.models import Language\n from pootle_project.models import Project\n \n \n@@ -82,6 +83,26 @@\n self.name, store.pootle_path)\n self.handle_store(store, **options)\n \n+ def check_projects(self, project_codes):\n+ existing_projects = Project.objects.filter(\n+ code__in=project_codes\n+ ).values_list(\"code\", flat=True)\n+ if len(existing_projects) != len(project_codes):\n+ unrecognized_projects = list(set(project_codes) -\n+ set(existing_projects))\n+ raise CommandError(\"Unrecognized projects: %s\" %\n+ unrecognized_projects)\n+\n+ def check_languages(self, language_codes):\n+ existing_languages = Language.objects.filter(\n+ code__in=language_codes\n+ ).values_list(\"code\", flat=True)\n+ if len(existing_languages) != len(language_codes):\n+ unrecognized_languages = list(set(language_codes) -\n+ set(existing_languages))\n+ raise CommandError(\"Unrecognized languages: %s\" %\n+ unrecognized_languages)\n+\n def handle(self, **options):\n # adjust debug level to the verbosity option\n debug_levels = {\n@@ -102,6 +123,10 @@\n \n self.projects = options.pop('projects', [])\n self.languages = options.pop('languages', [])\n+ if self.projects:\n+ self.check_projects(self.projects)\n+ if self.languages:\n+ self.check_languages(self.languages)\n \n # info start\n start = datetime.datetime.now()\ndiff --git a/pootle/apps/pootle_app/management/commands/set_filetype.py b/pootle/apps/pootle_app/management/commands/set_filetype.py\n--- a/pootle/apps/pootle_app/management/commands/set_filetype.py\n+++ b/pootle/apps/pootle_app/management/commands/set_filetype.py\n@@ -39,14 +39,8 @@\n def get_projects(self):\n if not self.projects:\n return Project.objects.all()\n- projects = []\n- for project in self.projects:\n- # ensure all projects are valid before proceeding\n- try:\n- projects.append(Project.objects.get(code=project))\n- except Project.DoesNotExist:\n- raise CommandError(\"Unrecognized project '%s'\" % project)\n- return projects\n+\n+ return Project.objects.filter(code__in=self.projects)\n \n def get_filetype(self, name):\n try:\n", "issue": "update_stores and sync_stores should produce an error if the project doesn't exist\nIf a non-existent project is passed to `update_stores` or `sync_stores` there is no output. I would expect an error:\r\n\r\n```\r\n# pootle update_stores --project=nonexistent-project\r\n# pootle sync_stores --project=nonexistent-project\r\n#\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\n\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\n\nfrom django.core.management.base import CommandError\n\nfrom pootle_format.models import Format\nfrom pootle_project.models import Project\n\nfrom . 
import PootleCommand\n\n\nclass Command(PootleCommand):\n help = \"Manage Store formats.\"\n\n def add_arguments(self, parser):\n super(Command, self).add_arguments(parser)\n parser.add_argument(\n 'filetype',\n action='store',\n help=\"File type to set\")\n parser.add_argument(\n '--from-filetype',\n action='store',\n help=\"Only convert Stores of this file type\")\n parser.add_argument(\n '--matching',\n action='store',\n help=\"Glob match Store path excluding extension\")\n\n def get_projects(self):\n if not self.projects:\n return Project.objects.all()\n projects = []\n for project in self.projects:\n # ensure all projects are valid before proceeding\n try:\n projects.append(Project.objects.get(code=project))\n except Project.DoesNotExist:\n raise CommandError(\"Unrecognized project '%s'\" % project)\n return projects\n\n def get_filetype(self, name):\n try:\n return Format.objects.get(name=name)\n except Format.DoesNotExist:\n raise CommandError(\"Unrecognized filetype '%s'\" % name)\n\n def handle_all(self, **options):\n filetype = self.get_filetype(options[\"filetype\"])\n from_filetype = (\n options[\"from_filetype\"]\n and self.get_filetype(options[\"from_filetype\"])\n or None)\n for project in self.get_projects():\n # add the filetype to project, and convert the stores\n project.filetype_tool.add_filetype(filetype)\n project.filetype_tool.set_filetypes(\n filetype,\n from_filetype=from_filetype,\n matching=options[\"matching\"])\n", "path": "pootle/apps/pootle_app/management/commands/set_filetype.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport datetime\nimport logging\n\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom pootle.runner import set_sync_mode\nfrom pootle_project.models import Project\n\n\nclass SkipChecksMixin(object):\n def check(self, app_configs=None, tags=None, display_num_errors=False,\n include_deployment_checks=False):\n skip_tags = getattr(self, 'skip_system_check_tags', None)\n if skip_tags is not None:\n from django.core.checks.registry import registry\n tags = registry.tags_available() - set(skip_tags)\n\n super(SkipChecksMixin, self).check(\n app_configs=app_configs,\n tags=tags,\n display_num_errors=display_num_errors,\n include_deployment_checks=include_deployment_checks)\n\n\nclass PootleCommand(BaseCommand):\n \"\"\"Base class for handling recursive pootle store management commands.\"\"\"\n\n process_disabled_projects = False\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--project',\n action='append',\n dest='projects',\n help='Project to refresh',\n )\n parser.add_argument(\n '--language',\n action='append',\n dest='languages',\n help='Language to refresh',\n )\n parser.add_argument(\n \"--noinput\",\n action=\"store_true\",\n default=False,\n help=u\"Never prompt for input\",\n )\n parser.add_argument(\n \"--no-rq\",\n action=\"store_true\",\n default=False,\n help=(u\"Run all jobs in a single process, without \"\n \"using rq workers\"),\n )\n\n def __init__(self, *args, **kwargs):\n self.languages = []\n self.projects = []\n super(PootleCommand, self).__init__(*args, **kwargs)\n\n def do_translation_project(self, tp, **options):\n if hasattr(self, \"handle_translation_project\"):\n logging.info(u\"Running %s over %s\", self.name, tp)\n if not 
self.handle_translation_project(tp, **options):\n return\n if hasattr(self, \"handle_all_stores\"):\n logging.info(u\"Running %s over %s's files\", self.name, tp)\n self.handle_all_stores(tp, **options)\n elif hasattr(self, \"handle_store\"):\n store_query = tp.stores.live()\n for store in store_query.iterator():\n logging.info(u\"Running %s over %s\",\n self.name, store.pootle_path)\n self.handle_store(store, **options)\n\n def handle(self, **options):\n # adjust debug level to the verbosity option\n debug_levels = {\n 0: logging.ERROR,\n 1: logging.WARNING,\n 2: logging.INFO,\n 3: logging.DEBUG\n }\n logging.getLogger().setLevel(\n debug_levels.get(options['verbosity'], logging.DEBUG)\n )\n\n # reduce size of parse pool early on\n self.name = self.__class__.__module__.split('.')[-1]\n from pootle_store.fields import TranslationStoreFieldFile\n TranslationStoreFieldFile._store_cache.maxsize = 2\n TranslationStoreFieldFile._store_cache.cullsize = 2\n\n self.projects = options.pop('projects', [])\n self.languages = options.pop('languages', [])\n\n # info start\n start = datetime.datetime.now()\n logging.info('Start running of %s', self.name)\n\n try:\n self.handle_all(**options)\n except Exception as e:\n raise CommandError(e)\n\n # info finish\n end = datetime.datetime.now()\n logging.info('All done for %s in %s', self.name, end - start)\n\n def handle_all(self, **options):\n if options[\"no_rq\"]:\n set_sync_mode(options['noinput'])\n\n if self.process_disabled_projects:\n project_query = Project.objects.all()\n else:\n project_query = Project.objects.enabled()\n\n if self.projects:\n project_query = project_query.filter(code__in=self.projects)\n\n for project in project_query.iterator():\n tp_query = project.translationproject_set.live() \\\n .order_by('language__code')\n\n if self.languages:\n tp_query = tp_query.filter(language__code__in=self.languages)\n\n for tp in tp_query.iterator():\n self.do_translation_project(tp, **options)\n", "path": "pootle/apps/pootle_app/management/commands/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\n\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\n\nfrom django.core.management.base import CommandError\n\nfrom pootle_format.models import Format\nfrom pootle_project.models import Project\n\nfrom . 
import PootleCommand\n\n\nclass Command(PootleCommand):\n help = \"Manage Store formats.\"\n\n def add_arguments(self, parser):\n super(Command, self).add_arguments(parser)\n parser.add_argument(\n 'filetype',\n action='store',\n help=\"File type to set\")\n parser.add_argument(\n '--from-filetype',\n action='store',\n help=\"Only convert Stores of this file type\")\n parser.add_argument(\n '--matching',\n action='store',\n help=\"Glob match Store path excluding extension\")\n\n def get_projects(self):\n if not self.projects:\n return Project.objects.all()\n\n return Project.objects.filter(code__in=self.projects)\n\n def get_filetype(self, name):\n try:\n return Format.objects.get(name=name)\n except Format.DoesNotExist:\n raise CommandError(\"Unrecognized filetype '%s'\" % name)\n\n def handle_all(self, **options):\n filetype = self.get_filetype(options[\"filetype\"])\n from_filetype = (\n options[\"from_filetype\"]\n and self.get_filetype(options[\"from_filetype\"])\n or None)\n for project in self.get_projects():\n # add the filetype to project, and convert the stores\n project.filetype_tool.add_filetype(filetype)\n project.filetype_tool.set_filetypes(\n filetype,\n from_filetype=from_filetype,\n matching=options[\"matching\"])\n", "path": "pootle/apps/pootle_app/management/commands/set_filetype.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport datetime\nimport logging\n\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom pootle.runner import set_sync_mode\nfrom pootle_language.models import Language\nfrom pootle_project.models import Project\n\n\nclass SkipChecksMixin(object):\n def check(self, app_configs=None, tags=None, display_num_errors=False,\n include_deployment_checks=False):\n skip_tags = getattr(self, 'skip_system_check_tags', None)\n if skip_tags is not None:\n from django.core.checks.registry import registry\n tags = registry.tags_available() - set(skip_tags)\n\n super(SkipChecksMixin, self).check(\n app_configs=app_configs,\n tags=tags,\n display_num_errors=display_num_errors,\n include_deployment_checks=include_deployment_checks)\n\n\nclass PootleCommand(BaseCommand):\n \"\"\"Base class for handling recursive pootle store management commands.\"\"\"\n\n process_disabled_projects = False\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--project',\n action='append',\n dest='projects',\n help='Project to refresh',\n )\n parser.add_argument(\n '--language',\n action='append',\n dest='languages',\n help='Language to refresh',\n )\n parser.add_argument(\n \"--noinput\",\n action=\"store_true\",\n default=False,\n help=u\"Never prompt for input\",\n )\n parser.add_argument(\n \"--no-rq\",\n action=\"store_true\",\n default=False,\n help=(u\"Run all jobs in a single process, without \"\n \"using rq workers\"),\n )\n\n def __init__(self, *args, **kwargs):\n self.languages = []\n self.projects = []\n super(PootleCommand, self).__init__(*args, **kwargs)\n\n def do_translation_project(self, tp, **options):\n if hasattr(self, \"handle_translation_project\"):\n logging.info(u\"Running %s over %s\", self.name, tp)\n if not self.handle_translation_project(tp, **options):\n return\n if hasattr(self, \"handle_all_stores\"):\n logging.info(u\"Running %s over %s's files\", self.name, tp)\n 
self.handle_all_stores(tp, **options)\n elif hasattr(self, \"handle_store\"):\n store_query = tp.stores.live()\n for store in store_query.iterator():\n logging.info(u\"Running %s over %s\",\n self.name, store.pootle_path)\n self.handle_store(store, **options)\n\n def check_projects(self, project_codes):\n existing_projects = Project.objects.filter(\n code__in=project_codes\n ).values_list(\"code\", flat=True)\n if len(existing_projects) != len(project_codes):\n unrecognized_projects = list(set(project_codes) -\n set(existing_projects))\n raise CommandError(\"Unrecognized projects: %s\" %\n unrecognized_projects)\n\n def check_languages(self, language_codes):\n existing_languages = Language.objects.filter(\n code__in=language_codes\n ).values_list(\"code\", flat=True)\n if len(existing_languages) != len(language_codes):\n unrecognized_languages = list(set(language_codes) -\n set(existing_languages))\n raise CommandError(\"Unrecognized languages: %s\" %\n unrecognized_languages)\n\n def handle(self, **options):\n # adjust debug level to the verbosity option\n debug_levels = {\n 0: logging.ERROR,\n 1: logging.WARNING,\n 2: logging.INFO,\n 3: logging.DEBUG\n }\n logging.getLogger().setLevel(\n debug_levels.get(options['verbosity'], logging.DEBUG)\n )\n\n # reduce size of parse pool early on\n self.name = self.__class__.__module__.split('.')[-1]\n from pootle_store.fields import TranslationStoreFieldFile\n TranslationStoreFieldFile._store_cache.maxsize = 2\n TranslationStoreFieldFile._store_cache.cullsize = 2\n\n self.projects = options.pop('projects', [])\n self.languages = options.pop('languages', [])\n if self.projects:\n self.check_projects(self.projects)\n if self.languages:\n self.check_languages(self.languages)\n\n # info start\n start = datetime.datetime.now()\n logging.info('Start running of %s', self.name)\n\n try:\n self.handle_all(**options)\n except Exception as e:\n raise CommandError(e)\n\n # info finish\n end = datetime.datetime.now()\n logging.info('All done for %s in %s', self.name, end - start)\n\n def handle_all(self, **options):\n if options[\"no_rq\"]:\n set_sync_mode(options['noinput'])\n\n if self.process_disabled_projects:\n project_query = Project.objects.all()\n else:\n project_query = Project.objects.enabled()\n\n if self.projects:\n project_query = project_query.filter(code__in=self.projects)\n\n for project in project_query.iterator():\n tp_query = project.translationproject_set.live() \\\n .order_by('language__code')\n\n if self.languages:\n tp_query = tp_query.filter(language__code__in=self.languages)\n\n for tp in tp_query.iterator():\n self.do_translation_project(tp, **options)\n", "path": "pootle/apps/pootle_app/management/commands/__init__.py"}]}
2,266
637
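The `check_projects`/`check_languages` helpers closing out the record above share one validation pattern: look up the requested codes, compare counts, and fail loudly with the set difference. Below is a minimal, framework-free sketch of that pattern; the `CommandError` class and the `known_codes` collection are stand-ins for Django's `django.core.management.base.CommandError` and the queryset lookup, introduced here purely for illustration.

```python
# Framework-free sketch of the code-validation pattern used by
# check_projects/check_languages in the PootleCommand record above.
# CommandError and known_codes are illustrative stand-ins, not the
# real Django objects.

class CommandError(Exception):
    pass


def check_codes(requested, known_codes, label="projects"):
    existing = [code for code in requested if code in known_codes]
    if len(existing) != len(requested):
        unrecognized = sorted(set(requested) - set(existing))
        raise CommandError("Unrecognized %s: %s" % (label, unrecognized))


check_codes(["django", "firefox"], {"django", "firefox"})  # passes silently
try:
    check_codes(["djnago"], {"django"}, label="projects")
except CommandError as exc:
    print(exc)  # Unrecognized projects: ['djnago']
```

Reporting the whole set difference, rather than failing on the first miss, lets a user fix every typo in one pass instead of re-running the command per bad code.
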
gh_patches_debug_1832
rasdani/github-patches
git_diff
conan-io__conan-center-index-18494
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [package] clickhouse-cpp/*: fPIC option is not respected In the recipe file fPIC option is always removed during configure stage, which can lead to not working static library. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `recipes/clickhouse-cpp/all/conanfile.py` Content: ``` 1 from conan import ConanFile 2 from conan.tools.cmake import CMake, CMakeToolchain,CMakeDeps, cmake_layout 3 from conan.tools.files import copy, get 4 from conan.tools.build import check_min_cppstd 5 from conan.errors import ConanInvalidConfiguration 6 from conan.tools.scm import Version 7 import os 8 9 required_conan_version = ">=1.53.0" 10 11 class ClickHouseCppConan(ConanFile): 12 name = "clickhouse-cpp" 13 homepage = "https://github.com/ClickHouse/clickhouse-cpp" 14 url = "https://github.com/conan-io/conan-center-index" 15 description = "ClickHouse C++ API" 16 license = "Apache-2.0" 17 topics = ("database", "db", "clickhouse") 18 settings = "os", "arch", "compiler", "build_type" 19 options = { 20 "shared": [True, False], 21 "fPIC": [True, False], 22 "enable_benchmark": [True, False], 23 "with_openssl": [True, False] 24 } 25 default_options = { 26 "shared": False, 27 "fPIC": True, 28 "enable_benchmark": False, 29 "with_openssl": False 30 } 31 32 def requirements(self): 33 34 self.requires("lz4/1.9.4") 35 36 self.requires("abseil/20230125.3", transitive_headers=True) 37 38 self.requires("cityhash/cci.20130801") 39 if self.options.with_openssl: 40 self.requires("openssl/[>=1.1 <4]") 41 42 def build_requirements(self): 43 if self.options.enable_benchmark: 44 self.requires("benchmark/1.8.0") 45 46 @property 47 def _min_cppstd(self): 48 return "17" 49 50 @property 51 def _compilers_minimum_version(self): 52 return { 53 "Visual Studio": "15", 54 "msvc": "191", 55 "gcc": "7", 56 "clang": "6", 57 } 58 59 @property 60 def _requires_compiler_rt(self): 61 return self.settings.compiler == "clang" and (( self.settings.compiler.libcxx in ["libstdc++", "libstdc++11"] and not self.options.shared) or self.settings.compiler.libcxx == "libc++" ) 62 63 def validate(self): 64 if self.settings.compiler.get_safe("cppstd"): 65 check_min_cppstd(self, self._min_cppstd) 66 minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False) 67 if minimum_version and Version(self.settings.compiler.version) < minimum_version: 68 raise ConanInvalidConfiguration(f"{self.ref} requires C++17, which your compiler does not support.") 69 if self.settings.os == "Windows" and self.options.shared: 70 raise ConanInvalidConfiguration("f{self.ref} does not support shared library on Windows.") 71 # look at https://github.com/ClickHouse/clickhouse-cpp/pull/226 72 73 def config_options(self): 74 if self.settings.os == "Windows": 75 del self.options.fPIC 76 77 def configure(self): 78 self.options.rm_safe("fPIC") 79 80 def layout(self): 81 cmake_layout(self, src_folder="src") 82 83 def source(self): 84 get(self, **self.conan_data["sources"][self.version], 85 destination=self.source_folder, strip_root=True) 86 87 def generate(self): 88 tc = CMakeToolchain(self) 89 tc.variables["BUILD_BENCHMARK"] = self.options.enable_benchmark 90 tc.cache_variables["BUILD_SHARED_LIBS"] = self.options.shared 91 tc.variables["WITH_OPENSSL"] = self.options.with_openssl 92 tc.cache_variables["WITH_SYSTEM_ABSEIL"] = True 93 
tc.cache_variables["WITH_SYSTEM_LZ4"] = True 94 tc.cache_variables["WITH_SYSTEM_CITYHASH"] = True 95 tc.generate() 96 97 cd = CMakeDeps(self) 98 cd.generate() 99 100 def build(self): 101 cmake = CMake(self) 102 cmake.configure() 103 cmake.build() 104 105 def package(self): 106 copy(self, "LICENSE", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses")) 107 cmake = CMake(self) 108 cmake.install() 109 110 def package_info(self): 111 self.cpp_info.libs.append("clickhouse-cpp-lib") 112 self.cpp_info.set_property("cmake_target_name", "clickhouse-cpp-lib::clickhouse-cpp-lib") 113 114 if self._requires_compiler_rt: 115 ldflags = ["--rtlib=compiler-rt"] 116 self.cpp_info.exelinkflags = ldflags 117 self.cpp_info.sharedlinkflags = ldflags 118 self.cpp_info.system_libs.append("gcc_s") 119 120 self.cpp_info.filenames["cmake_find_package"] = "clickhouse-cpp" 121 self.cpp_info.filenames["cmake_find_package_multi"] = "clickhouse-cpp" 122 self.cpp_info.names["cmake_find_package"] = "clickhouse-cpp-lib" 123 self.cpp_info.names["cmake_find_package_multi"] = "clickhouse-cpp-lib" 124 125 if self.settings.os == 'Windows': 126 self.cpp_info.system_libs = ['ws2_32', 'wsock32'] 127 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/recipes/clickhouse-cpp/all/conanfile.py b/recipes/clickhouse-cpp/all/conanfile.py --- a/recipes/clickhouse-cpp/all/conanfile.py +++ b/recipes/clickhouse-cpp/all/conanfile.py @@ -75,7 +75,8 @@ del self.options.fPIC def configure(self): - self.options.rm_safe("fPIC") + if self.options.shared: + self.options.rm_safe("fPIC") def layout(self): cmake_layout(self, src_folder="src")
{"golden_diff": "diff --git a/recipes/clickhouse-cpp/all/conanfile.py b/recipes/clickhouse-cpp/all/conanfile.py\n--- a/recipes/clickhouse-cpp/all/conanfile.py\n+++ b/recipes/clickhouse-cpp/all/conanfile.py\n@@ -75,7 +75,8 @@\n del self.options.fPIC\n \n def configure(self):\n- self.options.rm_safe(\"fPIC\")\n+ if self.options.shared:\n+ self.options.rm_safe(\"fPIC\")\n \n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n", "issue": "[package] clickhouse-cpp/*: fPIC option is not respected\nIn the recipe file fPIC option is always removed during configure stage, which can lead to not working static library.\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.tools.cmake import CMake, CMakeToolchain,CMakeDeps, cmake_layout\nfrom conan.tools.files import copy, get\nfrom conan.tools.build import check_min_cppstd\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.scm import Version\nimport os\n\nrequired_conan_version = \">=1.53.0\"\n\nclass ClickHouseCppConan(ConanFile):\n name = \"clickhouse-cpp\"\n homepage = \"https://github.com/ClickHouse/clickhouse-cpp\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"ClickHouse C++ API\"\n license = \"Apache-2.0\"\n topics = (\"database\", \"db\", \"clickhouse\")\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"enable_benchmark\": [True, False],\n \"with_openssl\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"enable_benchmark\": False,\n \"with_openssl\": False\n }\n\n def requirements(self):\n\n self.requires(\"lz4/1.9.4\")\n\n self.requires(\"abseil/20230125.3\", transitive_headers=True)\n\n self.requires(\"cityhash/cci.20130801\")\n if self.options.with_openssl:\n self.requires(\"openssl/[>=1.1 <4]\")\n\n def build_requirements(self):\n if self.options.enable_benchmark:\n self.requires(\"benchmark/1.8.0\")\n\n @property\n def _min_cppstd(self):\n return \"17\"\n\n @property\n def _compilers_minimum_version(self):\n return {\n \"Visual Studio\": \"15\",\n \"msvc\": \"191\",\n \"gcc\": \"7\",\n \"clang\": \"6\",\n }\n\n @property\n def _requires_compiler_rt(self):\n return self.settings.compiler == \"clang\" and (( self.settings.compiler.libcxx in [\"libstdc++\", \"libstdc++11\"] and not self.options.shared) or self.settings.compiler.libcxx == \"libc++\" )\n\n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n check_min_cppstd(self, self._min_cppstd)\n minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)\n if minimum_version and Version(self.settings.compiler.version) < minimum_version:\n raise ConanInvalidConfiguration(f\"{self.ref} requires C++17, which your compiler does not support.\")\n if self.settings.os == \"Windows\" and self.options.shared:\n raise ConanInvalidConfiguration(\"f{self.ref} does not support shared library on Windows.\")\n # look at https://github.com/ClickHouse/clickhouse-cpp/pull/226\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n self.options.rm_safe(\"fPIC\")\n\n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version],\n destination=self.source_folder, strip_root=True)\n\n def generate(self):\n tc = CMakeToolchain(self)\n tc.variables[\"BUILD_BENCHMARK\"] = self.options.enable_benchmark\n 
tc.cache_variables[\"BUILD_SHARED_LIBS\"] = self.options.shared\n tc.variables[\"WITH_OPENSSL\"] = self.options.with_openssl\n tc.cache_variables[\"WITH_SYSTEM_ABSEIL\"] = True\n tc.cache_variables[\"WITH_SYSTEM_LZ4\"] = True\n tc.cache_variables[\"WITH_SYSTEM_CITYHASH\"] = True\n tc.generate()\n\n cd = CMakeDeps(self)\n cd.generate()\n\n def build(self):\n cmake = CMake(self)\n cmake.configure()\n cmake.build()\n\n def package(self):\n copy(self, \"LICENSE\", src=self.source_folder, dst=os.path.join(self.package_folder, \"licenses\"))\n cmake = CMake(self)\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs.append(\"clickhouse-cpp-lib\")\n self.cpp_info.set_property(\"cmake_target_name\", \"clickhouse-cpp-lib::clickhouse-cpp-lib\")\n\n if self._requires_compiler_rt:\n ldflags = [\"--rtlib=compiler-rt\"]\n self.cpp_info.exelinkflags = ldflags\n self.cpp_info.sharedlinkflags = ldflags\n self.cpp_info.system_libs.append(\"gcc_s\")\n\n self.cpp_info.filenames[\"cmake_find_package\"] = \"clickhouse-cpp\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"clickhouse-cpp\"\n self.cpp_info.names[\"cmake_find_package\"] = \"clickhouse-cpp-lib\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"clickhouse-cpp-lib\"\n\n if self.settings.os == 'Windows':\n self.cpp_info.system_libs = ['ws2_32', 'wsock32']\n", "path": "recipes/clickhouse-cpp/all/conanfile.py"}], "after_files": [{"content": "from conan import ConanFile\nfrom conan.tools.cmake import CMake, CMakeToolchain,CMakeDeps, cmake_layout\nfrom conan.tools.files import copy, get\nfrom conan.tools.build import check_min_cppstd\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.scm import Version\nimport os\n\nrequired_conan_version = \">=1.53.0\"\n\nclass ClickHouseCppConan(ConanFile):\n name = \"clickhouse-cpp\"\n homepage = \"https://github.com/ClickHouse/clickhouse-cpp\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"ClickHouse C++ API\"\n license = \"Apache-2.0\"\n topics = (\"database\", \"db\", \"clickhouse\")\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"enable_benchmark\": [True, False],\n \"with_openssl\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"enable_benchmark\": False,\n \"with_openssl\": False\n }\n\n def requirements(self):\n\n self.requires(\"lz4/1.9.4\")\n\n self.requires(\"abseil/20230125.3\", transitive_headers=True)\n\n self.requires(\"cityhash/cci.20130801\")\n if self.options.with_openssl:\n self.requires(\"openssl/[>=1.1 <4]\")\n\n def build_requirements(self):\n if self.options.enable_benchmark:\n self.requires(\"benchmark/1.8.0\")\n\n @property\n def _min_cppstd(self):\n return \"17\"\n\n @property\n def _compilers_minimum_version(self):\n return {\n \"Visual Studio\": \"15\",\n \"msvc\": \"191\",\n \"gcc\": \"7\",\n \"clang\": \"6\",\n }\n\n @property\n def _requires_compiler_rt(self):\n return self.settings.compiler == \"clang\" and (( self.settings.compiler.libcxx in [\"libstdc++\", \"libstdc++11\"] and not self.options.shared) or self.settings.compiler.libcxx == \"libc++\" )\n\n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n check_min_cppstd(self, self._min_cppstd)\n minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)\n if minimum_version and Version(self.settings.compiler.version) < minimum_version:\n raise ConanInvalidConfiguration(f\"{self.ref} 
requires C++17, which your compiler does not support.\")\n if self.settings.os == \"Windows\" and self.options.shared:\n raise ConanInvalidConfiguration(\"f{self.ref} does not support shared library on Windows.\")\n # look at https://github.com/ClickHouse/clickhouse-cpp/pull/226\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n self.options.rm_safe(\"fPIC\")\n\n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version],\n destination=self.source_folder, strip_root=True)\n\n def generate(self):\n tc = CMakeToolchain(self)\n tc.variables[\"BUILD_BENCHMARK\"] = self.options.enable_benchmark\n tc.cache_variables[\"BUILD_SHARED_LIBS\"] = self.options.shared\n tc.variables[\"WITH_OPENSSL\"] = self.options.with_openssl\n tc.cache_variables[\"WITH_SYSTEM_ABSEIL\"] = True\n tc.cache_variables[\"WITH_SYSTEM_LZ4\"] = True\n tc.cache_variables[\"WITH_SYSTEM_CITYHASH\"] = True\n tc.generate()\n\n cd = CMakeDeps(self)\n cd.generate()\n\n def build(self):\n cmake = CMake(self)\n cmake.configure()\n cmake.build()\n\n def package(self):\n copy(self, \"LICENSE\", src=self.source_folder, dst=os.path.join(self.package_folder, \"licenses\"))\n cmake = CMake(self)\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs.append(\"clickhouse-cpp-lib\")\n self.cpp_info.set_property(\"cmake_target_name\", \"clickhouse-cpp-lib::clickhouse-cpp-lib\")\n\n if self._requires_compiler_rt:\n ldflags = [\"--rtlib=compiler-rt\"]\n self.cpp_info.exelinkflags = ldflags\n self.cpp_info.sharedlinkflags = ldflags\n self.cpp_info.system_libs.append(\"gcc_s\")\n\n self.cpp_info.filenames[\"cmake_find_package\"] = \"clickhouse-cpp\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"clickhouse-cpp\"\n self.cpp_info.names[\"cmake_find_package\"] = \"clickhouse-cpp-lib\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"clickhouse-cpp-lib\"\n\n if self.settings.os == 'Windows':\n self.cpp_info.system_libs = ['ws2_32', 'wsock32']\n", "path": "recipes/clickhouse-cpp/all/conanfile.py"}]}
1,719
127
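The golden diff above is a one-guard change: `fPIC` should only be dropped from the options when it is meaningless (Windows) or implied (shared builds), never unconditionally. The sketch below isolates that pattern in a stripped-down Conan 2 recipe; the `demo` package name is a placeholder, and requirements, build steps, and validation from the real recipe are omitted.

```python
# Stripped-down Conan recipe showing only the fPIC handling from the
# clickhouse-cpp fix above. "demo" is a placeholder name.
from conan import ConanFile


class DemoRecipe(ConanFile):
    name = "demo"
    settings = "os", "arch", "compiler", "build_type"
    options = {"shared": [True, False], "fPIC": [True, False]}
    default_options = {"shared": False, "fPIC": True}

    def config_options(self):
        # fPIC has no meaning on Windows, so remove the option entirely.
        if self.settings.os == "Windows":
            del self.options.fPIC

    def configure(self):
        # The original bug: rm_safe("fPIC") ran unconditionally here,
        # silently discarding fPIC=False for static builds. Guarding on
        # shared (where PIC is implied anyway) respects the user's choice.
        if self.options.shared:
            self.options.rm_safe("fPIC")
```

With the guard in place, requesting `fPIC=False` on a static build is actually honored instead of being silently reset to the default.
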
gh_patches_debug_16662
rasdani/github-patches
git_diff
Pylons__pyramid-2918
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Restore the Registry(*args, **kw) API In the course of #2891, the Registry API was changed in a backwards-incompatible way. While the author of the change (PR 2893) couldn't imagine anyone using the keywords API, I immediately ran into the incompatibility with a number of customer projects. Since the change back to the old API shouldn't introduce another backwards incompatibility, I suggest reverting that change as soon as possible. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pyramid/registry.py` Content: ``` 1 import operator 2 import threading 3 4 from zope.interface import implementer 5 6 from zope.interface.registry import Components 7 8 from pyramid.compat import text_ 9 from pyramid.decorator import reify 10 11 from pyramid.interfaces import ( 12 IIntrospector, 13 IIntrospectable, 14 ISettings, 15 ) 16 17 from pyramid.path import ( 18 CALLER_PACKAGE, 19 caller_package, 20 ) 21 22 empty = text_('') 23 24 class Registry(Components, dict): 25 """ A registry object is an :term:`application registry`. 26 27 It is used by the framework itself to perform mappings of URLs to view 28 callables, as well as servicing other various framework duties. A registry 29 has its own internal API, but this API is rarely used by Pyramid 30 application developers (it's usually only used by developers of the 31 Pyramid framework and Pyramid addons). But it has a number of attributes 32 that may be useful to application developers within application code, 33 such as ``settings``, which is a dictionary containing application 34 deployment settings. 35 36 For information about the purpose and usage of the application registry, 37 see :ref:`zca_chapter`. 38 39 The registry may be used both as an :class:`pyramid.interfaces.IDict` and 40 as a Zope component registry. 41 These two ways of storing configuration are independent. 42 Applications will tend to prefer to store information as key-values 43 whereas addons may prefer to use the component registry to avoid naming 44 conflicts and to provide more complex lookup mechanisms. 45 46 The application registry is usually accessed as ``request.registry`` in 47 application code. By the time a registry is used to handle requests it 48 should be considered frozen and read-only. Any changes to its internal 49 state should be done with caution and concern for thread-safety. 
50 51 """ 52 53 # for optimization purposes, if no listeners are listening, don't try 54 # to notify them 55 has_listeners = False 56 57 _settings = None 58 59 def __init__(self, package_name=CALLER_PACKAGE): 60 # add a registry-instance-specific lock, which is used when the lookup 61 # cache is mutated 62 self._lock = threading.Lock() 63 # add a view lookup cache 64 self._clear_view_lookup_cache() 65 if package_name is CALLER_PACKAGE: 66 package_name = caller_package().__name__ 67 Components.__init__(self, package_name) 68 dict.__init__(self) 69 70 def _clear_view_lookup_cache(self): 71 self._view_lookup_cache = {} 72 73 def __nonzero__(self): 74 # defeat bool determination via dict.__len__ 75 return True 76 77 @reify 78 def package_name(self): 79 return self.__name__ 80 81 def registerSubscriptionAdapter(self, *arg, **kw): 82 result = Components.registerSubscriptionAdapter(self, *arg, **kw) 83 self.has_listeners = True 84 return result 85 86 def registerSelfAdapter(self, required=None, provided=None, name=empty, 87 info=empty, event=True): 88 # registerAdapter analogue which always returns the object itself 89 # when required is matched 90 return self.registerAdapter(lambda x: x, required=required, 91 provided=provided, name=name, 92 info=info, event=event) 93 94 def queryAdapterOrSelf(self, object, interface, default=None): 95 # queryAdapter analogue which returns the object if it implements 96 # the interface, otherwise it will return an adaptation to the 97 # interface 98 if not interface.providedBy(object): 99 return self.queryAdapter(object, interface, default=default) 100 return object 101 102 def registerHandler(self, *arg, **kw): 103 result = Components.registerHandler(self, *arg, **kw) 104 self.has_listeners = True 105 return result 106 107 def notify(self, *events): 108 if self.has_listeners: 109 # iterating over subscribers assures they get executed 110 [ _ for _ in self.subscribers(events, None) ] 111 112 # backwards compatibility for code that wants to look up a settings 113 # object via ``registry.getUtility(ISettings)`` 114 def _get_settings(self): 115 return self._settings 116 117 def _set_settings(self, settings): 118 self.registerUtility(settings, ISettings) 119 self._settings = settings 120 121 settings = property(_get_settings, _set_settings) 122 123 @implementer(IIntrospector) 124 class Introspector(object): 125 def __init__(self): 126 self._refs = {} 127 self._categories = {} 128 self._counter = 0 129 130 def add(self, intr): 131 category = self._categories.setdefault(intr.category_name, {}) 132 category[intr.discriminator] = intr 133 category[intr.discriminator_hash] = intr 134 intr.order = self._counter 135 self._counter += 1 136 137 def get(self, category_name, discriminator, default=None): 138 category = self._categories.setdefault(category_name, {}) 139 intr = category.get(discriminator, default) 140 return intr 141 142 def get_category(self, category_name, default=None, sort_key=None): 143 if sort_key is None: 144 sort_key = operator.attrgetter('order') 145 category = self._categories.get(category_name) 146 if category is None: 147 return default 148 values = category.values() 149 values = sorted(set(values), key=sort_key) 150 return [ 151 {'introspectable': intr, 152 'related': self.related(intr)} 153 for intr in values 154 ] 155 156 def categorized(self, sort_key=None): 157 L = [] 158 for category_name in self.categories(): 159 L.append((category_name, self.get_category(category_name, 160 sort_key=sort_key))) 161 return L 162 163 def categories(self): 164 
return sorted(self._categories.keys()) 165 166 def remove(self, category_name, discriminator): 167 intr = self.get(category_name, discriminator) 168 if intr is None: 169 return 170 L = self._refs.pop(intr, []) 171 for d in L: 172 L2 = self._refs[d] 173 L2.remove(intr) 174 category = self._categories[intr.category_name] 175 del category[intr.discriminator] 176 del category[intr.discriminator_hash] 177 178 def _get_intrs_by_pairs(self, pairs): 179 introspectables = [] 180 for pair in pairs: 181 category_name, discriminator = pair 182 intr = self._categories.get(category_name, {}).get(discriminator) 183 if intr is None: 184 raise KeyError((category_name, discriminator)) 185 introspectables.append(intr) 186 return introspectables 187 188 def relate(self, *pairs): 189 introspectables = self._get_intrs_by_pairs(pairs) 190 relatable = ((x,y) for x in introspectables for y in introspectables) 191 for x, y in relatable: 192 L = self._refs.setdefault(x, []) 193 if x is not y and y not in L: 194 L.append(y) 195 196 def unrelate(self, *pairs): 197 introspectables = self._get_intrs_by_pairs(pairs) 198 relatable = ((x,y) for x in introspectables for y in introspectables) 199 for x, y in relatable: 200 L = self._refs.get(x, []) 201 if y in L: 202 L.remove(y) 203 204 def related(self, intr): 205 category_name, discriminator = intr.category_name, intr.discriminator 206 intr = self._categories.get(category_name, {}).get(discriminator) 207 if intr is None: 208 raise KeyError((category_name, discriminator)) 209 return self._refs.get(intr, []) 210 211 @implementer(IIntrospectable) 212 class Introspectable(dict): 213 214 order = 0 # mutated by introspector.add 215 action_info = None # mutated by self.register 216 217 def __init__(self, category_name, discriminator, title, type_name): 218 self.category_name = category_name 219 self.discriminator = discriminator 220 self.title = title 221 self.type_name = type_name 222 self._relations = [] 223 224 def relate(self, category_name, discriminator): 225 self._relations.append((True, category_name, discriminator)) 226 227 def unrelate(self, category_name, discriminator): 228 self._relations.append((False, category_name, discriminator)) 229 230 def _assert_resolved(self): 231 assert undefer(self.discriminator) is self.discriminator 232 233 @property 234 def discriminator_hash(self): 235 self._assert_resolved() 236 return hash(self.discriminator) 237 238 def __hash__(self): 239 self._assert_resolved() 240 return hash((self.category_name,) + (self.discriminator,)) 241 242 def __repr__(self): 243 self._assert_resolved() 244 return '<%s category %r, discriminator %r>' % (self.__class__.__name__, 245 self.category_name, 246 self.discriminator) 247 248 def __nonzero__(self): 249 return True 250 251 __bool__ = __nonzero__ # py3 252 253 def register(self, introspector, action_info): 254 self.discriminator = undefer(self.discriminator) 255 self.action_info = action_info 256 introspector.add(self) 257 for relate, category_name, discriminator in self._relations: 258 discriminator = undefer(discriminator) 259 if relate: 260 method = introspector.relate 261 else: 262 method = introspector.unrelate 263 method( 264 (self.category_name, self.discriminator), 265 (category_name, discriminator) 266 ) 267 268 class Deferred(object): 269 """ Can be used by a third-party configuration extender to wrap a 270 :term:`discriminator` during configuration if an immediately hashable 271 discriminator cannot be computed because it relies on unresolved values. 
272 The function should accept no arguments and should return a hashable 273 discriminator.""" 274 def __init__(self, func): 275 self.func = func 276 277 @reify 278 def value(self): 279 return self.func() 280 281 def resolve(self): 282 return self.value 283 284 def undefer(v): 285 """ Function which accepts an object and returns it unless it is a 286 :class:`pyramid.registry.Deferred` instance. If it is an instance of 287 that class, its ``resolve`` method is called, and the result of the 288 method is returned.""" 289 if isinstance(v, Deferred): 290 v = v.resolve() 291 return v 292 293 class predvalseq(tuple): 294 """ A subtype of tuple used to represent a sequence of predicate values """ 295 pass 296 297 global_registry = Registry('global') 298 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pyramid/registry.py b/pyramid/registry.py --- a/pyramid/registry.py +++ b/pyramid/registry.py @@ -56,7 +56,7 @@ _settings = None - def __init__(self, package_name=CALLER_PACKAGE): + def __init__(self, package_name=CALLER_PACKAGE, *args, **kw): # add a registry-instance-specific lock, which is used when the lookup # cache is mutated self._lock = threading.Lock() @@ -64,7 +64,7 @@ self._clear_view_lookup_cache() if package_name is CALLER_PACKAGE: package_name = caller_package().__name__ - Components.__init__(self, package_name) + Components.__init__(self, package_name, *args, **kw) dict.__init__(self) def _clear_view_lookup_cache(self):
{"golden_diff": "diff --git a/pyramid/registry.py b/pyramid/registry.py\n--- a/pyramid/registry.py\n+++ b/pyramid/registry.py\n@@ -56,7 +56,7 @@\n \n _settings = None\n \n- def __init__(self, package_name=CALLER_PACKAGE):\n+ def __init__(self, package_name=CALLER_PACKAGE, *args, **kw):\n # add a registry-instance-specific lock, which is used when the lookup\n # cache is mutated\n self._lock = threading.Lock()\n@@ -64,7 +64,7 @@\n self._clear_view_lookup_cache()\n if package_name is CALLER_PACKAGE:\n package_name = caller_package().__name__\n- Components.__init__(self, package_name)\n+ Components.__init__(self, package_name, *args, **kw)\n dict.__init__(self)\n \n def _clear_view_lookup_cache(self):\n", "issue": "Restore the Registry(*args, **kw) API\nIn the course of #2891, the Registry API was changed in a backwards-incompatible way. While the author of the change (PR 2893) couldn't imagine anyone using the keywords API, I immediately ran into the incompatibility with a number of customer projects. Since the change back to the old API shouldn't introduce another backwards incompatibility, I suggest reverting that change as soon as possible.\n", "before_files": [{"content": "import operator\nimport threading\n\nfrom zope.interface import implementer\n\nfrom zope.interface.registry import Components\n\nfrom pyramid.compat import text_\nfrom pyramid.decorator import reify\n\nfrom pyramid.interfaces import (\n IIntrospector,\n IIntrospectable,\n ISettings,\n )\n\nfrom pyramid.path import (\n CALLER_PACKAGE,\n caller_package,\n)\n\nempty = text_('')\n\nclass Registry(Components, dict):\n \"\"\" A registry object is an :term:`application registry`.\n\n It is used by the framework itself to perform mappings of URLs to view\n callables, as well as servicing other various framework duties. A registry\n has its own internal API, but this API is rarely used by Pyramid\n application developers (it's usually only used by developers of the\n Pyramid framework and Pyramid addons). But it has a number of attributes\n that may be useful to application developers within application code,\n such as ``settings``, which is a dictionary containing application\n deployment settings.\n\n For information about the purpose and usage of the application registry,\n see :ref:`zca_chapter`.\n\n The registry may be used both as an :class:`pyramid.interfaces.IDict` and\n as a Zope component registry.\n These two ways of storing configuration are independent.\n Applications will tend to prefer to store information as key-values\n whereas addons may prefer to use the component registry to avoid naming\n conflicts and to provide more complex lookup mechanisms.\n\n The application registry is usually accessed as ``request.registry`` in\n application code. By the time a registry is used to handle requests it\n should be considered frozen and read-only. 
Any changes to its internal\n state should be done with caution and concern for thread-safety.\n\n \"\"\"\n\n # for optimization purposes, if no listeners are listening, don't try\n # to notify them\n has_listeners = False\n\n _settings = None\n\n def __init__(self, package_name=CALLER_PACKAGE):\n # add a registry-instance-specific lock, which is used when the lookup\n # cache is mutated\n self._lock = threading.Lock()\n # add a view lookup cache\n self._clear_view_lookup_cache()\n if package_name is CALLER_PACKAGE:\n package_name = caller_package().__name__\n Components.__init__(self, package_name)\n dict.__init__(self)\n\n def _clear_view_lookup_cache(self):\n self._view_lookup_cache = {}\n\n def __nonzero__(self):\n # defeat bool determination via dict.__len__\n return True\n\n @reify\n def package_name(self):\n return self.__name__\n\n def registerSubscriptionAdapter(self, *arg, **kw):\n result = Components.registerSubscriptionAdapter(self, *arg, **kw)\n self.has_listeners = True\n return result\n\n def registerSelfAdapter(self, required=None, provided=None, name=empty,\n info=empty, event=True):\n # registerAdapter analogue which always returns the object itself\n # when required is matched\n return self.registerAdapter(lambda x: x, required=required,\n provided=provided, name=name,\n info=info, event=event)\n\n def queryAdapterOrSelf(self, object, interface, default=None):\n # queryAdapter analogue which returns the object if it implements\n # the interface, otherwise it will return an adaptation to the\n # interface\n if not interface.providedBy(object):\n return self.queryAdapter(object, interface, default=default)\n return object\n\n def registerHandler(self, *arg, **kw):\n result = Components.registerHandler(self, *arg, **kw)\n self.has_listeners = True\n return result\n\n def notify(self, *events):\n if self.has_listeners:\n # iterating over subscribers assures they get executed\n [ _ for _ in self.subscribers(events, None) ]\n\n # backwards compatibility for code that wants to look up a settings\n # object via ``registry.getUtility(ISettings)``\n def _get_settings(self):\n return self._settings\n\n def _set_settings(self, settings):\n self.registerUtility(settings, ISettings)\n self._settings = settings\n\n settings = property(_get_settings, _set_settings)\n\n@implementer(IIntrospector)\nclass Introspector(object):\n def __init__(self):\n self._refs = {}\n self._categories = {}\n self._counter = 0\n\n def add(self, intr):\n category = self._categories.setdefault(intr.category_name, {})\n category[intr.discriminator] = intr\n category[intr.discriminator_hash] = intr\n intr.order = self._counter\n self._counter += 1\n\n def get(self, category_name, discriminator, default=None):\n category = self._categories.setdefault(category_name, {})\n intr = category.get(discriminator, default)\n return intr\n\n def get_category(self, category_name, default=None, sort_key=None):\n if sort_key is None:\n sort_key = operator.attrgetter('order')\n category = self._categories.get(category_name)\n if category is None:\n return default\n values = category.values()\n values = sorted(set(values), key=sort_key)\n return [\n {'introspectable': intr,\n 'related': self.related(intr)}\n for intr in values\n ]\n\n def categorized(self, sort_key=None):\n L = []\n for category_name in self.categories():\n L.append((category_name, self.get_category(category_name,\n sort_key=sort_key)))\n return L\n\n def categories(self):\n return sorted(self._categories.keys())\n\n def remove(self, category_name, 
discriminator):\n intr = self.get(category_name, discriminator)\n if intr is None:\n return\n L = self._refs.pop(intr, [])\n for d in L:\n L2 = self._refs[d]\n L2.remove(intr)\n category = self._categories[intr.category_name]\n del category[intr.discriminator]\n del category[intr.discriminator_hash]\n\n def _get_intrs_by_pairs(self, pairs):\n introspectables = []\n for pair in pairs:\n category_name, discriminator = pair\n intr = self._categories.get(category_name, {}).get(discriminator)\n if intr is None:\n raise KeyError((category_name, discriminator))\n introspectables.append(intr)\n return introspectables\n\n def relate(self, *pairs):\n introspectables = self._get_intrs_by_pairs(pairs)\n relatable = ((x,y) for x in introspectables for y in introspectables)\n for x, y in relatable:\n L = self._refs.setdefault(x, [])\n if x is not y and y not in L:\n L.append(y)\n\n def unrelate(self, *pairs):\n introspectables = self._get_intrs_by_pairs(pairs)\n relatable = ((x,y) for x in introspectables for y in introspectables)\n for x, y in relatable:\n L = self._refs.get(x, [])\n if y in L:\n L.remove(y)\n\n def related(self, intr):\n category_name, discriminator = intr.category_name, intr.discriminator\n intr = self._categories.get(category_name, {}).get(discriminator)\n if intr is None:\n raise KeyError((category_name, discriminator))\n return self._refs.get(intr, [])\n\n@implementer(IIntrospectable)\nclass Introspectable(dict):\n\n order = 0 # mutated by introspector.add\n action_info = None # mutated by self.register\n\n def __init__(self, category_name, discriminator, title, type_name):\n self.category_name = category_name\n self.discriminator = discriminator\n self.title = title\n self.type_name = type_name\n self._relations = []\n\n def relate(self, category_name, discriminator):\n self._relations.append((True, category_name, discriminator))\n\n def unrelate(self, category_name, discriminator):\n self._relations.append((False, category_name, discriminator))\n\n def _assert_resolved(self):\n assert undefer(self.discriminator) is self.discriminator\n\n @property\n def discriminator_hash(self):\n self._assert_resolved()\n return hash(self.discriminator)\n\n def __hash__(self):\n self._assert_resolved()\n return hash((self.category_name,) + (self.discriminator,))\n\n def __repr__(self):\n self._assert_resolved()\n return '<%s category %r, discriminator %r>' % (self.__class__.__name__,\n self.category_name,\n self.discriminator)\n\n def __nonzero__(self):\n return True\n\n __bool__ = __nonzero__ # py3\n\n def register(self, introspector, action_info):\n self.discriminator = undefer(self.discriminator)\n self.action_info = action_info\n introspector.add(self)\n for relate, category_name, discriminator in self._relations:\n discriminator = undefer(discriminator)\n if relate:\n method = introspector.relate\n else:\n method = introspector.unrelate\n method(\n (self.category_name, self.discriminator),\n (category_name, discriminator)\n )\n\nclass Deferred(object):\n \"\"\" Can be used by a third-party configuration extender to wrap a\n :term:`discriminator` during configuration if an immediately hashable\n discriminator cannot be computed because it relies on unresolved values.\n The function should accept no arguments and should return a hashable\n discriminator.\"\"\"\n def __init__(self, func):\n self.func = func\n\n @reify\n def value(self):\n return self.func()\n\n def resolve(self):\n return self.value\n\ndef undefer(v):\n \"\"\" Function which accepts an object and returns it unless it is a\n 
:class:`pyramid.registry.Deferred` instance. If it is an instance of\n that class, its ``resolve`` method is called, and the result of the\n method is returned.\"\"\"\n if isinstance(v, Deferred):\n v = v.resolve()\n return v\n\nclass predvalseq(tuple):\n \"\"\" A subtype of tuple used to represent a sequence of predicate values \"\"\"\n pass\n\nglobal_registry = Registry('global')\n", "path": "pyramid/registry.py"}], "after_files": [{"content": "import operator\nimport threading\n\nfrom zope.interface import implementer\n\nfrom zope.interface.registry import Components\n\nfrom pyramid.compat import text_\nfrom pyramid.decorator import reify\n\nfrom pyramid.interfaces import (\n IIntrospector,\n IIntrospectable,\n ISettings,\n )\n\nfrom pyramid.path import (\n CALLER_PACKAGE,\n caller_package,\n)\n\nempty = text_('')\n\nclass Registry(Components, dict):\n \"\"\" A registry object is an :term:`application registry`.\n\n It is used by the framework itself to perform mappings of URLs to view\n callables, as well as servicing other various framework duties. A registry\n has its own internal API, but this API is rarely used by Pyramid\n application developers (it's usually only used by developers of the\n Pyramid framework and Pyramid addons). But it has a number of attributes\n that may be useful to application developers within application code,\n such as ``settings``, which is a dictionary containing application\n deployment settings.\n\n For information about the purpose and usage of the application registry,\n see :ref:`zca_chapter`.\n\n The registry may be used both as an :class:`pyramid.interfaces.IDict` and\n as a Zope component registry.\n These two ways of storing configuration are independent.\n Applications will tend to prefer to store information as key-values\n whereas addons may prefer to use the component registry to avoid naming\n conflicts and to provide more complex lookup mechanisms.\n\n The application registry is usually accessed as ``request.registry`` in\n application code. By the time a registry is used to handle requests it\n should be considered frozen and read-only. 
Any changes to its internal\n state should be done with caution and concern for thread-safety.\n\n \"\"\"\n\n # for optimization purposes, if no listeners are listening, don't try\n # to notify them\n has_listeners = False\n\n _settings = None\n\n def __init__(self, package_name=CALLER_PACKAGE, *args, **kw):\n # add a registry-instance-specific lock, which is used when the lookup\n # cache is mutated\n self._lock = threading.Lock()\n # add a view lookup cache\n self._clear_view_lookup_cache()\n if package_name is CALLER_PACKAGE:\n package_name = caller_package().__name__\n Components.__init__(self, package_name, *args, **kw)\n dict.__init__(self)\n\n def _clear_view_lookup_cache(self):\n self._view_lookup_cache = {}\n\n def __nonzero__(self):\n # defeat bool determination via dict.__len__\n return True\n\n @reify\n def package_name(self):\n return self.__name__\n\n def registerSubscriptionAdapter(self, *arg, **kw):\n result = Components.registerSubscriptionAdapter(self, *arg, **kw)\n self.has_listeners = True\n return result\n\n def registerSelfAdapter(self, required=None, provided=None, name=empty,\n info=empty, event=True):\n # registerAdapter analogue which always returns the object itself\n # when required is matched\n return self.registerAdapter(lambda x: x, required=required,\n provided=provided, name=name,\n info=info, event=event)\n\n def queryAdapterOrSelf(self, object, interface, default=None):\n # queryAdapter analogue which returns the object if it implements\n # the interface, otherwise it will return an adaptation to the\n # interface\n if not interface.providedBy(object):\n return self.queryAdapter(object, interface, default=default)\n return object\n\n def registerHandler(self, *arg, **kw):\n result = Components.registerHandler(self, *arg, **kw)\n self.has_listeners = True\n return result\n\n def notify(self, *events):\n if self.has_listeners:\n # iterating over subscribers assures they get executed\n [ _ for _ in self.subscribers(events, None) ]\n\n # backwards compatibility for code that wants to look up a settings\n # object via ``registry.getUtility(ISettings)``\n def _get_settings(self):\n return self._settings\n\n def _set_settings(self, settings):\n self.registerUtility(settings, ISettings)\n self._settings = settings\n\n settings = property(_get_settings, _set_settings)\n\n@implementer(IIntrospector)\nclass Introspector(object):\n def __init__(self):\n self._refs = {}\n self._categories = {}\n self._counter = 0\n\n def add(self, intr):\n category = self._categories.setdefault(intr.category_name, {})\n category[intr.discriminator] = intr\n category[intr.discriminator_hash] = intr\n intr.order = self._counter\n self._counter += 1\n\n def get(self, category_name, discriminator, default=None):\n category = self._categories.setdefault(category_name, {})\n intr = category.get(discriminator, default)\n return intr\n\n def get_category(self, category_name, default=None, sort_key=None):\n if sort_key is None:\n sort_key = operator.attrgetter('order')\n category = self._categories.get(category_name)\n if category is None:\n return default\n values = category.values()\n values = sorted(set(values), key=sort_key)\n return [\n {'introspectable': intr,\n 'related': self.related(intr)}\n for intr in values\n ]\n\n def categorized(self, sort_key=None):\n L = []\n for category_name in self.categories():\n L.append((category_name, self.get_category(category_name,\n sort_key=sort_key)))\n return L\n\n def categories(self):\n return sorted(self._categories.keys())\n\n def remove(self, 
category_name, discriminator):\n intr = self.get(category_name, discriminator)\n if intr is None:\n return\n L = self._refs.pop(intr, [])\n for d in L:\n L2 = self._refs[d]\n L2.remove(intr)\n category = self._categories[intr.category_name]\n del category[intr.discriminator]\n del category[intr.discriminator_hash]\n\n def _get_intrs_by_pairs(self, pairs):\n introspectables = []\n for pair in pairs:\n category_name, discriminator = pair\n intr = self._categories.get(category_name, {}).get(discriminator)\n if intr is None:\n raise KeyError((category_name, discriminator))\n introspectables.append(intr)\n return introspectables\n\n def relate(self, *pairs):\n introspectables = self._get_intrs_by_pairs(pairs)\n relatable = ((x,y) for x in introspectables for y in introspectables)\n for x, y in relatable:\n L = self._refs.setdefault(x, [])\n if x is not y and y not in L:\n L.append(y)\n\n def unrelate(self, *pairs):\n introspectables = self._get_intrs_by_pairs(pairs)\n relatable = ((x,y) for x in introspectables for y in introspectables)\n for x, y in relatable:\n L = self._refs.get(x, [])\n if y in L:\n L.remove(y)\n\n def related(self, intr):\n category_name, discriminator = intr.category_name, intr.discriminator\n intr = self._categories.get(category_name, {}).get(discriminator)\n if intr is None:\n raise KeyError((category_name, discriminator))\n return self._refs.get(intr, [])\n\n@implementer(IIntrospectable)\nclass Introspectable(dict):\n\n order = 0 # mutated by introspector.add\n action_info = None # mutated by self.register\n\n def __init__(self, category_name, discriminator, title, type_name):\n self.category_name = category_name\n self.discriminator = discriminator\n self.title = title\n self.type_name = type_name\n self._relations = []\n\n def relate(self, category_name, discriminator):\n self._relations.append((True, category_name, discriminator))\n\n def unrelate(self, category_name, discriminator):\n self._relations.append((False, category_name, discriminator))\n\n def _assert_resolved(self):\n assert undefer(self.discriminator) is self.discriminator\n\n @property\n def discriminator_hash(self):\n self._assert_resolved()\n return hash(self.discriminator)\n\n def __hash__(self):\n self._assert_resolved()\n return hash((self.category_name,) + (self.discriminator,))\n\n def __repr__(self):\n self._assert_resolved()\n return '<%s category %r, discriminator %r>' % (self.__class__.__name__,\n self.category_name,\n self.discriminator)\n\n def __nonzero__(self):\n return True\n\n __bool__ = __nonzero__ # py3\n\n def register(self, introspector, action_info):\n self.discriminator = undefer(self.discriminator)\n self.action_info = action_info\n introspector.add(self)\n for relate, category_name, discriminator in self._relations:\n discriminator = undefer(discriminator)\n if relate:\n method = introspector.relate\n else:\n method = introspector.unrelate\n method(\n (self.category_name, self.discriminator),\n (category_name, discriminator)\n )\n\nclass Deferred(object):\n \"\"\" Can be used by a third-party configuration extender to wrap a\n :term:`discriminator` during configuration if an immediately hashable\n discriminator cannot be computed because it relies on unresolved values.\n The function should accept no arguments and should return a hashable\n discriminator.\"\"\"\n def __init__(self, func):\n self.func = func\n\n @reify\n def value(self):\n return self.func()\n\n def resolve(self):\n return self.value\n\ndef undefer(v):\n \"\"\" Function which accepts an object and returns it 
unless it is a\n :class:`pyramid.registry.Deferred` instance. If it is an instance of\n that class, its ``resolve`` method is called, and the result of the\n method is returned.\"\"\"\n if isinstance(v, Deferred):\n v = v.resolve()\n return v\n\nclass predvalseq(tuple):\n \"\"\" A subtype of tuple used to represent a sequence of predicate values \"\"\"\n pass\n\nglobal_registry = Registry('global')\n", "path": "pyramid/registry.py"}]}
3,384
200
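The pyramid fix above restores an argument-forwarding `__init__`: the subclass grew a Pyramid-specific leading parameter but must keep relaying whatever positional and keyword arguments `Components.__init__` previously accepted. Below is a self-contained sketch of that compatibility pattern; both classes are simplified stand-ins (plain attributes, no `dict` base, no lookup cache), not the real `zope.interface.registry.Components` or `pyramid.registry.Registry`.

```python
# Simplified stand-ins showing only the *args/**kw forwarding that the
# golden diff restores; the real classes carry much more machinery.

class Components:  # stand-in for zope.interface.registry.Components
    def __init__(self, name="", bases=()):
        self.name = name
        self.bases = tuple(bases)


class Registry(Components):
    def __init__(self, package_name="global", *args, **kw):
        # Forwarding keeps pre-existing call sites working, e.g.
        # Registry("app", bases=(parent,)) from before the new parameter.
        Components.__init__(self, package_name, *args, **kw)


parent = Components("parent")
reg = Registry("app", bases=(parent,))
print(reg.name, [b.name for b in reg.bases])  # app ['parent']
```

Because `*args, **kw` sit after the new parameter, the change is purely additive: old positional and keyword calls still resolve against the base signature, so no existing caller breaks.
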
gh_patches_debug_30216
rasdani/github-patches
git_diff
vega__altair-982
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- BUG: if selenium is installed but not properly configured, Altair cannot be imported Fix is to use a more robust lazy import of selenium. The main issue is that ``import altair`` ends up trying to import selenium. It would be better if selenium weren't imported until it is actually needed. Same for other optional imports. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `altair/utils/headless.py` Content: ``` 1 """ 2 Utilities that use selenium + chrome headless to save figures 3 """ 4 5 import contextlib 6 import os 7 import tempfile 8 9 try: 10 import selenium.webdriver 11 except ImportError: 12 selenium = None 13 14 15 @contextlib.contextmanager 16 def temporary_filename(**kwargs): 17 """Create and clean-up a temporary file 18 19 Arguments are the same as those passed to tempfile.mkstemp 20 21 We could use tempfile.NamedTemporaryFile here, but that causes issues on 22 windows (see https://bugs.python.org/issue14243). 23 """ 24 filedescriptor, filename = tempfile.mkstemp(**kwargs) 25 os.close(filedescriptor) 26 27 try: 28 yield filename 29 finally: 30 if os.path.exists(filename): 31 os.remove(filename) 32 33 34 HTML_TEMPLATE = """ 35 <!DOCTYPE html> 36 <html> 37 <head> 38 <title>Embedding Vega-Lite</title> 39 <script src="https://cdn.jsdelivr.net/npm/vega@{vega_version}"></script> 40 <script src="https://cdn.jsdelivr.net/npm/vega-lite@{vegalite_version}"></script> 41 <script src="https://cdn.jsdelivr.net/npm/vega-embed@{vegaembed_version}"></script> 42 </head> 43 <body> 44 <div id="vis"></div> 45 </body> 46 </html> 47 """ 48 49 EXTRACT_CODE = { 50 'png': """ 51 var spec = arguments[0]; 52 var mode = arguments[1]; 53 var scaleFactor = arguments[2]; 54 var done = arguments[3]; 55 56 if(mode === 'vega-lite'){ 57 // compile vega-lite to vega 58 const compiled = vl.compile(spec); 59 spec = compiled.spec; 60 } 61 62 new vega.View(vega.parse(spec), { 63 loader: vega.loader(), 64 logLevel: vega.Warn, 65 renderer: 'none', 66 }) 67 .initialize() 68 .toCanvas(scaleFactor) 69 .then(function(canvas){return canvas.toDataURL('image/png');}) 70 .then(done) 71 .catch(function(err) { console.error(err); }); 72 """, 73 'svg': """ 74 var spec = arguments[0]; 75 var mode = arguments[1]; 76 var scaleFactor = arguments[2]; 77 var done = arguments[3]; 78 79 if(mode === 'vega-lite'){ 80 // compile vega-lite to vega 81 const compiled = vl.compile(spec); 82 spec = compiled.spec; 83 } 84 85 new vega.View(vega.parse(spec), { 86 loader: vega.loader(), 87 logLevel: vega.Warn, 88 renderer: 'none', 89 }) 90 .initialize() 91 .toSVG(scaleFactor) 92 .then(done) 93 .catch(function(err) { console.error(err); }); 94 """, 95 'vega': """ 96 var spec = arguments[0]; 97 var mode = arguments[1]; 98 var done = arguments[3]; 99 100 if(mode === 'vega-lite'){ 101 // compile vega-lite to vega 102 const compiled = vl.compile(spec); 103 spec = compiled.spec; 104 } 105 106 done(spec); 107 """} 108 109 110 def compile_spec(spec, format, mode, 111 vega_version, vegaembed_version, vegalite_version, 112 scale_factor=1, driver_timeout=20, webdriver='chrome'): 113 114 # TODO: detect & use local Jupyter caches of JS packages? 
115 116 if format not in ['png', 'svg', 'vega']: 117 raise NotImplementedError("format must be 'svg', 'png' or 'vega'") 118 119 if mode not in ['vega', 'vega-lite']: 120 raise ValueError("mode must be either 'vega' or 'vega-lite'") 121 122 if vega_version is None: 123 raise ValueError("must specify vega_version") 124 125 if vegaembed_version is None: 126 raise ValueError("must specify vegaembed_version") 127 128 if mode == 'vega-lite' and vegalite_version is None: 129 raise ValueError("must specify vega-lite version") 130 131 if selenium is None: 132 raise ImportError("selenium package is required " 133 "for saving chart as {0}".format(format)) 134 if webdriver == 'chrome': 135 webdriver_class = selenium.webdriver.Chrome 136 webdriver_options_class = selenium.webdriver.chrome.options.Options 137 elif webdriver == 'firefox': 138 webdriver_class = selenium.webdriver.Firefox 139 webdriver_options_class = selenium.webdriver.firefox.options.Options 140 else: 141 raise ValueError("webdriver must be 'chrome' or 'firefox'") 142 143 html = HTML_TEMPLATE.format(vega_version=vega_version, 144 vegalite_version=vegalite_version, 145 vegaembed_version=vegaembed_version) 146 147 webdriver_options = webdriver_options_class() 148 webdriver_options.add_argument("--headless") 149 150 if issubclass(webdriver_class, selenium.webdriver.Chrome): 151 # for linux/osx root user, need to add --no-sandbox option. 152 # since geteuid doesn't exist on windows, we don't check it 153 if hasattr(os, 'geteuid') and (os.geteuid() == 0): 154 webdriver_options.add_argument('--no-sandbox') 155 156 driver = webdriver_class(options=webdriver_options) 157 158 try: 159 driver.set_page_load_timeout(driver_timeout) 160 161 with temporary_filename(suffix='.html') as htmlfile: 162 with open(htmlfile, 'w') as f: 163 f.write(html) 164 driver.get("file://" + htmlfile) 165 online = driver.execute_script("return navigator.onLine") 166 if not online: 167 raise ValueError("Internet connection required for saving " 168 "chart as {0}".format(format)) 169 return driver.execute_async_script(EXTRACT_CODE[format], 170 spec, mode, scale_factor) 171 finally: 172 driver.close() 173 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/altair/utils/headless.py b/altair/utils/headless.py --- a/altair/utils/headless.py +++ b/altair/utils/headless.py @@ -6,11 +6,6 @@ import os import tempfile -try: - import selenium.webdriver -except ImportError: - selenium = None - @contextlib.contextmanager def temporary_filename(**kwargs): @@ -110,9 +105,15 @@ def compile_spec(spec, format, mode, vega_version, vegaembed_version, vegalite_version, scale_factor=1, driver_timeout=20, webdriver='chrome'): - # TODO: detect & use local Jupyter caches of JS packages? + # selenium is an optional dependency, so import it here + try: + import selenium.webdriver + except ImportError: + raise ImportError("selenium package is required " + "for saving chart as {0}".format(format)) + if format not in ['png', 'svg', 'vega']: raise NotImplementedError("format must be 'svg', 'png' or 'vega'") @@ -128,9 +129,6 @@ if mode == 'vega-lite' and vegalite_version is None: raise ValueError("must specify vega-lite version") - if selenium is None: - raise ImportError("selenium package is required " - "for saving chart as {0}".format(format)) if webdriver == 'chrome': webdriver_class = selenium.webdriver.Chrome webdriver_options_class = selenium.webdriver.chrome.options.Options
{"golden_diff": "diff --git a/altair/utils/headless.py b/altair/utils/headless.py\n--- a/altair/utils/headless.py\n+++ b/altair/utils/headless.py\n@@ -6,11 +6,6 @@\n import os\n import tempfile\n \n-try:\n- import selenium.webdriver\n-except ImportError:\n- selenium = None\n-\n \n @contextlib.contextmanager\n def temporary_filename(**kwargs):\n@@ -110,9 +105,15 @@\n def compile_spec(spec, format, mode,\n vega_version, vegaembed_version, vegalite_version,\n scale_factor=1, driver_timeout=20, webdriver='chrome'):\n- \n # TODO: detect & use local Jupyter caches of JS packages?\n \n+ # selenium is an optional dependency, so import it here\n+ try:\n+ import selenium.webdriver\n+ except ImportError:\n+ raise ImportError(\"selenium package is required \"\n+ \"for saving chart as {0}\".format(format))\n+\n if format not in ['png', 'svg', 'vega']:\n raise NotImplementedError(\"format must be 'svg', 'png' or 'vega'\")\n \n@@ -128,9 +129,6 @@\n if mode == 'vega-lite' and vegalite_version is None:\n raise ValueError(\"must specify vega-lite version\")\n \n- if selenium is None:\n- raise ImportError(\"selenium package is required \"\n- \"for saving chart as {0}\".format(format))\n if webdriver == 'chrome':\n webdriver_class = selenium.webdriver.Chrome\n webdriver_options_class = selenium.webdriver.chrome.options.Options\n", "issue": "BUG: if selenium is installed but not properly configured, Altair cannot be imported\nFix is to use a more robust lazy import of selenium.\r\n\r\nThe main issue is that ``import altair`` ends up trying to import selenium. It would be better if selenium weren't imported until it is actually needed. Same for other optional imports.\n", "before_files": [{"content": "\"\"\"\nUtilities that use selenium + chrome headless to save figures\n\"\"\"\n\nimport contextlib\nimport os\nimport tempfile\n\ntry:\n import selenium.webdriver\nexcept ImportError:\n selenium = None\n\n\[email protected]\ndef temporary_filename(**kwargs):\n \"\"\"Create and clean-up a temporary file\n\n Arguments are the same as those passed to tempfile.mkstemp\n\n We could use tempfile.NamedTemporaryFile here, but that causes issues on\n windows (see https://bugs.python.org/issue14243).\n \"\"\"\n filedescriptor, filename = tempfile.mkstemp(**kwargs)\n os.close(filedescriptor)\n\n try:\n yield filename\n finally:\n if os.path.exists(filename):\n os.remove(filename)\n\n\nHTML_TEMPLATE = \"\"\"\n<!DOCTYPE html>\n<html>\n<head>\n <title>Embedding Vega-Lite</title>\n <script src=\"https://cdn.jsdelivr.net/npm/vega@{vega_version}\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-lite@{vegalite_version}\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@{vegaembed_version}\"></script>\n</head>\n<body>\n <div id=\"vis\"></div>\n</body>\n</html>\n\"\"\"\n\nEXTRACT_CODE = {\n'png': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var scaleFactor = arguments[2];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n new vega.View(vega.parse(spec), {\n loader: vega.loader(),\n logLevel: vega.Warn,\n renderer: 'none',\n })\n .initialize()\n .toCanvas(scaleFactor)\n .then(function(canvas){return canvas.toDataURL('image/png');})\n .then(done)\n .catch(function(err) { console.error(err); });\n \"\"\",\n'svg': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var scaleFactor = arguments[2];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite 
to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n new vega.View(vega.parse(spec), {\n loader: vega.loader(),\n logLevel: vega.Warn,\n renderer: 'none',\n })\n .initialize()\n .toSVG(scaleFactor)\n .then(done)\n .catch(function(err) { console.error(err); });\n \"\"\",\n'vega': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n done(spec);\n \"\"\"}\n\n\ndef compile_spec(spec, format, mode,\n vega_version, vegaembed_version, vegalite_version,\n scale_factor=1, driver_timeout=20, webdriver='chrome'):\n \n # TODO: detect & use local Jupyter caches of JS packages?\n\n if format not in ['png', 'svg', 'vega']:\n raise NotImplementedError(\"format must be 'svg', 'png' or 'vega'\")\n\n if mode not in ['vega', 'vega-lite']:\n raise ValueError(\"mode must be either 'vega' or 'vega-lite'\")\n\n if vega_version is None:\n raise ValueError(\"must specify vega_version\")\n\n if vegaembed_version is None:\n raise ValueError(\"must specify vegaembed_version\")\n\n if mode == 'vega-lite' and vegalite_version is None:\n raise ValueError(\"must specify vega-lite version\")\n\n if selenium is None:\n raise ImportError(\"selenium package is required \"\n \"for saving chart as {0}\".format(format))\n if webdriver == 'chrome':\n webdriver_class = selenium.webdriver.Chrome\n webdriver_options_class = selenium.webdriver.chrome.options.Options\n elif webdriver == 'firefox':\n webdriver_class = selenium.webdriver.Firefox\n webdriver_options_class = selenium.webdriver.firefox.options.Options\n else:\n raise ValueError(\"webdriver must be 'chrome' or 'firefox'\")\n\n html = HTML_TEMPLATE.format(vega_version=vega_version,\n vegalite_version=vegalite_version,\n vegaembed_version=vegaembed_version)\n\n webdriver_options = webdriver_options_class()\n webdriver_options.add_argument(\"--headless\")\n\n if issubclass(webdriver_class, selenium.webdriver.Chrome):\n # for linux/osx root user, need to add --no-sandbox option.\n # since geteuid doesn't exist on windows, we don't check it\n if hasattr(os, 'geteuid') and (os.geteuid() == 0):\n webdriver_options.add_argument('--no-sandbox')\n\n driver = webdriver_class(options=webdriver_options)\n\n try:\n driver.set_page_load_timeout(driver_timeout)\n\n with temporary_filename(suffix='.html') as htmlfile:\n with open(htmlfile, 'w') as f:\n f.write(html)\n driver.get(\"file://\" + htmlfile)\n online = driver.execute_script(\"return navigator.onLine\")\n if not online:\n raise ValueError(\"Internet connection required for saving \"\n \"chart as {0}\".format(format))\n return driver.execute_async_script(EXTRACT_CODE[format],\n spec, mode, scale_factor)\n finally:\n driver.close()\n", "path": "altair/utils/headless.py"}], "after_files": [{"content": "\"\"\"\nUtilities that use selenium + chrome headless to save figures\n\"\"\"\n\nimport contextlib\nimport os\nimport tempfile\n\n\[email protected]\ndef temporary_filename(**kwargs):\n \"\"\"Create and clean-up a temporary file\n\n Arguments are the same as those passed to tempfile.mkstemp\n\n We could use tempfile.NamedTemporaryFile here, but that causes issues on\n windows (see https://bugs.python.org/issue14243).\n \"\"\"\n filedescriptor, filename = tempfile.mkstemp(**kwargs)\n os.close(filedescriptor)\n\n try:\n yield filename\n finally:\n if os.path.exists(filename):\n os.remove(filename)\n\n\nHTML_TEMPLATE = \"\"\"\n<!DOCTYPE 
html>\n<html>\n<head>\n <title>Embedding Vega-Lite</title>\n <script src=\"https://cdn.jsdelivr.net/npm/vega@{vega_version}\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-lite@{vegalite_version}\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@{vegaembed_version}\"></script>\n</head>\n<body>\n <div id=\"vis\"></div>\n</body>\n</html>\n\"\"\"\n\nEXTRACT_CODE = {\n'png': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var scaleFactor = arguments[2];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n new vega.View(vega.parse(spec), {\n loader: vega.loader(),\n logLevel: vega.Warn,\n renderer: 'none',\n })\n .initialize()\n .toCanvas(scaleFactor)\n .then(function(canvas){return canvas.toDataURL('image/png');})\n .then(done)\n .catch(function(err) { console.error(err); });\n \"\"\",\n'svg': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var scaleFactor = arguments[2];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n new vega.View(vega.parse(spec), {\n loader: vega.loader(),\n logLevel: vega.Warn,\n renderer: 'none',\n })\n .initialize()\n .toSVG(scaleFactor)\n .then(done)\n .catch(function(err) { console.error(err); });\n \"\"\",\n'vega': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n done(spec);\n \"\"\"}\n\n\ndef compile_spec(spec, format, mode,\n vega_version, vegaembed_version, vegalite_version,\n scale_factor=1, driver_timeout=20, webdriver='chrome'):\n # TODO: detect & use local Jupyter caches of JS packages?\n\n # selenium is an optional dependency, so import it here\n try:\n import selenium.webdriver\n except ImportError:\n raise ImportError(\"selenium package is required \"\n \"for saving chart as {0}\".format(format))\n\n if format not in ['png', 'svg', 'vega']:\n raise NotImplementedError(\"format must be 'svg', 'png' or 'vega'\")\n\n if mode not in ['vega', 'vega-lite']:\n raise ValueError(\"mode must be either 'vega' or 'vega-lite'\")\n\n if vega_version is None:\n raise ValueError(\"must specify vega_version\")\n\n if vegaembed_version is None:\n raise ValueError(\"must specify vegaembed_version\")\n\n if mode == 'vega-lite' and vegalite_version is None:\n raise ValueError(\"must specify vega-lite version\")\n\n if webdriver == 'chrome':\n webdriver_class = selenium.webdriver.Chrome\n webdriver_options_class = selenium.webdriver.chrome.options.Options\n elif webdriver == 'firefox':\n webdriver_class = selenium.webdriver.Firefox\n webdriver_options_class = selenium.webdriver.firefox.options.Options\n else:\n raise ValueError(\"webdriver must be 'chrome' or 'firefox'\")\n\n html = HTML_TEMPLATE.format(vega_version=vega_version,\n vegalite_version=vegalite_version,\n vegaembed_version=vegaembed_version)\n\n webdriver_options = webdriver_options_class()\n webdriver_options.add_argument(\"--headless\")\n\n if issubclass(webdriver_class, selenium.webdriver.Chrome):\n # for linux/osx root user, need to add --no-sandbox option.\n # since geteuid doesn't exist on windows, we don't check it\n if hasattr(os, 'geteuid') and (os.geteuid() == 0):\n webdriver_options.add_argument('--no-sandbox')\n\n driver = webdriver_class(options=webdriver_options)\n\n 
try:\n driver.set_page_load_timeout(driver_timeout)\n\n with temporary_filename(suffix='.html') as htmlfile:\n with open(htmlfile, 'w') as f:\n f.write(html)\n driver.get(\"file://\" + htmlfile)\n online = driver.execute_script(\"return navigator.onLine\")\n if not online:\n raise ValueError(\"Internet connection required for saving \"\n \"chart as {0}\".format(format))\n return driver.execute_async_script(EXTRACT_CODE[format],\n spec, mode, scale_factor)\n finally:\n driver.close()\n", "path": "altair/utils/headless.py"}]}
1,952
351
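The golden diff above fixes the import-time failure by moving the selenium import from module scope into `compile_spec`, so `import altair` no longer touches selenium at all. A minimal sketch of that deferred optional-import pattern (assumption: `render` is a hypothetical stand-in for any feature that needs the optional dependency, not altair's API; only the try/except structure mirrors the record's `compile_spec`):

```
def render(spec, fmt='png'):
    # Deferred import: a missing or misconfigured selenium install can only
    # fail here, when a selenium-backed feature is actually invoked, never
    # at package import time.
    try:
        import selenium.webdriver
    except ImportError:
        raise ImportError("selenium package is required "
                          "for saving chart as {0}".format(fmt))

    options = selenium.webdriver.chrome.options.Options()
    options.add_argument('--headless')
    driver = selenium.webdriver.Chrome(options=options)
    try:
        return spec  # placeholder: the real code renders `spec` via the driver
    finally:
        driver.close()
```

Keeping the import inside the function trades a tiny per-call cost for a package that always imports cleanly, which is exactly the trade the issue asks for.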
gh_patches_debug_61783
rasdani/github-patches
git_diff
electricitymaps__electricitymaps-contrib-1155
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Taiwan TW is offline
Currently, Taiwan is grey and 24-hours-history is empty as well.

- [The link ](http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt) in the [TW.py parser](https://github.com/tmrowco/electricitymap/blob/master/parsers/TW.py) seems to show data, though. 
Maybe there have been some crucial changes?

Some other TW related things that should be fixed:
- The source link on the electricitymap website for Taiwan is not shown / shown as "?".
![image](https://user-images.githubusercontent.com/25743609/36668983-6d5ee628-1af3-11e8-91a0-db8cff5b6a64.png)

- In general, the link in README.md will show 404 error and is leading nowhere. 
--- BEGIN FILES --- Path: `parsers/TW.py` Content: ``` 1 #!/usr/bin/env python3 2 import arrow 3 import requests 4 import pandas 5 import dateutil 6 7 8 def fetch_production(country_code='TW'): 9 url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt' 10 response = requests.get(url) 11 data = response.json() 12 13 dumpDate = data[''] 14 prodData = data['aaData'] 15 16 tz = 'Asia/Taipei' 17 dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz)) 18 19 objData = pandas.DataFrame(prodData) 20 21 objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage', 22 'additional'] 23 24 objData['fueltype'] = objData.fueltype.str.split('(').str[1] 25 objData['fueltype'] = objData.fueltype.str.split(')').str[0] 26 objData.drop('additional', axis=1, inplace=True) 27 objData.drop('percentage', axis=1, inplace=True) 28 29 objData = objData.convert_objects(convert_numeric=True) 30 production = pandas.DataFrame(objData.groupby('fueltype').sum()) 31 production.columns = ['capacity', 'output'] 32 33 coal_capacity = production.ix['Coal'].capacity + production.ix['IPP-Coal'].capacity 34 gas_capacity = production.ix['LNG'].capacity + production.ix['IPP-LNG'].capacity 35 oil_capacity = production.ix['Oil'].capacity + production.ix['Diesel'].capacity 36 37 coal_production = production.ix['Coal'].output + production.ix['IPP-Coal'].output 38 gas_production = production.ix['LNG'].output + production.ix['IPP-LNG'].output 39 oil_production = production.ix['Oil'].output + production.ix['Diesel'].output 40 41 # For storage, note that load will be negative, and generation positive. 42 # We require the opposite 43 44 returndata = { 45 'countryCode': country_code, 46 'datetime': dumpDate.datetime, 47 'production': { 48 'coal': coal_production, 49 'gas': gas_production, 50 'oil': oil_production, 51 'hydro': production.ix['Hydro'].output, 52 'nuclear': production.ix['Nuclear'].output, 53 'solar': production.ix['Solar'].output, 54 'wind': production.ix['Wind'].output, 55 'unknown': production.ix['Co-Gen'].output 56 }, 57 'capacity': { 58 'coal': coal_capacity, 59 'gas': gas_capacity, 60 'oil': oil_capacity, 61 'hydro': production.ix['Hydro'].capacity, 62 'nuclear': production.ix['Nuclear'].capacity, 63 'solar': production.ix['Solar'].capacity, 64 'wind': production.ix['Wind'].capacity, 65 'unknown': production.ix['Co-Gen'].capacity 66 }, 67 'storage': { 68 'hydro': -1 * production.ix['Pumping Load'].output - production.ix['Pumping Gen'].output 69 }, 70 'source': 'taipower.com.tw' 71 } 72 73 return returndata 74 75 76 if __name__ == '__main__': 77 print(fetch_production()) 78 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/parsers/TW.py b/parsers/TW.py --- a/parsers/TW.py +++ b/parsers/TW.py @@ -5,7 +5,7 @@ import dateutil -def fetch_production(country_code='TW'): +def fetch_production(country_code='TW', session=None): url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt' response = requests.get(url) data = response.json()
{"golden_diff": "diff --git a/parsers/TW.py b/parsers/TW.py\n--- a/parsers/TW.py\n+++ b/parsers/TW.py\n@@ -5,7 +5,7 @@\n import dateutil\n \n \n-def fetch_production(country_code='TW'):\n+def fetch_production(country_code='TW', session=None):\n url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'\n response = requests.get(url)\n data = response.json()\n", "issue": "Taiwan TW is offline\nCurrently, Taiwan is grey and 24-hours-history is empty as well.\r\n\r\n- [The link ](http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt) in the [TW.py parser](https://github.com/tmrowco/electricitymap/blob/master/parsers/TW.py) seems to show data, though. \r\nMaybe there have been some crucial changes?\r\n\r\nSome other TW related things that should be fixed:\r\n- The source link on the electricitymap website for Taiwan is not shown / shown as \"?\".\r\n![image](https://user-images.githubusercontent.com/25743609/36668983-6d5ee628-1af3-11e8-91a0-db8cff5b6a64.png)\r\n\r\n- In general, the link in README.md will show 404 error and is leading nowhere. Seems like they updated/revised their website a bit?\r\nHere is the website with the 10-min-generation mix that should be linked in README.md:\r\nhttp://www.taipower.com.tw/tc/page.aspx?mid=206&cid=404&cchk=8ccc1918-8cae-4f40-a2d0-b43454f4f218\r\n\r\n![image](https://user-images.githubusercontent.com/25743609/36669034-afe4857a-1af3-11e8-8be7-b15da8a1b58d.png)\r\n\nTaiwan TW is offline\nCurrently, Taiwan is grey and 24-hours-history is empty as well.\r\n\r\n- [The link ](http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt) in the [TW.py parser](https://github.com/tmrowco/electricitymap/blob/master/parsers/TW.py) seems to show data, though. \r\nMaybe there have been some crucial changes?\r\n\r\nSome other TW related things that should be fixed:\r\n- The source link on the electricitymap website for Taiwan is not shown / shown as \"?\".\r\n![image](https://user-images.githubusercontent.com/25743609/36668983-6d5ee628-1af3-11e8-91a0-db8cff5b6a64.png)\r\n\r\n- In general, the link in README.md will show 404 error and is leading nowhere. 
Seems like they updated/revised their website a bit?\r\nHere is the website with the 10-min-generation mix that should be linked in README.md:\r\nhttp://www.taipower.com.tw/tc/page.aspx?mid=206&cid=404&cchk=8ccc1918-8cae-4f40-a2d0-b43454f4f218\r\n\r\n![image](https://user-images.githubusercontent.com/25743609/36669034-afe4857a-1af3-11e8-8be7-b15da8a1b58d.png)\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport arrow\nimport requests\nimport pandas\nimport dateutil\n\n\ndef fetch_production(country_code='TW'):\n url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'\n response = requests.get(url)\n data = response.json()\n\n dumpDate = data['']\n prodData = data['aaData']\n\n tz = 'Asia/Taipei'\n dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))\n\n objData = pandas.DataFrame(prodData)\n\n objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',\n 'additional']\n\n objData['fueltype'] = objData.fueltype.str.split('(').str[1]\n objData['fueltype'] = objData.fueltype.str.split(')').str[0]\n objData.drop('additional', axis=1, inplace=True)\n objData.drop('percentage', axis=1, inplace=True)\n\n objData = objData.convert_objects(convert_numeric=True)\n production = pandas.DataFrame(objData.groupby('fueltype').sum())\n production.columns = ['capacity', 'output']\n\n coal_capacity = production.ix['Coal'].capacity + production.ix['IPP-Coal'].capacity\n gas_capacity = production.ix['LNG'].capacity + production.ix['IPP-LNG'].capacity\n oil_capacity = production.ix['Oil'].capacity + production.ix['Diesel'].capacity\n\n coal_production = production.ix['Coal'].output + production.ix['IPP-Coal'].output\n gas_production = production.ix['LNG'].output + production.ix['IPP-LNG'].output\n oil_production = production.ix['Oil'].output + production.ix['Diesel'].output\n\n # For storage, note that load will be negative, and generation positive.\n # We require the opposite\n\n returndata = {\n 'countryCode': country_code,\n 'datetime': dumpDate.datetime,\n 'production': {\n 'coal': coal_production,\n 'gas': gas_production,\n 'oil': oil_production,\n 'hydro': production.ix['Hydro'].output,\n 'nuclear': production.ix['Nuclear'].output,\n 'solar': production.ix['Solar'].output,\n 'wind': production.ix['Wind'].output,\n 'unknown': production.ix['Co-Gen'].output\n },\n 'capacity': {\n 'coal': coal_capacity,\n 'gas': gas_capacity,\n 'oil': oil_capacity,\n 'hydro': production.ix['Hydro'].capacity,\n 'nuclear': production.ix['Nuclear'].capacity,\n 'solar': production.ix['Solar'].capacity,\n 'wind': production.ix['Wind'].capacity,\n 'unknown': production.ix['Co-Gen'].capacity\n },\n 'storage': {\n 'hydro': -1 * production.ix['Pumping Load'].output - production.ix['Pumping Gen'].output\n },\n 'source': 'taipower.com.tw'\n }\n\n return returndata\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/TW.py"}], "after_files": [{"content": "#!/usr/bin/env python3\nimport arrow\nimport requests\nimport pandas\nimport dateutil\n\n\ndef fetch_production(country_code='TW', session=None):\n url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'\n response = requests.get(url)\n data = response.json()\n\n dumpDate = data['']\n prodData = data['aaData']\n\n tz = 'Asia/Taipei'\n dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))\n\n objData = pandas.DataFrame(prodData)\n\n objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',\n 
'additional']\n\n objData['fueltype'] = objData.fueltype.str.split('(').str[1]\n objData['fueltype'] = objData.fueltype.str.split(')').str[0]\n objData.drop('additional', axis=1, inplace=True)\n objData.drop('percentage', axis=1, inplace=True)\n\n objData = objData.convert_objects(convert_numeric=True)\n production = pandas.DataFrame(objData.groupby('fueltype').sum())\n production.columns = ['capacity', 'output']\n\n coal_capacity = production.ix['Coal'].capacity + production.ix['IPP-Coal'].capacity\n gas_capacity = production.ix['LNG'].capacity + production.ix['IPP-LNG'].capacity\n oil_capacity = production.ix['Oil'].capacity + production.ix['Diesel'].capacity\n\n coal_production = production.ix['Coal'].output + production.ix['IPP-Coal'].output\n gas_production = production.ix['LNG'].output + production.ix['IPP-LNG'].output\n oil_production = production.ix['Oil'].output + production.ix['Diesel'].output\n\n # For storage, note that load will be negative, and generation positive.\n # We require the opposite\n\n returndata = {\n 'countryCode': country_code,\n 'datetime': dumpDate.datetime,\n 'production': {\n 'coal': coal_production,\n 'gas': gas_production,\n 'oil': oil_production,\n 'hydro': production.ix['Hydro'].output,\n 'nuclear': production.ix['Nuclear'].output,\n 'solar': production.ix['Solar'].output,\n 'wind': production.ix['Wind'].output,\n 'unknown': production.ix['Co-Gen'].output\n },\n 'capacity': {\n 'coal': coal_capacity,\n 'gas': gas_capacity,\n 'oil': oil_capacity,\n 'hydro': production.ix['Hydro'].capacity,\n 'nuclear': production.ix['Nuclear'].capacity,\n 'solar': production.ix['Solar'].capacity,\n 'wind': production.ix['Wind'].capacity,\n 'unknown': production.ix['Co-Gen'].capacity\n },\n 'storage': {\n 'hydro': -1 * production.ix['Pumping Load'].output - production.ix['Pumping Gen'].output\n },\n 'source': 'taipower.com.tw'\n }\n\n return returndata\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/TW.py"}]}
1,822
113
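The golden diff in this record changes only the parser's signature, adding `session=None` so that a caller passing a session keyword no longer breaks the call; the body still uses `requests.get` directly. A sketch of the fuller idiom, actually routing the request through the supplied session, which is an extension beyond the recorded patch rather than part of it:

```
import requests


def fetch_production(country_code='TW', session=None):
    # Reuse the caller's session when one is supplied (connection pooling,
    # shared headers); otherwise fall back to a throwaway session.
    s = session or requests.Session()
    url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'
    response = s.get(url)
    return response.json()
```

Either form keeps the positional interface identical, so existing callers of `fetch_production('TW')` are unaffected.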
gh_patches_debug_14691
rasdani/github-patches
git_diff
google__timesketch-406
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Pylint not present in requirements.txt Not pinning version of Pylint makes our build a bit non-deterministic. Pylint's behavior can change between versions and break our build. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 #!/usr/bin/env python 2 # Copyright 2015 Google Inc. All rights reserved. 3 # 4 # Licensed under the Apache License, Version 2.0 (the "License"); 5 # you may not use this file except in compliance with the License. 6 # You may obtain a copy of the License at 7 # 8 # http://www.apache.org/licenses/LICENSE-2.0 9 # 10 # Unless required by applicable law or agreed to in writing, software 11 # distributed under the License is distributed on an "AS IS" BASIS, 12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 15 """This is the setup file for the project. The standard setup rules apply: 16 17 python setup.py build 18 sudo python setup.py install 19 """ 20 21 import os.path 22 import sys 23 import time 24 25 from setuptools import find_packages 26 from setuptools import setup 27 28 timesketch_version = u'20170721' 29 30 timesketch_description = ( 31 u'Timesketch is a web based tool for collaborative forensic timeline ' 32 u'analysis. Using sketches you and your collaborators can easily organize ' 33 u'timelines and analyze them all at the same time. Add meaning to ' 34 u'your raw data with rich annotations, comments, tags and stars.') 35 36 def check_before_upload(): 37 """Warn user if frontend build is not present or is not recent. 38 39 Make sure that .js and .css bundles included in the PyPI package are up to 40 date. 41 42 Raises: 43 UserWarning 44 """ 45 this_dir = os.path.dirname(__file__) 46 frontend_dist_dir = os.path.join( 47 this_dir, 'timesketch', 'ui', 'static', 'dist', 48 ) 49 js = os.path.join(frontend_dist_dir, 'bundle.js') 50 css = os.path.join(frontend_dist_dir, 'bundle.css') 51 if not (os.path.isfile(js) and os.path.isfile(css)): 52 raise UserWarning( 53 "Build the frontend before uploading to PyPI!" 54 + " (see docs/Developers-Guide.md)" 55 ) 56 mtime = min(os.path.getmtime(js), os.path.getmtime(css)) 57 if time.time() - mtime > 180: 58 raise UserWarning( 59 "Frontend build is older than 3 minutes, please rebuild!" 
60 + " (see docs/Developers-Guide.md)" 61 ) 62 63 if 'upload' in sys.argv: 64 check_before_upload() 65 66 setup( 67 name=u'timesketch', 68 version=timesketch_version, 69 description=u'Digital forensic timeline analysis', 70 long_description=timesketch_description, 71 license=u'Apache License, Version 2.0', 72 url=u'http://www.timesketch.org/', 73 maintainer=u'Timesketch development team', 74 maintainer_email=u'[email protected]', 75 classifiers=[ 76 u'Development Status :: 4 - Beta', 77 u'Environment :: Web Environment', 78 u'Operating System :: OS Independent', 79 u'Programming Language :: Python', 80 ], 81 data_files=[(u'share/timesketch', [u'timesketch.conf'])], 82 packages=find_packages(), 83 include_package_data=True, 84 zip_safe=False, 85 scripts=[u'tsctl'], 86 install_requires=frozenset([ 87 u'Flask', u'Flask-Login', u'Flask-script', u'Flask-SQLAlchemy', 88 u'Flask-Bcrypt', u'Flask-RESTful', u'Flask-WTF', u'Flask-Migrate', 89 u'SQLAlchemy', u'celery', u'redis', u'blinker', u'elasticsearch', 90 u'neo4jrestclient', u'python-dateutil' 91 ])) 92 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -24,6 +24,8 @@ from setuptools import find_packages from setuptools import setup +from pip.req import parse_requirements +from pip.download import PipSession timesketch_version = u'20170721' @@ -83,9 +85,7 @@ include_package_data=True, zip_safe=False, scripts=[u'tsctl'], - install_requires=frozenset([ - u'Flask', u'Flask-Login', u'Flask-script', u'Flask-SQLAlchemy', - u'Flask-Bcrypt', u'Flask-RESTful', u'Flask-WTF', u'Flask-Migrate', - u'SQLAlchemy', u'celery', u'redis', u'blinker', u'elasticsearch', - u'neo4jrestclient', u'python-dateutil' - ])) + install_requires=[str(req.req) for req in parse_requirements( + "requirements.txt", session=PipSession(), + )], +)
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,6 +24,8 @@\n \n from setuptools import find_packages\n from setuptools import setup\n+from pip.req import parse_requirements\n+from pip.download import PipSession\n \n timesketch_version = u'20170721'\n \n@@ -83,9 +85,7 @@\n include_package_data=True,\n zip_safe=False,\n scripts=[u'tsctl'],\n- install_requires=frozenset([\n- u'Flask', u'Flask-Login', u'Flask-script', u'Flask-SQLAlchemy',\n- u'Flask-Bcrypt', u'Flask-RESTful', u'Flask-WTF', u'Flask-Migrate',\n- u'SQLAlchemy', u'celery', u'redis', u'blinker', u'elasticsearch',\n- u'neo4jrestclient', u'python-dateutil'\n- ]))\n+ install_requires=[str(req.req) for req in parse_requirements(\n+ \"requirements.txt\", session=PipSession(),\n+ )],\n+)\n", "issue": "Pylint not present in requirements.txt\nNot pinning version of Pylint makes our build a bit non-deterministic. Pylint's behavior can change between versions and break our build.\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This is the setup file for the project. The standard setup rules apply:\n\n python setup.py build\n sudo python setup.py install\n\"\"\"\n\nimport os.path\nimport sys\nimport time\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\ntimesketch_version = u'20170721'\n\ntimesketch_description = (\n u'Timesketch is a web based tool for collaborative forensic timeline '\n u'analysis. Using sketches you and your collaborators can easily organize '\n u'timelines and analyze them all at the same time. 
Add meaning to '\n u'your raw data with rich annotations, comments, tags and stars.')\n\ndef check_before_upload():\n \"\"\"Warn user if frontend build is not present or is not recent.\n\n Make sure that .js and .css bundles included in the PyPI package are up to\n date.\n\n Raises:\n UserWarning\n \"\"\"\n this_dir = os.path.dirname(__file__)\n frontend_dist_dir = os.path.join(\n this_dir, 'timesketch', 'ui', 'static', 'dist',\n )\n js = os.path.join(frontend_dist_dir, 'bundle.js')\n css = os.path.join(frontend_dist_dir, 'bundle.css')\n if not (os.path.isfile(js) and os.path.isfile(css)):\n raise UserWarning(\n \"Build the frontend before uploading to PyPI!\"\n + \" (see docs/Developers-Guide.md)\"\n )\n mtime = min(os.path.getmtime(js), os.path.getmtime(css))\n if time.time() - mtime > 180:\n raise UserWarning(\n \"Frontend build is older than 3 minutes, please rebuild!\"\n + \" (see docs/Developers-Guide.md)\"\n )\n\nif 'upload' in sys.argv:\n check_before_upload()\n\nsetup(\n name=u'timesketch',\n version=timesketch_version,\n description=u'Digital forensic timeline analysis',\n long_description=timesketch_description,\n license=u'Apache License, Version 2.0',\n url=u'http://www.timesketch.org/',\n maintainer=u'Timesketch development team',\n maintainer_email=u'[email protected]',\n classifiers=[\n u'Development Status :: 4 - Beta',\n u'Environment :: Web Environment',\n u'Operating System :: OS Independent',\n u'Programming Language :: Python',\n ],\n data_files=[(u'share/timesketch', [u'timesketch.conf'])],\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n scripts=[u'tsctl'],\n install_requires=frozenset([\n u'Flask', u'Flask-Login', u'Flask-script', u'Flask-SQLAlchemy',\n u'Flask-Bcrypt', u'Flask-RESTful', u'Flask-WTF', u'Flask-Migrate',\n u'SQLAlchemy', u'celery', u'redis', u'blinker', u'elasticsearch',\n u'neo4jrestclient', u'python-dateutil'\n ]))\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This is the setup file for the project. The standard setup rules apply:\n\n python setup.py build\n sudo python setup.py install\n\"\"\"\n\nimport os.path\nimport sys\nimport time\n\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom pip.req import parse_requirements\nfrom pip.download import PipSession\n\ntimesketch_version = u'20170721'\n\ntimesketch_description = (\n u'Timesketch is a web based tool for collaborative forensic timeline '\n u'analysis. Using sketches you and your collaborators can easily organize '\n u'timelines and analyze them all at the same time. 
Add meaning to '\n u'your raw data with rich annotations, comments, tags and stars.')\n\ndef check_before_upload():\n \"\"\"Warn user if frontend build is not present or is not recent.\n\n Make sure that .js and .css bundles included in the PyPI package are up to\n date.\n\n Raises:\n UserWarning\n \"\"\"\n this_dir = os.path.dirname(__file__)\n frontend_dist_dir = os.path.join(\n this_dir, 'timesketch', 'ui', 'static', 'dist',\n )\n js = os.path.join(frontend_dist_dir, 'bundle.js')\n css = os.path.join(frontend_dist_dir, 'bundle.css')\n if not (os.path.isfile(js) and os.path.isfile(css)):\n raise UserWarning(\n \"Build the frontend before uploading to PyPI!\"\n + \" (see docs/Developers-Guide.md)\"\n )\n mtime = min(os.path.getmtime(js), os.path.getmtime(css))\n if time.time() - mtime > 180:\n raise UserWarning(\n \"Frontend build is older than 3 minutes, please rebuild!\"\n + \" (see docs/Developers-Guide.md)\"\n )\n\nif 'upload' in sys.argv:\n check_before_upload()\n\nsetup(\n name=u'timesketch',\n version=timesketch_version,\n description=u'Digital forensic timeline analysis',\n long_description=timesketch_description,\n license=u'Apache License, Version 2.0',\n url=u'http://www.timesketch.org/',\n maintainer=u'Timesketch development team',\n maintainer_email=u'[email protected]',\n classifiers=[\n u'Development Status :: 4 - Beta',\n u'Environment :: Web Environment',\n u'Operating System :: OS Independent',\n u'Programming Language :: Python',\n ],\n data_files=[(u'share/timesketch', [u'timesketch.conf'])],\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n scripts=[u'tsctl'],\n install_requires=[str(req.req) for req in parse_requirements(\n \"requirements.txt\", session=PipSession(),\n )],\n)\n", "path": "setup.py"}]}
1,291
254
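The patched setup.py above pins dependencies by parsing requirements.txt through `pip.req.parse_requirements` and `pip.download.PipSession`, which were pip internals of that era and were removed when pip 10 moved everything under `pip._internal`. A dependency-free sketch that reads the same pins without importing pip (assumption: requirements.txt holds plain `name==version` lines, no `-r` includes or environment markers):

```
import os


def load_requirements(path='requirements.txt'):
    # Keep only real requirement lines; comments and blanks are skipped.
    here = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(here, path)) as f:
        return [line.strip() for line in f
                if line.strip() and not line.strip().startswith('#')]
```

Used as `install_requires=load_requirements()` inside the `setup(...)` call, this gives the deterministic builds the issue asks for while staying off pip's private APIs.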
gh_patches_debug_29180
rasdani/github-patches
git_diff
digitalfabrik__integreat-cms-417
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Add contribution guide to documentation Add a guide to the documentation which explains how to contribute to this project. It should contain the following: - [x] Bug Reporting Guide - [x] Code Style Guide - [x] GitHub Workflow Guide - [x] Code of Conduct --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `sphinx/conf.py` Content: ``` 1 """ 2 Configuration file for the Sphinx documentation builder. 3 4 This file only contains a selection of the most common options. For a full 5 list see the documentation: 6 https://www.sphinx-doc.org/en/master/usage/configuration.html 7 """ 8 9 # -- Path setup -------------------------------------------------------------- 10 11 import os 12 import sys 13 import inspect 14 import importlib 15 import django 16 17 from sphinx.writers.html import HTMLTranslator 18 19 from backend.settings import VERSION 20 21 # Append project source directory to path environment variable 22 sys.path.append(os.path.abspath('../src/')) 23 os.environ['DJANGO_SETTINGS_MODULE'] = 'backend.settings' 24 25 26 # Setup Django 27 django.setup() 28 29 30 def setup(app): 31 """ 32 Registeration and setup. 33 34 This method does the initial setup for the docs generation. 35 """ 36 # Register the docstring processor with sphinx to improve the appearance of Django models 37 app.connect('autodoc-process-docstring', process_django_models) 38 # Patch HTMLTranslator to open external links in new tab 39 app.set_translator('html', PatchedHTMLTranslator) 40 41 42 # -- Project information ----------------------------------------------------- 43 44 45 project = 'integreat-cms' 46 # pylint: disable=redefined-builtin 47 copyright = '2020, Integreat' 48 author = 'Integreat' 49 50 # The full version, including alpha/beta/rc tags 51 release = VERSION 52 53 # -- General configuration --------------------------------------------------- 54 55 # All enabled sphinx extensions 56 extensions = [ 57 'sphinx.ext.autodoc', 58 'sphinx.ext.githubpages', 59 'sphinx.ext.intersphinx', 60 'sphinx.ext.linkcode', 61 'sphinxcontrib_django', 62 'sphinx_rtd_theme', 63 ] 64 65 # Enable cross-references to other documentations 66 intersphinx_mapping = { 67 'python': ('https://docs.python.org/3.7', None), 68 'sphinx': ('https://www.sphinx-doc.org/en/master/', None), 69 'django': ('https://docs.djangoproject.com/en/2.2/', 70 'https://docs.djangoproject.com/en/2.2/_objects/'), 71 'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None), 72 } 73 74 # The path for patched template files 75 templates_path = ['templates'] 76 77 # -- Options for HTML output ------------------------------------------------- 78 79 # The theme to use for HTML and HTML Help pages. 
80 html_theme = 'sphinx_rtd_theme' 81 # Do not show the project name, only the logo 82 html_theme_options = { 83 'logo_only': True, 84 'collapse_navigation': False, 85 } 86 # The logo shown in the menu bar 87 html_logo = '../src/cms/static/images/integreat-logo-white.png' 88 # The facivon of the html doc files 89 html_favicon = '../src/cms/static/images/favicon.ico' 90 # The url where the docs should be published (via gh-pages) 91 html_baseurl = 'https://Integreat.github.io/cms-django/' 92 # Do not include links to the documentation source (.rst files) in build 93 html_show_sourcelink = False 94 95 # -- Modify default Django model parameter types------------------------------ 96 97 98 # pylint: disable=unused-argument, too-many-locals, too-many-branches 99 def process_django_models(app, what, name, obj, options, lines): 100 """Append correct param types from fields to model documentation.""" 101 if inspect.isclass(obj) and issubclass(obj, django.db.models.Model): 102 # Intersphinx mapping to django.contrib.postgres documentation does not work, so here the manual link 103 postgres_docu = intersphinx_mapping.get('django')[1][0] + 'ref/contrib/postgres/fields/' 104 # include_hidden to get also ManyToManyFields 105 for field in obj._meta.get_fields(include_hidden=True): 106 field_type = type(field).__name__ 107 field_module = type(field).__module__ 108 if field_module == 'django.contrib.postgres.fields.array': 109 # Fix intersphinx mappings for django.contrib.postgres fields 110 type_line = f':type {field.name}: `{field_module}.ArrayField <{postgres_docu}#arrayfield>`_' 111 elif field_module == 'django.contrib.postgres.fields.jsonb': 112 # Fix intersphinx mappings for django.contrib.postgres fields 113 type_line = f':type {field.name}: `{field_module}.JSONField <{postgres_docu}#jsonfield>`_' 114 elif field_module in ['django.db.models.fields.related', 'mptt.fields']: 115 # Fix intersphinx mappings for related fields (ForeignKey, OneToOneField, ManyToManyField, ...) 116 # Also includes related MPTT fields (TreeForeignKey, TreeOneToOneField, TreeManyToManyField, ...) 117 remote_model = field.remote_field.get_related_field().model 118 type_line = f':type {field.name}: {field_type} to :class:`~{remote_model.__module__}.{remote_model.__name__}`' 119 elif field_module == 'django.db.models.fields.reverse_related': 120 # Fix intersphinx mappings for reverse related fields (ManyToOneRel, OneToOneRel, ManyToManyRel, ...) 121 remote_model = field.remote_field.model 122 type_line = f':type {field.name}: Reverse {field_type[:-3]} Relation from :class:`~{remote_model.__module__}.{remote_model.__name__}`' 123 else: 124 if 'django.db.models' in field_module: 125 # Scope with django.db.models * imports (remove all sub-module-paths) 126 field_module = 'django.db.models' 127 # Fix type hint to enable correct intersphinx mappings to other documentations 128 type_line = f':type {field.name}: {field_module}.{field_type}' 129 # This loop gets the indexes which are needed to update the type hints of the model parameters. 130 # It makes it possible to split the parameter section into multiple parts, e.g. params inherited from a base 131 # model and params of a sub model (otherwise the type hints would not be recognized when separated from 132 # the parameter description). 
133 param_index = None 134 next_param_index = None 135 type_index = None 136 for index, line in enumerate(lines): 137 if param_index is None and f':param {field.name}:' in line: 138 # The index of the field param is only used to determine the next param line 139 param_index = index 140 elif param_index is not None and next_param_index is None and (':param ' in line or line == ''): 141 # The line of the next param after the field, this is the index where we will insert the type. 142 # Sometimes the param descriptions extend over multiple lines, so we cannot just do param_index + 1. 143 # If the line is empty, the param description is finished, even if it extends over multiple lines. 144 next_param_index = index 145 elif type_index is None and f':type {field.name}:' in line: 146 # The index of the old type hint, we will either move this line or replace it 147 type_index = index 148 break 149 if next_param_index is None: 150 # In case the current field is the last param, we just append the type at the very end of lines 151 next_param_index = len(lines) 152 # For some params, the type line is not automatically generated and thus the type_index might be `None` 153 if type_index is not None: 154 # We delete the old type index, because we will replace it with the new type line 155 del lines[type_index] 156 # Insert the new type line just before the next param 157 lines.insert(next_param_index, type_line) 158 return lines 159 160 161 # -- Source Code links to GitHub --------------------------------------------- 162 163 164 def linkcode_resolve(domain, info): 165 """Link source code to GitHub.""" 166 if domain != 'py' or not info['module']: 167 return None 168 filename = info['module'].replace('.', '/') 169 module = importlib.import_module(info['module']) 170 basename = os.path.splitext(module.__file__)[0] 171 if basename.endswith('__init__'): 172 filename += '/__init__' 173 item = module 174 line_number_reference = '' 175 for piece in info['fullname'].split('.'): 176 item = getattr(item, piece) 177 try: 178 line_number_reference = f'#L{inspect.getsourcelines(item)[1]}' 179 except (TypeError, IOError): 180 pass 181 return f"https://github.com/Integreat/cms-django/blob/develop/src/{filename}.py{line_number_reference}" 182 183 # -- Link targets ------------------------------------------------------------ 184 185 186 # pylint: disable=abstract-method 187 class PatchedHTMLTranslator(HTMLTranslator): 188 """Open external links in a new tab""" 189 190 def visit_reference(self, node): 191 if ( 192 node.get('newtab') or 193 not ( 194 node.get('target') or 195 node.get('internal') or 196 'refuri' not in node 197 ) 198 ): 199 node['target'] = '_blank' 200 super().visit_reference(node) 201 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/sphinx/conf.py b/sphinx/conf.py --- a/sphinx/conf.py +++ b/sphinx/conf.py @@ -65,7 +65,9 @@ # Enable cross-references to other documentations intersphinx_mapping = { 'python': ('https://docs.python.org/3.7', None), + 'pipenv': ('https://pipenv.pypa.io/en/latest/', None), 'sphinx': ('https://www.sphinx-doc.org/en/master/', None), + 'sphinx-rtd-tutorial': ('https://sphinx-rtd-tutorial.readthedocs.io/en/latest/', None), 'django': ('https://docs.djangoproject.com/en/2.2/', 'https://docs.djangoproject.com/en/2.2/_objects/'), 'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None), @@ -80,7 +82,7 @@ html_theme = 'sphinx_rtd_theme' # Do not show the project name, only the logo html_theme_options = { - 'logo_only': True, + 'logo_only': False, 'collapse_navigation': False, } # The logo shown in the menu bar @@ -91,6 +93,10 @@ html_baseurl = 'https://Integreat.github.io/cms-django/' # Do not include links to the documentation source (.rst files) in build html_show_sourcelink = False +# Do not include a link to sphinx +html_show_sphinx = False +# Include last updated timestamp +html_last_updated_fmt = '%b %d, %Y' # -- Modify default Django model parameter types------------------------------
{"golden_diff": "diff --git a/sphinx/conf.py b/sphinx/conf.py\n--- a/sphinx/conf.py\n+++ b/sphinx/conf.py\n@@ -65,7 +65,9 @@\n # Enable cross-references to other documentations\n intersphinx_mapping = {\n 'python': ('https://docs.python.org/3.7', None),\n+ 'pipenv': ('https://pipenv.pypa.io/en/latest/', None),\n 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),\n+ 'sphinx-rtd-tutorial': ('https://sphinx-rtd-tutorial.readthedocs.io/en/latest/', None),\n 'django': ('https://docs.djangoproject.com/en/2.2/',\n 'https://docs.djangoproject.com/en/2.2/_objects/'),\n 'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None),\n@@ -80,7 +82,7 @@\n html_theme = 'sphinx_rtd_theme'\n # Do not show the project name, only the logo\n html_theme_options = {\n- 'logo_only': True,\n+ 'logo_only': False,\n 'collapse_navigation': False,\n }\n # The logo shown in the menu bar\n@@ -91,6 +93,10 @@\n html_baseurl = 'https://Integreat.github.io/cms-django/'\n # Do not include links to the documentation source (.rst files) in build\n html_show_sourcelink = False\n+# Do not include a link to sphinx\n+html_show_sphinx = False\n+# Include last updated timestamp\n+html_last_updated_fmt = '%b %d, %Y'\n \n # -- Modify default Django model parameter types------------------------------\n", "issue": "Add contribution guide to documentation\nAdd a guide to the documentation which explains how to contribute to this project.\r\n\r\nIt should contain the following:\r\n\r\n- [x] Bug Reporting Guide\r\n- [x] Code Style Guide\r\n- [x] GitHub Workflow Guide\r\n- [x] Code of Conduct\n", "before_files": [{"content": "\"\"\"\nConfiguration file for the Sphinx documentation builder.\n\nThis file only contains a selection of the most common options. For a full\nlist see the documentation:\nhttps://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\nimport inspect\nimport importlib\nimport django\n\nfrom sphinx.writers.html import HTMLTranslator\n\nfrom backend.settings import VERSION\n\n# Append project source directory to path environment variable\nsys.path.append(os.path.abspath('../src/'))\nos.environ['DJANGO_SETTINGS_MODULE'] = 'backend.settings'\n\n\n# Setup Django\ndjango.setup()\n\n\ndef setup(app):\n \"\"\"\n Registeration and setup.\n\n This method does the initial setup for the docs generation.\n \"\"\"\n # Register the docstring processor with sphinx to improve the appearance of Django models\n app.connect('autodoc-process-docstring', process_django_models)\n # Patch HTMLTranslator to open external links in new tab\n app.set_translator('html', PatchedHTMLTranslator)\n\n\n# -- Project information -----------------------------------------------------\n\n\nproject = 'integreat-cms'\n# pylint: disable=redefined-builtin\ncopyright = '2020, Integreat'\nauthor = 'Integreat'\n\n# The full version, including alpha/beta/rc tags\nrelease = VERSION\n\n# -- General configuration ---------------------------------------------------\n\n# All enabled sphinx extensions\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.githubpages',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.linkcode',\n 'sphinxcontrib_django',\n 'sphinx_rtd_theme',\n]\n\n# Enable cross-references to other documentations\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3.7', None),\n 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),\n 'django': ('https://docs.djangoproject.com/en/2.2/',\n 
'https://docs.djangoproject.com/en/2.2/_objects/'),\n 'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None),\n}\n\n# The path for patched template files\ntemplates_path = ['templates']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = 'sphinx_rtd_theme'\n# Do not show the project name, only the logo\nhtml_theme_options = {\n 'logo_only': True,\n 'collapse_navigation': False,\n}\n# The logo shown in the menu bar\nhtml_logo = '../src/cms/static/images/integreat-logo-white.png'\n# The facivon of the html doc files\nhtml_favicon = '../src/cms/static/images/favicon.ico'\n# The url where the docs should be published (via gh-pages)\nhtml_baseurl = 'https://Integreat.github.io/cms-django/'\n# Do not include links to the documentation source (.rst files) in build\nhtml_show_sourcelink = False\n\n# -- Modify default Django model parameter types------------------------------\n\n\n# pylint: disable=unused-argument, too-many-locals, too-many-branches\ndef process_django_models(app, what, name, obj, options, lines):\n \"\"\"Append correct param types from fields to model documentation.\"\"\"\n if inspect.isclass(obj) and issubclass(obj, django.db.models.Model):\n # Intersphinx mapping to django.contrib.postgres documentation does not work, so here the manual link\n postgres_docu = intersphinx_mapping.get('django')[1][0] + 'ref/contrib/postgres/fields/'\n # include_hidden to get also ManyToManyFields\n for field in obj._meta.get_fields(include_hidden=True):\n field_type = type(field).__name__\n field_module = type(field).__module__\n if field_module == 'django.contrib.postgres.fields.array':\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = f':type {field.name}: `{field_module}.ArrayField <{postgres_docu}#arrayfield>`_'\n elif field_module == 'django.contrib.postgres.fields.jsonb':\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = f':type {field.name}: `{field_module}.JSONField <{postgres_docu}#jsonfield>`_'\n elif field_module in ['django.db.models.fields.related', 'mptt.fields']:\n # Fix intersphinx mappings for related fields (ForeignKey, OneToOneField, ManyToManyField, ...)\n # Also includes related MPTT fields (TreeForeignKey, TreeOneToOneField, TreeManyToManyField, ...)\n remote_model = field.remote_field.get_related_field().model\n type_line = f':type {field.name}: {field_type} to :class:`~{remote_model.__module__}.{remote_model.__name__}`'\n elif field_module == 'django.db.models.fields.reverse_related':\n # Fix intersphinx mappings for reverse related fields (ManyToOneRel, OneToOneRel, ManyToManyRel, ...)\n remote_model = field.remote_field.model\n type_line = f':type {field.name}: Reverse {field_type[:-3]} Relation from :class:`~{remote_model.__module__}.{remote_model.__name__}`'\n else:\n if 'django.db.models' in field_module:\n # Scope with django.db.models * imports (remove all sub-module-paths)\n field_module = 'django.db.models'\n # Fix type hint to enable correct intersphinx mappings to other documentations\n type_line = f':type {field.name}: {field_module}.{field_type}'\n # This loop gets the indexes which are needed to update the type hints of the model parameters.\n # It makes it possible to split the parameter section into multiple parts, e.g. 
params inherited from a base\n # model and params of a sub model (otherwise the type hints would not be recognized when separated from\n # the parameter description).\n param_index = None\n next_param_index = None\n type_index = None\n for index, line in enumerate(lines):\n if param_index is None and f':param {field.name}:' in line:\n # The index of the field param is only used to determine the next param line\n param_index = index\n elif param_index is not None and next_param_index is None and (':param ' in line or line == ''):\n # The line of the next param after the field, this is the index where we will insert the type.\n # Sometimes the param descriptions extend over multiple lines, so we cannot just do param_index + 1.\n # If the line is empty, the param description is finished, even if it extends over multiple lines.\n next_param_index = index\n elif type_index is None and f':type {field.name}:' in line:\n # The index of the old type hint, we will either move this line or replace it\n type_index = index\n break\n if next_param_index is None:\n # In case the current field is the last param, we just append the type at the very end of lines\n next_param_index = len(lines)\n # For some params, the type line is not automatically generated and thus the type_index might be `None`\n if type_index is not None:\n # We delete the old type index, because we will replace it with the new type line\n del lines[type_index]\n # Insert the new type line just before the next param\n lines.insert(next_param_index, type_line)\n return lines\n\n\n# -- Source Code links to GitHub ---------------------------------------------\n\n\ndef linkcode_resolve(domain, info):\n \"\"\"Link source code to GitHub.\"\"\"\n if domain != 'py' or not info['module']:\n return None\n filename = info['module'].replace('.', '/')\n module = importlib.import_module(info['module'])\n basename = os.path.splitext(module.__file__)[0]\n if basename.endswith('__init__'):\n filename += '/__init__'\n item = module\n line_number_reference = ''\n for piece in info['fullname'].split('.'):\n item = getattr(item, piece)\n try:\n line_number_reference = f'#L{inspect.getsourcelines(item)[1]}'\n except (TypeError, IOError):\n pass\n return f\"https://github.com/Integreat/cms-django/blob/develop/src/{filename}.py{line_number_reference}\"\n\n# -- Link targets ------------------------------------------------------------\n\n\n# pylint: disable=abstract-method\nclass PatchedHTMLTranslator(HTMLTranslator):\n \"\"\"Open external links in a new tab\"\"\"\n\n def visit_reference(self, node):\n if (\n node.get('newtab') or\n not (\n node.get('target') or\n node.get('internal') or\n 'refuri' not in node\n )\n ):\n node['target'] = '_blank'\n super().visit_reference(node)\n", "path": "sphinx/conf.py"}], "after_files": [{"content": "\"\"\"\nConfiguration file for the Sphinx documentation builder.\n\nThis file only contains a selection of the most common options. For a full\nlist see the documentation:\nhttps://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\nimport inspect\nimport importlib\nimport django\n\nfrom sphinx.writers.html import HTMLTranslator\n\nfrom backend.settings import VERSION\n\n# Append project source directory to path environment variable\nsys.path.append(os.path.abspath('../src/'))\nos.environ['DJANGO_SETTINGS_MODULE'] = 'backend.settings'\n\n\n# Setup Django\ndjango.setup()\n\n\ndef setup(app):\n \"\"\"\n Registeration and setup.\n\n This method does the initial setup for the docs generation.\n \"\"\"\n # Register the docstring processor with sphinx to improve the appearance of Django models\n app.connect('autodoc-process-docstring', process_django_models)\n # Patch HTMLTranslator to open external links in new tab\n app.set_translator('html', PatchedHTMLTranslator)\n\n\n# -- Project information -----------------------------------------------------\n\n\nproject = 'integreat-cms'\n# pylint: disable=redefined-builtin\ncopyright = '2020, Integreat'\nauthor = 'Integreat'\n\n# The full version, including alpha/beta/rc tags\nrelease = VERSION\n\n# -- General configuration ---------------------------------------------------\n\n# All enabled sphinx extensions\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.githubpages',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.linkcode',\n 'sphinxcontrib_django',\n 'sphinx_rtd_theme',\n]\n\n# Enable cross-references to other documentations\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3.7', None),\n 'pipenv': ('https://pipenv.pypa.io/en/latest/', None),\n 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),\n 'sphinx-rtd-tutorial': ('https://sphinx-rtd-tutorial.readthedocs.io/en/latest/', None),\n 'django': ('https://docs.djangoproject.com/en/2.2/',\n 'https://docs.djangoproject.com/en/2.2/_objects/'),\n 'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None),\n}\n\n# The path for patched template files\ntemplates_path = ['templates']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = 'sphinx_rtd_theme'\n# Do not show the project name, only the logo\nhtml_theme_options = {\n 'logo_only': False,\n 'collapse_navigation': False,\n}\n# The logo shown in the menu bar\nhtml_logo = '../src/cms/static/images/integreat-logo-white.png'\n# The facivon of the html doc files\nhtml_favicon = '../src/cms/static/images/favicon.ico'\n# The url where the docs should be published (via gh-pages)\nhtml_baseurl = 'https://Integreat.github.io/cms-django/'\n# Do not include links to the documentation source (.rst files) in build\nhtml_show_sourcelink = False\n# Do not include a link to sphinx\nhtml_show_sphinx = False\n# Include last updated timestamp\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# -- Modify default Django model parameter types------------------------------\n\n\n# pylint: disable=unused-argument, too-many-locals, too-many-branches\ndef process_django_models(app, what, name, obj, options, lines):\n \"\"\"Append correct param types from fields to model documentation.\"\"\"\n if inspect.isclass(obj) and issubclass(obj, django.db.models.Model):\n # Intersphinx mapping to django.contrib.postgres documentation does not work, so here the manual link\n postgres_docu = intersphinx_mapping.get('django')[1][0] + 'ref/contrib/postgres/fields/'\n # include_hidden to get also ManyToManyFields\n for field in obj._meta.get_fields(include_hidden=True):\n field_type = type(field).__name__\n field_module = type(field).__module__\n if field_module == 'django.contrib.postgres.fields.array':\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = f':type {field.name}: `{field_module}.ArrayField <{postgres_docu}#arrayfield>`_'\n elif field_module == 'django.contrib.postgres.fields.jsonb':\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = f':type {field.name}: `{field_module}.JSONField <{postgres_docu}#jsonfield>`_'\n elif field_module in ['django.db.models.fields.related', 'mptt.fields']:\n # Fix intersphinx mappings for related fields (ForeignKey, OneToOneField, ManyToManyField, ...)\n # Also includes related MPTT fields (TreeForeignKey, TreeOneToOneField, TreeManyToManyField, ...)\n remote_model = field.remote_field.get_related_field().model\n type_line = f':type {field.name}: {field_type} to :class:`~{remote_model.__module__}.{remote_model.__name__}`'\n elif field_module == 'django.db.models.fields.reverse_related':\n # Fix intersphinx mappings for reverse related fields (ManyToOneRel, OneToOneRel, ManyToManyRel, ...)\n remote_model = field.remote_field.model\n type_line = f':type {field.name}: Reverse {field_type[:-3]} Relation from :class:`~{remote_model.__module__}.{remote_model.__name__}`'\n else:\n if 'django.db.models' in field_module:\n # Scope with django.db.models * imports (remove all sub-module-paths)\n field_module = 'django.db.models'\n # Fix type hint to enable correct intersphinx mappings to other documentations\n type_line = f':type {field.name}: {field_module}.{field_type}'\n # This loop gets the indexes which are needed to update the type hints of the model parameters.\n # It makes it possible to split the parameter section into multiple parts, e.g. params inherited from a base\n # model and params of a sub model (otherwise the type hints would not be recognized when separated from\n # the parameter description).\n param_index = None\n next_param_index = None\n type_index = None\n for index, line in enumerate(lines):\n if param_index is None and f':param {field.name}:' in line:\n # The index of the field param is only used to determine the next param line\n param_index = index\n elif param_index is not None and next_param_index is None and (':param ' in line or line == ''):\n # The line of the next param after the field, this is the index where we will insert the type.\n # Sometimes the param descriptions extend over multiple lines, so we cannot just do param_index + 1.\n # If the line is empty, the param description is finished, even if it extends over multiple lines.\n next_param_index = index\n elif type_index is None and f':type {field.name}:' in line:\n # The index of the old type hint, we will either move this line or replace it\n type_index = index\n break\n if next_param_index is None:\n # In case the current field is the last param, we just append the type at the very end of lines\n next_param_index = len(lines)\n # For some params, the type line is not automatically generated and thus the type_index might be `None`\n if type_index is not None:\n # We delete the old type index, because we will replace it with the new type line\n del lines[type_index]\n # Insert the new type line just before the next param\n lines.insert(next_param_index, type_line)\n return lines\n\n\n# -- Source Code links to GitHub ---------------------------------------------\n\n\ndef linkcode_resolve(domain, info):\n \"\"\"Link source code to GitHub.\"\"\"\n if domain != 'py' or not info['module']:\n return None\n filename = info['module'].replace('.', '/')\n module = importlib.import_module(info['module'])\n basename = os.path.splitext(module.__file__)[0]\n if basename.endswith('__init__'):\n filename += '/__init__'\n item = module\n line_number_reference = ''\n for piece in info['fullname'].split('.'):\n item = getattr(item, piece)\n try:\n line_number_reference = f'#L{inspect.getsourcelines(item)[1]}'\n except (TypeError, IOError):\n pass\n return f\"https://github.com/Integreat/cms-django/blob/develop/src/{filename}.py{line_number_reference}\"\n\n# -- Link targets ------------------------------------------------------------\n\n\n# pylint: disable=abstract-method\nclass PatchedHTMLTranslator(HTMLTranslator):\n \"\"\"Open external links in a new tab\"\"\"\n\n def visit_reference(self, node):\n if (\n node.get('newtab') or\n not (\n node.get('target') or\n node.get('internal') or\n 'refuri' not in node\n )\n ):\n node['target'] = '_blank'\n super().visit_reference(node)\n", "path": "sphinx/conf.py"}]}
2,700
368
gh_patches_debug_18530
rasdani/github-patches
git_diff
pulp__pulpcore-5377
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Task cleanup must not delete content nor artifacts Deleting content or artifacts outside of orphan cleanup is breaking the rules. And no, we cannot get away with that. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pulpcore/tasking/util.py` Content: ``` 1 import logging 2 from gettext import gettext as _ 3 4 from django.db import transaction 5 from django.db import connection 6 7 from pulpcore.app.models import Task 8 from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES 9 10 _logger = logging.getLogger(__name__) 11 12 13 def cancel(task_id): 14 """ 15 Cancel the task that is represented by the given task_id. 16 17 This method cancels only the task with given task_id, not the spawned tasks. This also updates 18 task's state to either 'canceled' or 'canceling'. 19 20 Args: 21 task_id (str): The ID of the task you wish to cancel 22 23 Raises: 24 rest_framework.exceptions.NotFound: If a task with given task_id does not exist 25 """ 26 task_status = Task.objects.get(pk=task_id) 27 28 if task_status.state in TASK_FINAL_STATES: 29 # If the task is already done, just stop 30 _logger.debug( 31 "Task [{task_id}] already in a final state: {state}".format( 32 task_id=task_id, state=task_status.state 33 ) 34 ) 35 return task_status 36 37 _logger.info(_("Canceling task: {id}").format(id=task_id)) 38 39 task = task_status 40 # This is the only valid transition without holding the task lock 41 rows = Task.objects.filter(pk=task.pk, state__in=TASK_INCOMPLETE_STATES).update( 42 state=TASK_STATES.CANCELING 43 ) 44 # Notify the worker that might be running that task and other workers to clean up 45 with connection.cursor() as cursor: 46 cursor.execute("SELECT pg_notify('pulp_worker_cancel', %s)", (str(task.pk),)) 47 cursor.execute("NOTIFY pulp_worker_wakeup") 48 if rows == 1: 49 task.refresh_from_db() 50 return task 51 52 53 def _delete_incomplete_resources(task): 54 """ 55 Delete all incomplete created-resources on a canceled task. 56 57 Args: 58 task (Task): A task. 59 """ 60 if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]: 61 raise RuntimeError(_("Task must be canceled.")) 62 for model in (r.content_object for r in task.created_resources.all()): 63 try: 64 if model.complete: 65 continue 66 except AttributeError: 67 continue 68 try: 69 with transaction.atomic(): 70 model.delete() 71 except Exception as error: 72 _logger.error(_("Delete created resource, failed: {}").format(str(error))) 73 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py --- a/pulpcore/tasking/util.py +++ b/pulpcore/tasking/util.py @@ -4,7 +4,7 @@ from django.db import transaction from django.db import connection -from pulpcore.app.models import Task +from pulpcore.app.models import Artifact, Content, Task from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES _logger = logging.getLogger(__name__) @@ -60,6 +60,8 @@ if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]: raise RuntimeError(_("Task must be canceled.")) for model in (r.content_object for r in task.created_resources.all()): + if isinstance(model, (Artifact, Content)): + continue try: if model.complete: continue
{"golden_diff": "diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py\n--- a/pulpcore/tasking/util.py\n+++ b/pulpcore/tasking/util.py\n@@ -4,7 +4,7 @@\n from django.db import transaction\n from django.db import connection\n \n-from pulpcore.app.models import Task\n+from pulpcore.app.models import Artifact, Content, Task\n from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES\n \n _logger = logging.getLogger(__name__)\n@@ -60,6 +60,8 @@\n if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:\n raise RuntimeError(_(\"Task must be canceled.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n+ if isinstance(model, (Artifact, Content)):\n+ continue\n try:\n if model.complete:\n continue\n", "issue": "Task cleanup must not delete content nor artifacts\nDeleting content or artifacts outside of orphan cleanup is breaking the rules.\r\nAnd no, we cannot get away with that.\r\n\n", "before_files": [{"content": "import logging\nfrom gettext import gettext as _\n\nfrom django.db import transaction\nfrom django.db import connection\n\nfrom pulpcore.app.models import Task\nfrom pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES\n\n_logger = logging.getLogger(__name__)\n\n\ndef cancel(task_id):\n \"\"\"\n Cancel the task that is represented by the given task_id.\n\n This method cancels only the task with given task_id, not the spawned tasks. This also updates\n task's state to either 'canceled' or 'canceling'.\n\n Args:\n task_id (str): The ID of the task you wish to cancel\n\n Raises:\n rest_framework.exceptions.NotFound: If a task with given task_id does not exist\n \"\"\"\n task_status = Task.objects.get(pk=task_id)\n\n if task_status.state in TASK_FINAL_STATES:\n # If the task is already done, just stop\n _logger.debug(\n \"Task [{task_id}] already in a final state: {state}\".format(\n task_id=task_id, state=task_status.state\n )\n )\n return task_status\n\n _logger.info(_(\"Canceling task: {id}\").format(id=task_id))\n\n task = task_status\n # This is the only valid transition without holding the task lock\n rows = Task.objects.filter(pk=task.pk, state__in=TASK_INCOMPLETE_STATES).update(\n state=TASK_STATES.CANCELING\n )\n # Notify the worker that might be running that task and other workers to clean up\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_notify('pulp_worker_cancel', %s)\", (str(task.pk),))\n cursor.execute(\"NOTIFY pulp_worker_wakeup\")\n if rows == 1:\n task.refresh_from_db()\n return task\n\n\ndef _delete_incomplete_resources(task):\n \"\"\"\n Delete all incomplete created-resources on a canceled task.\n\n Args:\n task (Task): A task.\n \"\"\"\n if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:\n raise RuntimeError(_(\"Task must be canceled.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n try:\n if model.complete:\n continue\n except AttributeError:\n continue\n try:\n with transaction.atomic():\n model.delete()\n except Exception as error:\n _logger.error(_(\"Delete created resource, failed: {}\").format(str(error)))\n", "path": "pulpcore/tasking/util.py"}], "after_files": [{"content": "import logging\nfrom gettext import gettext as _\n\nfrom django.db import transaction\nfrom django.db import connection\n\nfrom pulpcore.app.models import Artifact, Content, Task\nfrom pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES\n\n_logger = logging.getLogger(__name__)\n\n\ndef 
cancel(task_id):\n \"\"\"\n Cancel the task that is represented by the given task_id.\n\n This method cancels only the task with given task_id, not the spawned tasks. This also updates\n task's state to either 'canceled' or 'canceling'.\n\n Args:\n task_id (str): The ID of the task you wish to cancel\n\n Raises:\n rest_framework.exceptions.NotFound: If a task with given task_id does not exist\n \"\"\"\n task_status = Task.objects.get(pk=task_id)\n\n if task_status.state in TASK_FINAL_STATES:\n # If the task is already done, just stop\n _logger.debug(\n \"Task [{task_id}] already in a final state: {state}\".format(\n task_id=task_id, state=task_status.state\n )\n )\n return task_status\n\n _logger.info(_(\"Canceling task: {id}\").format(id=task_id))\n\n task = task_status\n # This is the only valid transition without holding the task lock\n rows = Task.objects.filter(pk=task.pk, state__in=TASK_INCOMPLETE_STATES).update(\n state=TASK_STATES.CANCELING\n )\n # Notify the worker that might be running that task and other workers to clean up\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_notify('pulp_worker_cancel', %s)\", (str(task.pk),))\n cursor.execute(\"NOTIFY pulp_worker_wakeup\")\n if rows == 1:\n task.refresh_from_db()\n return task\n\n\ndef _delete_incomplete_resources(task):\n \"\"\"\n Delete all incomplete created-resources on a canceled task.\n\n Args:\n task (Task): A task.\n \"\"\"\n if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:\n raise RuntimeError(_(\"Task must be canceled.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n if isinstance(model, (Artifact, Content)):\n continue\n try:\n if model.complete:\n continue\n except AttributeError:\n continue\n try:\n with transaction.atomic():\n model.delete()\n except Exception as error:\n _logger.error(_(\"Delete created resource, failed: {}\").format(str(error)))\n", "path": "pulpcore/tasking/util.py"}]}
956
190
gh_patches_debug_10535
rasdani/github-patches
git_diff
oppia__oppia-16730
We are currently solving the following issue within our repository. Here is the issue text:\n--- BEGIN ISSUE ---\n[Feature Request]: Consider allowing Python version to be 3.8.x where x >= 12\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### A clear and concise description of what you want to happen.\n\nIn https://github.com/oppia/oppia/issues/16436, @sagangwee noted that he couldn't get Python 3.8.12 to work on his machine, but Python 3.8.13 worked fine. We might want to consider allowing 3.8.13 etc. in the start script checks for local developers.\n\n### Describe the solution you'd like\n\nConsider expanding the start.py script checks to include later versions of Python that are still on the 3.8.x series.\n\n### Describe alternatives you've considered\n\nWe could leave the existing behaviour as-is, but this poses serious problems for developers who can't install 3.8.12. \n\n### Additional context\n\n_No response_\n--- END ISSUE ---\n\nBelow are some code segments, each from a relevant file. One or more of these files may contain bugs.\n\n--- BEGIN FILES ---\nPath: `scripts/setup.py`\nContent:\n```\n 1 # Copyright 2019 The Oppia Authors. All Rights Reserved.\n 2 #\n 3 # Licensed under the Apache License, Version 2.0 (the 'License');\n 4 # you may not use this file except in compliance with the License.\n 5 # You may obtain a copy of the License at\n 6 #\n 7 # http://www.apache.org/licenses/LICENSE-2.0\n 8 #\n 9 # Unless required by applicable law or agreed to in writing, software\n 10 # distributed under the License is distributed on an 'AS-IS' BASIS,\n 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n 12 # See the License for the specific language governing permissions and\n 13 # limitations under the License.\n 14 \n 15 """Python execution environent set up for all scripts."""\n 16 \n 17 from __future__ import annotations\n 18 \n 19 import argparse\n 20 import os\n 21 import subprocess\n 22 import sys\n 23 import tarfile\n 24 \n 25 from typing import Final, List, Optional\n 26 \n 27 from . import clean\n 28 from . import common\n 29 \n 30 _PARSER: Final = argparse.ArgumentParser(\n 31 description="""\n 32 Python execution environent set up for all scripts.\n 33 """)\n 34 \n 35 \n 36 def create_directory(directory_path: str) -> None:\n 37 """Creates a new directory. Does not do anything if directory already\n 38 exists.\n 39 \n 40 Args:\n 41 directory_path: str. Directory path to be created.\n 42 """\n 43 if os.path.exists(directory_path):\n 44 return\n 45 os.makedirs(directory_path)\n 46 \n 47 \n 48 # This function takes a command for python as its only input.\n 49 # It checks this input for a specific version of python and returns false\n 50 # if it does not match the expected prefix.\n 51 def test_python_version() -> None:\n 52 running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info)\n 53 if running_python_version != '3.8.12':\n 54 print('Please use Python 3.8.12. Exiting...')\n 55 # If OS is Windows, print helpful error message about adding Python to\n 56 # path.\n 57 if common.is_windows_os():\n 58 common.print_each_string_after_two_new_lines([\n 59 'It looks like you are using Windows. If you have Python '\n 60 'installed,',\n 61 'make sure it is in your PATH and that PYTHONPATH is set.',\n 62 'If you have two versions of Python (ie, Python 2.7 and 3), '\n 63 'specify 2.7 before other versions of Python when setting the '\n 64 'PATH.',\n 65 'Here are some helpful articles:',\n 66 'http://docs.python-guide.org/en/latest/starting/install/win/',\n 67 'https://stackoverflow.com/questions/3701646/how-to-add-to-the-'\n 68 'pythonpath-in-windows-7'])\n 69 # Exit when no suitable Python environment can be found.\n 70 raise Exception('No suitable python version found.')\n 71 \n 72 # Verify that Python 2 is available. Python 2 is needed for the\n 73 # app_devserver. See the Google Cloud docs:\n 74 # https://cloud.google.com/appengine/docs/standard/python3/testing-and-deploying-your-app#local-dev-server\n 75 return_code = subprocess.call(\n 76 'python2 -V', stderr=subprocess.DEVNULL, shell=True\n 77 )\n 78 if return_code != 0:\n 79 print(\n 80 '\\033[91m'\n 81 'The Oppia server needs Python 2 to be installed. '\n 82 'Please follow the instructions at '\n 83 'https://github.com/oppia/oppia/wiki/Troubleshooting#'\n 84 'python-2-is-not-available to fix this.'\n 85 '\\033[0m'\n 86 )\n 87 sys.exit(1)\n 88 \n 89 \n 90 def download_and_install_package(url_to_retrieve: str, filename: str) -> None:\n 91 """Downloads and installs package in Oppia tools directory.\n 92 \n 93 Args:\n 94 url_to_retrieve: string. The url from which package is to be\n 95 downloaded.\n 96 filename: string. The name of the tar file.\n 97 """\n 98 common.url_retrieve(url_to_retrieve, filename)\n 99 tar = tarfile.open(name=filename)\n 100 tar.extractall(path=common.OPPIA_TOOLS_DIR)\n 101 tar.close()\n 102 rename_yarn_folder(filename, common.OPPIA_TOOLS_DIR)\n 103 os.remove(filename)\n 104 \n 105 \n 106 def rename_yarn_folder(filename: str, path: str) -> None:\n 107 """Removes the `v` from the yarn folder name.\n 108 \n 109 Args:\n 110 filename: string. The name of the tar file.\n 111 path: string. The path of the yarn file.\n 112 """\n 113 if 'yarn' in filename:\n 114 old_name = filename.split('.tar.gz')[0]\n 115 new_name = ''.join(old_name.split('v'))\n 116 os.rename(path + '/' + old_name, path + '/' + new_name)\n 117 \n 118 \n 119 def download_and_install_node() -> None:\n 120 """Download and install node to Oppia tools directory."""\n 121 outfile_name = 'node-download'\n 122 \n 123 if common.is_windows_os():\n 124 if common.is_x64_architecture():\n 125 architecture = 'x64'\n 126 else:\n 127 architecture = 'x86'\n 128 \n 129 extension = '.zip'\n 130 node_file_name = 'node-v%s-win-%s' % (\n 131 common.NODE_VERSION, architecture)\n 132 url_to_retrieve = 'https://nodejs.org/dist/v%s/%s%s' % (\n 133 common.NODE_VERSION, node_file_name, extension)\n 134 common.url_retrieve(url_to_retrieve, outfile_name)\n 135 subprocess.check_call(\n 136 ['powershell.exe', '-c', 'expand-archive',\n 137 outfile_name, '-DestinationPath',\n 138 common.OPPIA_TOOLS_DIR])\n 139 else:\n 140 extension = '.tar.gz'\n 141 if common.is_x64_architecture():\n 142 if common.is_mac_os():\n 143 node_file_name = 'node-v%s-darwin-x64' % (common.NODE_VERSION)\n 144 elif common.is_linux_os():\n 145 node_file_name = 'node-v%s-linux-x64' % (common.NODE_VERSION)\n 146 # Oppia only suppports windows, mac and linux operating systems.\n 147 else:\n 148 raise Exception(\n 149 'System\'s Operating System is not compatible.')\n 150 else:\n 151 node_file_name = 'node-v%s' % common.NODE_VERSION\n 152 download_and_install_package(\n 153 'https://nodejs.org/dist/v%s/%s%s' % (\n 154 common.NODE_VERSION, node_file_name, extension),\n 155 outfile_name)\n 156 os.rename(\n 157 os.path.join(common.OPPIA_TOOLS_DIR, node_file_name),\n 158 common.NODE_PATH)\n 159 if node_file_name == 'node-v%s' % common.NODE_VERSION:\n 160 with common.CD(common.NODE_PATH):\n 161 subprocess.check_call(['./configure'])\n 162 subprocess.check_call(['make'])\n 163 \n 164 \n 165 def main(args: Optional[List[str]] = None) -> None:\n 166 """Runs the script to setup Oppia."""\n 167 unused_parsed_args = _PARSER.parse_args(args=args)\n 168 test_python_version()\n 169 \n 170 # The second option allows this script to also be run from deployment\n 171 # folders.\n 172 if not os.getcwd().endswith(('oppia', 'deploy-')):\n 173 print('')\n 174 print('WARNING This script should be run from the oppia/ root folder.')\n 175 print('')\n 176 raise Exception('Invalid root directory.')\n 177 \n 178 # Set COMMON_DIR to the absolute path of the directory above OPPIA_DIR. This\n 179 # is necessary becaue COMMON_DIR (or subsequent variables which refer to it)\n 180 # may use it in a situation where relative paths won't work as expected(such\n 181 # as $PYTHONPATH).\n 182 create_directory(common.OPPIA_TOOLS_DIR)\n 183 create_directory(common.THIRD_PARTY_DIR)\n 184 common.create_readme(\n 185 common.THIRD_PARTY_DIR,\n 186 'This folder contains third party libraries used in Oppia codebase.\\n'\n 187 'You can regenerate this folder by deleting it and then running '\n 188 'the start.py script.\\n')\n 189 create_directory(common.NODE_MODULES_PATH)\n 190 common.create_readme(\n 191 common.NODE_MODULES_PATH,\n 192 'This folder contains node utilities used in Oppia codebase.\\n'\n 193 'You can regenerate this folder by deleting it and then running '\n 194 'the start.py script.\\n')\n 195 \n 196 # Download and install node.js.\n 197 print('Checking if node.js is installed in %s' % common.OPPIA_TOOLS_DIR)\n 198 if not os.path.exists(common.NODE_PATH):\n 199 print('Installing Node.js')\n 200 download_and_install_node()\n 201 # Change ownership of node_modules.\n 202 # Note: on some machines, these commands seem to take quite a long time.\n 203 if not common.is_windows_os():\n 204 common.recursive_chown(common.NODE_MODULES_PATH, os.getuid(), -1)\n 205 common.recursive_chmod(common.NODE_MODULES_PATH, 0o744)\n 206 \n 207 # Download and install yarn.\n 208 print('Checking if yarn is installed in %s' % common.OPPIA_TOOLS_DIR)\n 209 if not os.path.exists(common.YARN_PATH):\n 210 print('Removing package-lock.json')\n 211 clean.delete_file('package-lock.json')\n 212 common.print_each_string_after_two_new_lines([\n 213 'Installing yarn',\n 214 'WARNING: Please note that Oppia uses Yarn to manage node packages',\n 215 'do *NOT* use npm. For more information on how to use yarn,',\n 216 'visit https://yarnpkg.com/en/docs/usage.'])\n 217 \n 218 # NB: Update .yarnrc if the yarn version below is changed.\n 219 yarn_file_name = 'yarn-v%s.tar.gz' % common.YARN_VERSION\n 220 download_and_install_package(\n 221 'https://github.com/yarnpkg/yarn/releases/download/v%s/%s'\n 222 % (common.YARN_VERSION, yarn_file_name), yarn_file_name)\n 223 \n 224 print('Environment setup completed.')\n 225 \n 226 \n 227 # The 'no coverage' pragma is used as this line is un-testable. This is because\n 228 # it will only be called when setup.py is used as a script.\n 229 if __name__ == '__main__': # pragma: no cover\n 230 main()\n 231 \n```\n--- END FILES ---\n\nPlease first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example:\n\n```diff\ndiff --git a/examples/server_async.py b/examples/server_async.py\n--- a/examples/server_async.py\n+++ b/examples/server_async.py\n@@ -313,4 +313,4 @@\n if __name__ == "__main__":\n- asyncio.run(run_async_server("."), debug=True)\n+ asyncio.run(run_async_server(), debug=True)\ndiff --git a/examples/server_sync.py b/examples/server_sync.py\n--- a/examples/server_sync.py\n+++ b/examples/server_sync.py\n@@ -313,5 +313,5 @@\n if __name__ == "__main__":\n- server = run_sync_server(".")\n+ server = run_sync_server()\n server.shutdown()\n```
diff --git a/scripts/setup.py b/scripts/setup.py --- a/scripts/setup.py +++ b/scripts/setup.py @@ -50,8 +50,8 @@ # if it does not match the expected prefix. def test_python_version() -> None: running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info) - if running_python_version != '3.8.12': - print('Please use Python 3.8.12. Exiting...') + if running_python_version != '3.8.15': + print('Please use Python 3.8.15. Exiting...') # If OS is Windows, print helpful error message about adding Python to # path. if common.is_windows_os():
{"golden_diff": "diff --git a/scripts/setup.py b/scripts/setup.py\n--- a/scripts/setup.py\n+++ b/scripts/setup.py\n@@ -50,8 +50,8 @@\n # if it does not match the expected prefix.\n def test_python_version() -> None:\n running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info)\n- if running_python_version != '3.8.12':\n- print('Please use Python 3.8.12. Exiting...')\n+ if running_python_version != '3.8.15':\n+ print('Please use Python 3.8.15. Exiting...')\n # If OS is Windows, print helpful error message about adding Python to\n # path.\n if common.is_windows_os():\n", "issue": "[Feature Request]: Consider allowing Python version to be 3.8.x where x >= 12\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### A clear and concise description of what you want to happen.\n\nIn https://github.com/oppia/oppia/issues/16436, @sagangwee noted that he couldn't get Python 3.8.12 to work on his machine, but Python 3.8.13 worked fine. We might want to consider allowing 3.8.13 etc. in the start script checks for local developers.\n\n### Describe the solution you'd like\n\nConsider expanding the start.py script checks to include later versions of Python that are still on the 3.8.x series.\n\n### Describe alternatives you've considered\n\nWe could leave the existing behaviour as-is, but this poses serious problems for developers who can't install 3.8.12. \n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Copyright 2019 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the 'License');\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an 'AS-IS' BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python execution environent set up for all scripts.\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport os\nimport subprocess\nimport sys\nimport tarfile\n\nfrom typing import Final, List, Optional\n\nfrom . import clean\nfrom . import common\n\n_PARSER: Final = argparse.ArgumentParser(\n description=\"\"\"\nPython execution environent set up for all scripts.\n\"\"\")\n\n\ndef create_directory(directory_path: str) -> None:\n \"\"\"Creates a new directory. Does not do anything if directory already\n exists.\n\n Args:\n directory_path: str. Directory path to be created.\n \"\"\"\n if os.path.exists(directory_path):\n return\n os.makedirs(directory_path)\n\n\n# This function takes a command for python as its only input.\n# It checks this input for a specific version of python and returns false\n# if it does not match the expected prefix.\ndef test_python_version() -> None:\n running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info)\n if running_python_version != '3.8.12':\n print('Please use Python 3.8.12. Exiting...')\n # If OS is Windows, print helpful error message about adding Python to\n # path.\n if common.is_windows_os():\n common.print_each_string_after_two_new_lines([\n 'It looks like you are using Windows. 
If you have Python '\n 'installed,',\n 'make sure it is in your PATH and that PYTHONPATH is set.',\n 'If you have two versions of Python (ie, Python 2.7 and 3), '\n 'specify 2.7 before other versions of Python when setting the '\n 'PATH.',\n 'Here are some helpful articles:',\n 'http://docs.python-guide.org/en/latest/starting/install/win/',\n 'https://stackoverflow.com/questions/3701646/how-to-add-to-the-'\n 'pythonpath-in-windows-7'])\n # Exit when no suitable Python environment can be found.\n raise Exception('No suitable python version found.')\n\n # Verify that Python 2 is available. Python 2 is needed for the\n # app_devserver. See the Google Cloud docs:\n # https://cloud.google.com/appengine/docs/standard/python3/testing-and-deploying-your-app#local-dev-server\n return_code = subprocess.call(\n 'python2 -V', stderr=subprocess.DEVNULL, shell=True\n )\n if return_code != 0:\n print(\n '\\033[91m'\n 'The Oppia server needs Python 2 to be installed. '\n 'Please follow the instructions at '\n 'https://github.com/oppia/oppia/wiki/Troubleshooting#'\n 'python-2-is-not-available to fix this.'\n '\\033[0m'\n )\n sys.exit(1)\n\n\ndef download_and_install_package(url_to_retrieve: str, filename: str) -> None:\n \"\"\"Downloads and installs package in Oppia tools directory.\n\n Args:\n url_to_retrieve: string. The url from which package is to be\n downloaded.\n filename: string. The name of the tar file.\n \"\"\"\n common.url_retrieve(url_to_retrieve, filename)\n tar = tarfile.open(name=filename)\n tar.extractall(path=common.OPPIA_TOOLS_DIR)\n tar.close()\n rename_yarn_folder(filename, common.OPPIA_TOOLS_DIR)\n os.remove(filename)\n\n\ndef rename_yarn_folder(filename: str, path: str) -> None:\n \"\"\"Removes the `v` from the yarn folder name.\n\n Args:\n filename: string. The name of the tar file.\n path: string. 
The path of the yarn file.\n \"\"\"\n if 'yarn' in filename:\n old_name = filename.split('.tar.gz')[0]\n new_name = ''.join(old_name.split('v'))\n os.rename(path + '/' + old_name, path + '/' + new_name)\n\n\ndef download_and_install_node() -> None:\n \"\"\"Download and install node to Oppia tools directory.\"\"\"\n outfile_name = 'node-download'\n\n if common.is_windows_os():\n if common.is_x64_architecture():\n architecture = 'x64'\n else:\n architecture = 'x86'\n\n extension = '.zip'\n node_file_name = 'node-v%s-win-%s' % (\n common.NODE_VERSION, architecture)\n url_to_retrieve = 'https://nodejs.org/dist/v%s/%s%s' % (\n common.NODE_VERSION, node_file_name, extension)\n common.url_retrieve(url_to_retrieve, outfile_name)\n subprocess.check_call(\n ['powershell.exe', '-c', 'expand-archive',\n outfile_name, '-DestinationPath',\n common.OPPIA_TOOLS_DIR])\n else:\n extension = '.tar.gz'\n if common.is_x64_architecture():\n if common.is_mac_os():\n node_file_name = 'node-v%s-darwin-x64' % (common.NODE_VERSION)\n elif common.is_linux_os():\n node_file_name = 'node-v%s-linux-x64' % (common.NODE_VERSION)\n # Oppia only suppports windows, mac and linux operating systems.\n else:\n raise Exception(\n 'System\\'s Operating System is not compatible.')\n else:\n node_file_name = 'node-v%s' % common.NODE_VERSION\n download_and_install_package(\n 'https://nodejs.org/dist/v%s/%s%s' % (\n common.NODE_VERSION, node_file_name, extension),\n outfile_name)\n os.rename(\n os.path.join(common.OPPIA_TOOLS_DIR, node_file_name),\n common.NODE_PATH)\n if node_file_name == 'node-v%s' % common.NODE_VERSION:\n with common.CD(common.NODE_PATH):\n subprocess.check_call(['./configure'])\n subprocess.check_call(['make'])\n\n\ndef main(args: Optional[List[str]] = None) -> None:\n \"\"\"Runs the script to setup Oppia.\"\"\"\n unused_parsed_args = _PARSER.parse_args(args=args)\n test_python_version()\n\n # The second option allows this script to also be run from deployment\n # folders.\n if not os.getcwd().endswith(('oppia', 'deploy-')):\n print('')\n print('WARNING This script should be run from the oppia/ root folder.')\n print('')\n raise Exception('Invalid root directory.')\n\n # Set COMMON_DIR to the absolute path of the directory above OPPIA_DIR. 
This\n # is necessary becaue COMMON_DIR (or subsequent variables which refer to it)\n # may use it in a situation where relative paths won't work as expected(such\n # as $PYTHONPATH).\n create_directory(common.OPPIA_TOOLS_DIR)\n create_directory(common.THIRD_PARTY_DIR)\n common.create_readme(\n common.THIRD_PARTY_DIR,\n 'This folder contains third party libraries used in Oppia codebase.\\n'\n 'You can regenerate this folder by deleting it and then running '\n 'the start.py script.\\n')\n create_directory(common.NODE_MODULES_PATH)\n common.create_readme(\n common.NODE_MODULES_PATH,\n 'This folder contains node utilities used in Oppia codebase.\\n'\n 'You can regenerate this folder by deleting it and then running '\n 'the start.py script.\\n')\n\n # Download and install node.js.\n print('Checking if node.js is installed in %s' % common.OPPIA_TOOLS_DIR)\n if not os.path.exists(common.NODE_PATH):\n print('Installing Node.js')\n download_and_install_node()\n # Change ownership of node_modules.\n # Note: on some machines, these commands seem to take quite a long time.\n if not common.is_windows_os():\n common.recursive_chown(common.NODE_MODULES_PATH, os.getuid(), -1)\n common.recursive_chmod(common.NODE_MODULES_PATH, 0o744)\n\n # Download and install yarn.\n print('Checking if yarn is installed in %s' % common.OPPIA_TOOLS_DIR)\n if not os.path.exists(common.YARN_PATH):\n print('Removing package-lock.json')\n clean.delete_file('package-lock.json')\n common.print_each_string_after_two_new_lines([\n 'Installing yarn',\n 'WARNING: Please note that Oppia uses Yarn to manage node packages',\n 'do *NOT* use npm. For more information on how to use yarn,',\n 'visit https://yarnpkg.com/en/docs/usage.'])\n\n # NB: Update .yarnrc if the yarn version below is changed.\n yarn_file_name = 'yarn-v%s.tar.gz' % common.YARN_VERSION\n download_and_install_package(\n 'https://github.com/yarnpkg/yarn/releases/download/v%s/%s'\n % (common.YARN_VERSION, yarn_file_name), yarn_file_name)\n\n print('Environment setup completed.')\n\n\n# The 'no coverage' pragma is used as this line is un-testable. This is because\n# it will only be called when setup.py is used as a script.\nif __name__ == '__main__': # pragma: no cover\n main()\n", "path": "scripts/setup.py"}], "after_files": [{"content": "# Copyright 2019 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the 'License');\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an 'AS-IS' BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python execution environent set up for all scripts.\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport os\nimport subprocess\nimport sys\nimport tarfile\n\nfrom typing import Final, List, Optional\n\nfrom . import clean\nfrom . import common\n\n_PARSER: Final = argparse.ArgumentParser(\n description=\"\"\"\nPython execution environent set up for all scripts.\n\"\"\")\n\n\ndef create_directory(directory_path: str) -> None:\n \"\"\"Creates a new directory. Does not do anything if directory already\n exists.\n\n Args:\n directory_path: str. 
Directory path to be created.\n \"\"\"\n if os.path.exists(directory_path):\n return\n os.makedirs(directory_path)\n\n\n# This function takes a command for python as its only input.\n# It checks this input for a specific version of python and returns false\n# if it does not match the expected prefix.\ndef test_python_version() -> None:\n running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info)\n if running_python_version != '3.8.15':\n print('Please use Python 3.8.15. Exiting...')\n # If OS is Windows, print helpful error message about adding Python to\n # path.\n if common.is_windows_os():\n common.print_each_string_after_two_new_lines([\n 'It looks like you are using Windows. If you have Python '\n 'installed,',\n 'make sure it is in your PATH and that PYTHONPATH is set.',\n 'If you have two versions of Python (ie, Python 2.7 and 3), '\n 'specify 2.7 before other versions of Python when setting the '\n 'PATH.',\n 'Here are some helpful articles:',\n 'http://docs.python-guide.org/en/latest/starting/install/win/',\n 'https://stackoverflow.com/questions/3701646/how-to-add-to-the-'\n 'pythonpath-in-windows-7'])\n # Exit when no suitable Python environment can be found.\n raise Exception('No suitable python version found.')\n\n # Verify that Python 2 is available. Python 2 is needed for the\n # app_devserver. See the Google Cloud docs:\n # https://cloud.google.com/appengine/docs/standard/python3/testing-and-deploying-your-app#local-dev-server\n return_code = subprocess.call(\n 'python2 -V', stderr=subprocess.DEVNULL, shell=True\n )\n if return_code != 0:\n print(\n '\\033[91m'\n 'The Oppia server needs Python 2 to be installed. '\n 'Please follow the instructions at '\n 'https://github.com/oppia/oppia/wiki/Troubleshooting#'\n 'python-2-is-not-available to fix this.'\n '\\033[0m'\n )\n sys.exit(1)\n\n\ndef download_and_install_package(url_to_retrieve: str, filename: str) -> None:\n \"\"\"Downloads and installs package in Oppia tools directory.\n\n Args:\n url_to_retrieve: string. The url from which package is to be\n downloaded.\n filename: string. The name of the tar file.\n \"\"\"\n common.url_retrieve(url_to_retrieve, filename)\n tar = tarfile.open(name=filename)\n tar.extractall(path=common.OPPIA_TOOLS_DIR)\n tar.close()\n rename_yarn_folder(filename, common.OPPIA_TOOLS_DIR)\n os.remove(filename)\n\n\ndef rename_yarn_folder(filename: str, path: str) -> None:\n \"\"\"Removes the `v` from the yarn folder name.\n\n Args:\n filename: string. The name of the tar file.\n path: string. 
The path of the yarn file.\n \"\"\"\n if 'yarn' in filename:\n old_name = filename.split('.tar.gz')[0]\n new_name = ''.join(old_name.split('v'))\n os.rename(path + '/' + old_name, path + '/' + new_name)\n\n\ndef download_and_install_node() -> None:\n \"\"\"Download and install node to Oppia tools directory.\"\"\"\n outfile_name = 'node-download'\n\n if common.is_windows_os():\n if common.is_x64_architecture():\n architecture = 'x64'\n else:\n architecture = 'x86'\n\n extension = '.zip'\n node_file_name = 'node-v%s-win-%s' % (\n common.NODE_VERSION, architecture)\n url_to_retrieve = 'https://nodejs.org/dist/v%s/%s%s' % (\n common.NODE_VERSION, node_file_name, extension)\n common.url_retrieve(url_to_retrieve, outfile_name)\n subprocess.check_call(\n ['powershell.exe', '-c', 'expand-archive',\n outfile_name, '-DestinationPath',\n common.OPPIA_TOOLS_DIR])\n else:\n extension = '.tar.gz'\n if common.is_x64_architecture():\n if common.is_mac_os():\n node_file_name = 'node-v%s-darwin-x64' % (common.NODE_VERSION)\n elif common.is_linux_os():\n node_file_name = 'node-v%s-linux-x64' % (common.NODE_VERSION)\n # Oppia only suppports windows, mac and linux operating systems.\n else:\n raise Exception(\n 'System\\'s Operating System is not compatible.')\n else:\n node_file_name = 'node-v%s' % common.NODE_VERSION\n download_and_install_package(\n 'https://nodejs.org/dist/v%s/%s%s' % (\n common.NODE_VERSION, node_file_name, extension),\n outfile_name)\n os.rename(\n os.path.join(common.OPPIA_TOOLS_DIR, node_file_name),\n common.NODE_PATH)\n if node_file_name == 'node-v%s' % common.NODE_VERSION:\n with common.CD(common.NODE_PATH):\n subprocess.check_call(['./configure'])\n subprocess.check_call(['make'])\n\n\ndef main(args: Optional[List[str]] = None) -> None:\n \"\"\"Runs the script to setup Oppia.\"\"\"\n unused_parsed_args = _PARSER.parse_args(args=args)\n test_python_version()\n\n # The second option allows this script to also be run from deployment\n # folders.\n if not os.getcwd().endswith(('oppia', 'deploy-')):\n print('')\n print('WARNING This script should be run from the oppia/ root folder.')\n print('')\n raise Exception('Invalid root directory.')\n\n # Set COMMON_DIR to the absolute path of the directory above OPPIA_DIR. 
This\n # is necessary becaue COMMON_DIR (or subsequent variables which refer to it)\n # may use it in a situation where relative paths won't work as expected(such\n # as $PYTHONPATH).\n create_directory(common.OPPIA_TOOLS_DIR)\n create_directory(common.THIRD_PARTY_DIR)\n common.create_readme(\n common.THIRD_PARTY_DIR,\n 'This folder contains third party libraries used in Oppia codebase.\\n'\n 'You can regenerate this folder by deleting it and then running '\n 'the start.py script.\\n')\n create_directory(common.NODE_MODULES_PATH)\n common.create_readme(\n common.NODE_MODULES_PATH,\n 'This folder contains node utilities used in Oppia codebase.\\n'\n 'You can regenerate this folder by deleting it and then running '\n 'the start.py script.\\n')\n\n # Download and install node.js.\n print('Checking if node.js is installed in %s' % common.OPPIA_TOOLS_DIR)\n if not os.path.exists(common.NODE_PATH):\n print('Installing Node.js')\n download_and_install_node()\n # Change ownership of node_modules.\n # Note: on some machines, these commands seem to take quite a long time.\n if not common.is_windows_os():\n common.recursive_chown(common.NODE_MODULES_PATH, os.getuid(), -1)\n common.recursive_chmod(common.NODE_MODULES_PATH, 0o744)\n\n # Download and install yarn.\n print('Checking if yarn is installed in %s' % common.OPPIA_TOOLS_DIR)\n if not os.path.exists(common.YARN_PATH):\n print('Removing package-lock.json')\n clean.delete_file('package-lock.json')\n common.print_each_string_after_two_new_lines([\n 'Installing yarn',\n 'WARNING: Please note that Oppia uses Yarn to manage node packages',\n 'do *NOT* use npm. For more information on how to use yarn,',\n 'visit https://yarnpkg.com/en/docs/usage.'])\n\n # NB: Update .yarnrc if the yarn version below is changed.\n yarn_file_name = 'yarn-v%s.tar.gz' % common.YARN_VERSION\n download_and_install_package(\n 'https://github.com/yarnpkg/yarn/releases/download/v%s/%s'\n % (common.YARN_VERSION, yarn_file_name), yarn_file_name)\n\n print('Environment setup completed.')\n\n\n# The 'no coverage' pragma is used as this line is un-testable. This is because\n# it will only be called when setup.py is used as a script.\nif __name__ == '__main__': # pragma: no cover\n main()\n", "path": "scripts/setup.py"}]}
3,159
171
gh_patches_debug_3397
rasdani/github-patches
git_diff
Netflix__lemur-238
We are currently solving the following issue within our repository. Here is the issue text:\n--- BEGIN ISSUE ---\nCreating an authority does not allow others with the role to issue certificates\nWhen creating an authority currently only the creator can see the authority, anyone with the owning role should be able to see and use the certificate.\n\nCurrently even when a valid role is assigned and the user can see the authority they cannot use it because the cannot access the authorities key.\n\n--- END ISSUE ---\n\nBelow are some code segments, each from a relevant file. One or more of these files may contain bugs.\n\n--- BEGIN FILES ---\nPath: `lemur/authorities/service.py`\nContent:\n```\n 1 """\n 2 .. module: lemur.authorities.service\n 3 :platform: Unix\n 4 :synopsis: This module contains all of the services level functions used to\n 5 administer authorities in Lemur\n 6 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n 7 :license: Apache, see LICENSE for more details.\n 8 .. moduleauthor:: Kevin Glisson <[email protected]>\n 9 \n 10 """\n 11 from flask import g\n 12 from flask import current_app\n 13 \n 14 from lemur import database\n 15 from lemur.authorities.models import Authority\n 16 from lemur.roles import service as role_service\n 17 from lemur.notifications import service as notification_service\n 18 \n 19 from lemur.roles.models import Role\n 20 from lemur.certificates.models import Certificate\n 21 \n 22 from lemur.plugins.base import plugins\n 23 \n 24 \n 25 def update(authority_id, description=None, owner=None, active=None, roles=None):\n 26 """\n 27 Update a an authority with new values.\n 28 \n 29 :param authority_id:\n 30 :param roles: roles that are allowed to use this authority\n 31 :return:\n 32 """\n 33 authority = get(authority_id)\n 34 if roles:\n 35 authority = database.update_list(authority, 'roles', Role, roles)\n 36 \n 37 if active:\n 38 authority.active = active\n 39 \n 40 authority.description = description\n 41 authority.owner = owner\n 42 return database.update(authority)\n 43 \n 44 \n 45 def create(kwargs):\n 46 """\n 47 Create a new authority.\n 48 \n 49 :return:\n 50 """\n 51 \n 52 issuer = plugins.get(kwargs.get('pluginName'))\n 53 \n 54 kwargs['creator'] = g.current_user.email\n 55 cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)\n 56 \n 57 cert = Certificate(cert_body, chain=intermediate)\n 58 cert.owner = kwargs['ownerEmail']\n 59 \n 60 if kwargs['caType'] == 'subca':\n 61 cert.description = "This is the ROOT certificate for the {0} sub certificate authority the parent \\\n 62 authority is {1}.".format(kwargs.get('caName'), kwargs.get('caParent'))\n 63 else:\n 64 cert.description = "This is the ROOT certificate for the {0} certificate authority.".format(\n 65 kwargs.get('caName')\n 66 )\n 67 \n 68 cert.user = g.current_user\n 69 \n 70 cert.notifications = notification_service.create_default_expiration_notifications(\n 71 'DEFAULT_SECURITY',\n 72 current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')\n 73 )\n 74 \n 75 # we create and attach any roles that the issuer gives us\n 76 role_objs = []\n 77 for r in issuer_roles:\n 78 \n 79 role = role_service.create(\n 80 r['name'],\n 81 password=r['password'],\n 82 description="{0} auto generated role".format(kwargs.get('pluginName')),\n 83 username=r['username'])\n 84 \n 85 # the user creating the authority should be able to administer it\n 86 if role.username == 'admin':\n 87 g.current_user.roles.append(role)\n 88 \n 89 role_objs.append(role)\n 90 \n 91 authority = Authority(\n 92 kwargs.get('caName'),\n 93 kwargs['ownerEmail'],\n 94 kwargs['pluginName'],\n 95 cert_body,\n 96 description=kwargs['caDescription'],\n 97 chain=intermediate,\n 98 roles=role_objs\n 99 )\n 100 \n 101 database.update(cert)\n 102 authority = database.create(authority)\n 103 \n 104 g.current_user.authorities.append(authority)\n 105 \n 106 return authority\n 107 \n 108 \n 109 def get_all():\n 110 """\n 111 Get all authorities that are currently in Lemur.\n 112 \n 113 :rtype : List\n 114 :return:\n 115 """\n 116 query = database.session_query(Authority)\n 117 return database.find_all(query, Authority, {}).all()\n 118 \n 119 \n 120 def get(authority_id):\n 121 """\n 122 Retrieves an authority given it's ID\n 123 \n 124 :param authority_id:\n 125 :return:\n 126 """\n 127 return database.get(Authority, authority_id)\n 128 \n 129 \n 130 def get_by_name(authority_name):\n 131 """\n 132 Retrieves an authority given it's name.\n 133 \n 134 :param authority_name:\n 135 :return:\n 136 """\n 137 return database.get(Authority, authority_name, field='name')\n 138 \n 139 \n 140 def get_authority_role(ca_name):\n 141 """\n 142 Attempts to get the authority role for a given ca uses current_user\n 143 as a basis for accomplishing that.\n 144 \n 145 :param ca_name:\n 146 """\n 147 if g.current_user.is_admin:\n 148 authority = get_by_name(ca_name)\n 149 # TODO we should pick admin ca roles for admin\n 150 return authority.roles[0]\n 151 else:\n 152 for role in g.current_user.roles:\n 153 if role.authority:\n 154 if role.authority.name == ca_name:\n 155 return role\n 156 \n 157 \n 158 def render(args):\n 159 """\n 160 Helper that helps us render the REST Api responses.\n 161 :param args:\n 162 :return:\n 163 """\n 164 query = database.session_query(Authority)\n 165 sort_by = args.pop('sort_by')\n 166 sort_dir = args.pop('sort_dir')\n 167 page = args.pop('page')\n 168 count = args.pop('count')\n 169 filt = args.pop('filter')\n 170 \n 171 if filt:\n 172 terms = filt.split(';')\n 173 if 'active' in filt: # this is really weird but strcmp seems to not work here??\n 174 query = query.filter(Authority.active == terms[1])\n 175 else:\n 176 query = database.filter(query, Authority, terms)\n 177 \n 178 # we make sure that a user can only use an authority they either own are are a member of - admins can see all\n 179 if not g.current_user.is_admin:\n 180 authority_ids = []\n 181 for role in g.current_user.roles:\n 182 if role.authority:\n 183 authority_ids.append(role.authority.id)\n 184 query = query.filter(Authority.id.in_(authority_ids))\n 185 \n 186 query = database.find_all(query, Authority, args)\n 187 \n 188 if sort_by and sort_dir:\n 189 query = database.sort(query, Authority, sort_by, sort_dir)\n 190 \n 191 return database.paginate(query, page, count)\n```\n--- END FILES ---\n\nPlease first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example:\n\n```diff\ndiff --git a/examples/server_async.py b/examples/server_async.py\n--- a/examples/server_async.py\n+++ b/examples/server_async.py\n@@ -313,4 +313,4 @@\n if __name__ == "__main__":\n- asyncio.run(run_async_server("."), debug=True)\n+ asyncio.run(run_async_server(), debug=True)\ndiff --git a/examples/server_sync.py b/examples/server_sync.py\n--- a/examples/server_sync.py\n+++ b/examples/server_sync.py\n@@ -313,5 +313,5 @@\n if __name__ == "__main__":\n- server = run_sync_server(".")\n+ server = run_sync_server()\n server.shutdown()\n```
diff --git a/lemur/authorities/service.py b/lemur/authorities/service.py --- a/lemur/authorities/service.py +++ b/lemur/authorities/service.py @@ -101,6 +101,10 @@ database.update(cert) authority = database.create(authority) + # the owning dl or role should have this authority associated with it + owner_role = role_service.get_by_name(kwargs['ownerEmail']) + owner_role.authority = authority + g.current_user.authorities.append(authority) return authority
{"golden_diff": "diff --git a/lemur/authorities/service.py b/lemur/authorities/service.py\n--- a/lemur/authorities/service.py\n+++ b/lemur/authorities/service.py\n@@ -101,6 +101,10 @@\n database.update(cert)\n authority = database.create(authority)\n \n+ # the owning dl or role should have this authority associated with it\n+ owner_role = role_service.get_by_name(kwargs['ownerEmail'])\n+ owner_role.authority = authority\n+\n g.current_user.authorities.append(authority)\n \n return authority\n", "issue": "Creating an authority does not allow others with the role to issue certificates\nWhen creating an authority currently only the creator can see the authority, anyone with the owning role should be able to see and use the certificate.\n\nCurrently even when a valid role is assigned and the user can see the authority they cannot use it because the cannot access the authorities key.\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.authorities.service\n :platform: Unix\n :synopsis: This module contains all of the services level functions used to\n administer authorities in Lemur\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\n\"\"\"\nfrom flask import g\nfrom flask import current_app\n\nfrom lemur import database\nfrom lemur.authorities.models import Authority\nfrom lemur.roles import service as role_service\nfrom lemur.notifications import service as notification_service\n\nfrom lemur.roles.models import Role\nfrom lemur.certificates.models import Certificate\n\nfrom lemur.plugins.base import plugins\n\n\ndef update(authority_id, description=None, owner=None, active=None, roles=None):\n \"\"\"\n Update a an authority with new values.\n\n :param authority_id:\n :param roles: roles that are allowed to use this authority\n :return:\n \"\"\"\n authority = get(authority_id)\n if roles:\n authority = database.update_list(authority, 'roles', Role, roles)\n\n if active:\n authority.active = active\n\n authority.description = description\n authority.owner = owner\n return database.update(authority)\n\n\ndef create(kwargs):\n \"\"\"\n Create a new authority.\n\n :return:\n \"\"\"\n\n issuer = plugins.get(kwargs.get('pluginName'))\n\n kwargs['creator'] = g.current_user.email\n cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)\n\n cert = Certificate(cert_body, chain=intermediate)\n cert.owner = kwargs['ownerEmail']\n\n if kwargs['caType'] == 'subca':\n cert.description = \"This is the ROOT certificate for the {0} sub certificate authority the parent \\\n authority is {1}.\".format(kwargs.get('caName'), kwargs.get('caParent'))\n else:\n cert.description = \"This is the ROOT certificate for the {0} certificate authority.\".format(\n kwargs.get('caName')\n )\n\n cert.user = g.current_user\n\n cert.notifications = notification_service.create_default_expiration_notifications(\n 'DEFAULT_SECURITY',\n current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')\n )\n\n # we create and attach any roles that the issuer gives us\n role_objs = []\n for r in issuer_roles:\n\n role = role_service.create(\n r['name'],\n password=r['password'],\n description=\"{0} auto generated role\".format(kwargs.get('pluginName')),\n username=r['username'])\n\n # the user creating the authority should be able to administer it\n if role.username == 'admin':\n g.current_user.roles.append(role)\n\n role_objs.append(role)\n\n authority = Authority(\n kwargs.get('caName'),\n 
kwargs['ownerEmail'],\n kwargs['pluginName'],\n cert_body,\n description=kwargs['caDescription'],\n chain=intermediate,\n roles=role_objs\n )\n\n database.update(cert)\n authority = database.create(authority)\n\n g.current_user.authorities.append(authority)\n\n return authority\n\n\ndef get_all():\n \"\"\"\n Get all authorities that are currently in Lemur.\n\n :rtype : List\n :return:\n \"\"\"\n query = database.session_query(Authority)\n return database.find_all(query, Authority, {}).all()\n\n\ndef get(authority_id):\n \"\"\"\n Retrieves an authority given it's ID\n\n :param authority_id:\n :return:\n \"\"\"\n return database.get(Authority, authority_id)\n\n\ndef get_by_name(authority_name):\n \"\"\"\n Retrieves an authority given it's name.\n\n :param authority_name:\n :return:\n \"\"\"\n return database.get(Authority, authority_name, field='name')\n\n\ndef get_authority_role(ca_name):\n \"\"\"\n Attempts to get the authority role for a given ca uses current_user\n as a basis for accomplishing that.\n\n :param ca_name:\n \"\"\"\n if g.current_user.is_admin:\n authority = get_by_name(ca_name)\n # TODO we should pick admin ca roles for admin\n return authority.roles[0]\n else:\n for role in g.current_user.roles:\n if role.authority:\n if role.authority.name == ca_name:\n return role\n\n\ndef render(args):\n \"\"\"\n Helper that helps us render the REST Api responses.\n :param args:\n :return:\n \"\"\"\n query = database.session_query(Authority)\n sort_by = args.pop('sort_by')\n sort_dir = args.pop('sort_dir')\n page = args.pop('page')\n count = args.pop('count')\n filt = args.pop('filter')\n\n if filt:\n terms = filt.split(';')\n if 'active' in filt: # this is really weird but strcmp seems to not work here??\n query = query.filter(Authority.active == terms[1])\n else:\n query = database.filter(query, Authority, terms)\n\n # we make sure that a user can only use an authority they either own are are a member of - admins can see all\n if not g.current_user.is_admin:\n authority_ids = []\n for role in g.current_user.roles:\n if role.authority:\n authority_ids.append(role.authority.id)\n query = query.filter(Authority.id.in_(authority_ids))\n\n query = database.find_all(query, Authority, args)\n\n if sort_by and sort_dir:\n query = database.sort(query, Authority, sort_by, sort_dir)\n\n return database.paginate(query, page, count)\n", "path": "lemur/authorities/service.py"}], "after_files": [{"content": "\"\"\"\n.. module: lemur.authorities.service\n :platform: Unix\n :synopsis: This module contains all of the services level functions used to\n administer authorities in Lemur\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. 
moduleauthor:: Kevin Glisson <[email protected]>\n\n\"\"\"\nfrom flask import g\nfrom flask import current_app\n\nfrom lemur import database\nfrom lemur.authorities.models import Authority\nfrom lemur.roles import service as role_service\nfrom lemur.notifications import service as notification_service\n\nfrom lemur.roles.models import Role\nfrom lemur.certificates.models import Certificate\n\nfrom lemur.plugins.base import plugins\n\n\ndef update(authority_id, description=None, owner=None, active=None, roles=None):\n \"\"\"\n Update a an authority with new values.\n\n :param authority_id:\n :param roles: roles that are allowed to use this authority\n :return:\n \"\"\"\n authority = get(authority_id)\n if roles:\n authority = database.update_list(authority, 'roles', Role, roles)\n\n if active:\n authority.active = active\n\n authority.description = description\n authority.owner = owner\n return database.update(authority)\n\n\ndef create(kwargs):\n \"\"\"\n Create a new authority.\n\n :return:\n \"\"\"\n\n issuer = plugins.get(kwargs.get('pluginName'))\n\n kwargs['creator'] = g.current_user.email\n cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)\n\n cert = Certificate(cert_body, chain=intermediate)\n cert.owner = kwargs['ownerEmail']\n\n if kwargs['caType'] == 'subca':\n cert.description = \"This is the ROOT certificate for the {0} sub certificate authority the parent \\\n authority is {1}.\".format(kwargs.get('caName'), kwargs.get('caParent'))\n else:\n cert.description = \"This is the ROOT certificate for the {0} certificate authority.\".format(\n kwargs.get('caName')\n )\n\n cert.user = g.current_user\n\n cert.notifications = notification_service.create_default_expiration_notifications(\n 'DEFAULT_SECURITY',\n current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')\n )\n\n # we create and attach any roles that the issuer gives us\n role_objs = []\n for r in issuer_roles:\n\n role = role_service.create(\n r['name'],\n password=r['password'],\n description=\"{0} auto generated role\".format(kwargs.get('pluginName')),\n username=r['username'])\n\n # the user creating the authority should be able to administer it\n if role.username == 'admin':\n g.current_user.roles.append(role)\n\n role_objs.append(role)\n\n authority = Authority(\n kwargs.get('caName'),\n kwargs['ownerEmail'],\n kwargs['pluginName'],\n cert_body,\n description=kwargs['caDescription'],\n chain=intermediate,\n roles=role_objs\n )\n\n database.update(cert)\n authority = database.create(authority)\n\n # the owning dl or role should have this authority associated with it\n owner_role = role_service.get_by_name(kwargs['ownerEmail'])\n owner_role.authority = authority\n\n g.current_user.authorities.append(authority)\n\n return authority\n\n\ndef get_all():\n \"\"\"\n Get all authorities that are currently in Lemur.\n\n :rtype : List\n :return:\n \"\"\"\n query = database.session_query(Authority)\n return database.find_all(query, Authority, {}).all()\n\n\ndef get(authority_id):\n \"\"\"\n Retrieves an authority given it's ID\n\n :param authority_id:\n :return:\n \"\"\"\n return database.get(Authority, authority_id)\n\n\ndef get_by_name(authority_name):\n \"\"\"\n Retrieves an authority given it's name.\n\n :param authority_name:\n :return:\n \"\"\"\n return database.get(Authority, authority_name, field='name')\n\n\ndef get_authority_role(ca_name):\n \"\"\"\n Attempts to get the authority role for a given ca uses current_user\n as a basis for accomplishing that.\n\n :param ca_name:\n \"\"\"\n if 
g.current_user.is_admin:\n authority = get_by_name(ca_name)\n # TODO we should pick admin ca roles for admin\n return authority.roles[0]\n else:\n for role in g.current_user.roles:\n if role.authority:\n if role.authority.name == ca_name:\n return role\n\n\ndef render(args):\n \"\"\"\n Helper that helps us render the REST Api responses.\n :param args:\n :return:\n \"\"\"\n query = database.session_query(Authority)\n sort_by = args.pop('sort_by')\n sort_dir = args.pop('sort_dir')\n page = args.pop('page')\n count = args.pop('count')\n filt = args.pop('filter')\n\n if filt:\n terms = filt.split(';')\n if 'active' in filt: # this is really weird but strcmp seems to not work here??\n query = query.filter(Authority.active == terms[1])\n else:\n query = database.filter(query, Authority, terms)\n\n # we make sure that a user can only use an authority they either own are are a member of - admins can see all\n if not g.current_user.is_admin:\n authority_ids = []\n for role in g.current_user.roles:\n if role.authority:\n authority_ids.append(role.authority.id)\n query = query.filter(Authority.id.in_(authority_ids))\n\n query = database.find_all(query, Authority, args)\n\n if sort_by and sort_dir:\n query = database.sort(query, Authority, sort_by, sort_dir)\n\n return database.paginate(query, page, count)\n", "path": "lemur/authorities/service.py"}]}
2,011
129
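The record above fixes Lemur's authority creation by associating the owner's role with the new authority after it is persisted. A distilled sketch of that one extra step, pulled out of `create()` for illustration — the `role_service`/`database` helpers are the ones named in the record, but the standalone function shape and the explicit persistence call are assumptions:

```python
def associate_owner_role(authority, owner_email, role_service, database):
    """Give the owner's role access to a freshly created authority.

    Mirrors the step added in the record's after_files: without it, only
    the creating user's object references the new authority, so members
    of the owning DL/role cannot see it.
    """
    owner_role = role_service.get_by_name(owner_email)  # role named after the owner DL
    owner_role.authority = authority                    # link role -> authority
    return database.update(owner_role)                  # assumed persistence call
```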
gh_patches_debug_11633
rasdani/github-patches
git_diff
pypi__warehouse-1181
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Errors in celery don't get sent to Sentry --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `warehouse/celery.py` Content: ``` 1 # Licensed under the Apache License, Version 2.0 (the "License"); 2 # you may not use this file except in compliance with the License. 3 # You may obtain a copy of the License at 4 # 5 # http://www.apache.org/licenses/LICENSE-2.0 6 # 7 # Unless required by applicable law or agreed to in writing, software 8 # distributed under the License is distributed on an "AS IS" BASIS, 9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 10 # See the License for the specific language governing permissions and 11 # limitations under the License. 12 13 import celery.backends 14 15 # We need to trick Celery into supporting rediss:// URLs which is how redis-py 16 # signals that you should use Redis with TLS. 17 celery.backends.BACKEND_ALIASES["rediss"] = "warehouse.celery:TLSRedisBackend" # noqa 18 19 from celery import Celery, Task 20 from celery.backends.redis import RedisBackend as _RedisBackend 21 from celery.signals import celeryd_init 22 from pyramid import scripting 23 from pyramid.threadlocal import get_current_request 24 25 from warehouse.config import Environment, configure 26 27 28 @celeryd_init.connect 29 def _configure_celery(*args, **kwargs): 30 configure() 31 32 33 class TLSRedisBackend(_RedisBackend): 34 35 def _params_from_url(self, url, defaults): 36 params = super()._params_from_url(url, defaults) 37 params.update({"connection_class": self.redis.SSLConnection}) 38 return params 39 40 41 class WarehouseTask(Task): 42 43 abstract = True 44 45 def __call__(self, *args, **kwargs): 46 registry = self.app.pyramid_config.registry 47 pyramid_env = scripting.prepare(registry=registry) 48 49 try: 50 return super().__call__(pyramid_env["request"], *args, **kwargs) 51 finally: 52 pyramid_env["closer"]() 53 54 def apply_async(self, *args, **kwargs): 55 # The API design of Celery makes this threadlocal pretty impossible to 56 # avoid :( 57 request = get_current_request() 58 59 # If for whatever reason we were unable to get a request we'll just 60 # skip this and call the original method to send this immediately. 61 if request is None or not hasattr(request, "tm"): 62 return super().apply_async(*args, **kwargs) 63 64 # This will break things that expect to get an AsyncResult because 65 # we're no longer going to be returning an async result from this when 66 # called from within a request, response cycle. Ideally we shouldn't be 67 # waiting for responses in a request/response cycle anyways though. 
68 request.tm.get().addAfterCommitHook( 69 self._after_commit_hook, 70 args=args, 71 kws=kwargs, 72 ) 73 74 def _after_commit_hook(self, success, *args, **kwargs): 75 if success: 76 super().apply_async(*args, **kwargs) 77 78 79 app = Celery("warehouse") 80 app.Task = WarehouseTask 81 82 83 task = app.task 84 85 86 def includeme(config): 87 s = config.registry.settings 88 app.pyramid_config = config 89 app.conf.update( 90 BROKER_URL=s["celery.broker_url"], 91 BROKER_USE_SSL=s["warehouse.env"] == Environment.production, 92 CELERY_DISABLE_RATE_LIMITS=True, 93 CELERY_RESULT_BACKEND=s["celery.result_url"], 94 CELERY_RESULT_SERIALIZER="json", 95 CELERY_TASK_SERIALIZER="json", 96 CELERY_ACCEPT_CONTENT=["json", "msgpack"], 97 CELERY_MESSAGE_COMPRESSION="gzip", 98 CELERY_QUEUE_HA_POLICY="all", 99 ) 100 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/warehouse/celery.py b/warehouse/celery.py --- a/warehouse/celery.py +++ b/warehouse/celery.py @@ -21,13 +21,16 @@ from celery.signals import celeryd_init from pyramid import scripting from pyramid.threadlocal import get_current_request +from raven.contrib.celery import register_signal, register_logger_signal from warehouse.config import Environment, configure @celeryd_init.connect def _configure_celery(*args, **kwargs): - configure() + config = configure() + register_logger_signal(config.registry["raven.client"]) + register_signal(config.registry["raven.client"]) class TLSRedisBackend(_RedisBackend):
{"golden_diff": "diff --git a/warehouse/celery.py b/warehouse/celery.py\n--- a/warehouse/celery.py\n+++ b/warehouse/celery.py\n@@ -21,13 +21,16 @@\n from celery.signals import celeryd_init\n from pyramid import scripting\n from pyramid.threadlocal import get_current_request\n+from raven.contrib.celery import register_signal, register_logger_signal\n \n from warehouse.config import Environment, configure\n \n \n @celeryd_init.connect\n def _configure_celery(*args, **kwargs):\n- configure()\n+ config = configure()\n+ register_logger_signal(config.registry[\"raven.client\"])\n+ register_signal(config.registry[\"raven.client\"])\n \n \n class TLSRedisBackend(_RedisBackend):\n", "issue": "Errors in celery don't get sent to Sentry\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport celery.backends\n\n# We need to trick Celery into supporting rediss:// URLs which is how redis-py\n# signals that you should use Redis with TLS.\ncelery.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n\nfrom celery import Celery, Task\nfrom celery.backends.redis import RedisBackend as _RedisBackend\nfrom celery.signals import celeryd_init\nfrom pyramid import scripting\nfrom pyramid.threadlocal import get_current_request\n\nfrom warehouse.config import Environment, configure\n\n\n@celeryd_init.connect\ndef _configure_celery(*args, **kwargs):\n configure()\n\n\nclass TLSRedisBackend(_RedisBackend):\n\n def _params_from_url(self, url, defaults):\n params = super()._params_from_url(url, defaults)\n params.update({\"connection_class\": self.redis.SSLConnection})\n return params\n\n\nclass WarehouseTask(Task):\n\n abstract = True\n\n def __call__(self, *args, **kwargs):\n registry = self.app.pyramid_config.registry\n pyramid_env = scripting.prepare(registry=registry)\n\n try:\n return super().__call__(pyramid_env[\"request\"], *args, **kwargs)\n finally:\n pyramid_env[\"closer\"]()\n\n def apply_async(self, *args, **kwargs):\n # The API design of Celery makes this threadlocal pretty impossible to\n # avoid :(\n request = get_current_request()\n\n # If for whatever reason we were unable to get a request we'll just\n # skip this and call the original method to send this immediately.\n if request is None or not hasattr(request, \"tm\"):\n return super().apply_async(*args, **kwargs)\n\n # This will break things that expect to get an AsyncResult because\n # we're no longer going to be returning an async result from this when\n # called from within a request, response cycle. 
Ideally we shouldn't be\n # waiting for responses in a request/response cycle anyways though.\n request.tm.get().addAfterCommitHook(\n self._after_commit_hook,\n args=args,\n kws=kwargs,\n )\n\n def _after_commit_hook(self, success, *args, **kwargs):\n if success:\n super().apply_async(*args, **kwargs)\n\n\napp = Celery(\"warehouse\")\napp.Task = WarehouseTask\n\n\ntask = app.task\n\n\ndef includeme(config):\n s = config.registry.settings\n app.pyramid_config = config\n app.conf.update(\n BROKER_URL=s[\"celery.broker_url\"],\n BROKER_USE_SSL=s[\"warehouse.env\"] == Environment.production,\n CELERY_DISABLE_RATE_LIMITS=True,\n CELERY_RESULT_BACKEND=s[\"celery.result_url\"],\n CELERY_RESULT_SERIALIZER=\"json\",\n CELERY_TASK_SERIALIZER=\"json\",\n CELERY_ACCEPT_CONTENT=[\"json\", \"msgpack\"],\n CELERY_MESSAGE_COMPRESSION=\"gzip\",\n CELERY_QUEUE_HA_POLICY=\"all\",\n )\n", "path": "warehouse/celery.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport celery.backends\n\n# We need to trick Celery into supporting rediss:// URLs which is how redis-py\n# signals that you should use Redis with TLS.\ncelery.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n\nfrom celery import Celery, Task\nfrom celery.backends.redis import RedisBackend as _RedisBackend\nfrom celery.signals import celeryd_init\nfrom pyramid import scripting\nfrom pyramid.threadlocal import get_current_request\nfrom raven.contrib.celery import register_signal, register_logger_signal\n\nfrom warehouse.config import Environment, configure\n\n\n@celeryd_init.connect\ndef _configure_celery(*args, **kwargs):\n config = configure()\n register_logger_signal(config.registry[\"raven.client\"])\n register_signal(config.registry[\"raven.client\"])\n\n\nclass TLSRedisBackend(_RedisBackend):\n\n def _params_from_url(self, url, defaults):\n params = super()._params_from_url(url, defaults)\n params.update({\"connection_class\": self.redis.SSLConnection})\n return params\n\n\nclass WarehouseTask(Task):\n\n abstract = True\n\n def __call__(self, *args, **kwargs):\n registry = self.app.pyramid_config.registry\n pyramid_env = scripting.prepare(registry=registry)\n\n try:\n return super().__call__(pyramid_env[\"request\"], *args, **kwargs)\n finally:\n pyramid_env[\"closer\"]()\n\n def apply_async(self, *args, **kwargs):\n # The API design of Celery makes this threadlocal pretty impossible to\n # avoid :(\n request = get_current_request()\n\n # If for whatever reason we were unable to get a request we'll just\n # skip this and call the original method to send this immediately.\n if request is None or not hasattr(request, \"tm\"):\n return super().apply_async(*args, **kwargs)\n\n # This will break things that expect to get an AsyncResult because\n # we're no longer going to be returning an async result from this when\n # called from within a request, response cycle. 
Ideally we shouldn't be\n # waiting for responses in a request/response cycle anyways though.\n request.tm.get().addAfterCommitHook(\n self._after_commit_hook,\n args=args,\n kws=kwargs,\n )\n\n def _after_commit_hook(self, success, *args, **kwargs):\n if success:\n super().apply_async(*args, **kwargs)\n\n\napp = Celery(\"warehouse\")\napp.Task = WarehouseTask\n\n\ntask = app.task\n\n\ndef includeme(config):\n s = config.registry.settings\n app.pyramid_config = config\n app.conf.update(\n BROKER_URL=s[\"celery.broker_url\"],\n BROKER_USE_SSL=s[\"warehouse.env\"] == Environment.production,\n CELERY_DISABLE_RATE_LIMITS=True,\n CELERY_RESULT_BACKEND=s[\"celery.result_url\"],\n CELERY_RESULT_SERIALIZER=\"json\",\n CELERY_TASK_SERIALIZER=\"json\",\n CELERY_ACCEPT_CONTENT=[\"json\", \"msgpack\"],\n CELERY_MESSAGE_COMPRESSION=\"gzip\",\n CELERY_QUEUE_HA_POLICY=\"all\",\n )\n", "path": "warehouse/celery.py"}]}
1,222
158
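The golden diff above wires Raven's Celery integration into the worker-startup signal so task failures reach Sentry. A condensed sketch of that hookup — it assumes, as the patch does, that `configure()` returns a config whose registry holds a ready `raven.Client` under `"raven.client"`:

```python
from celery.signals import celeryd_init
from raven.contrib.celery import register_logger_signal, register_signal

from warehouse.config import configure  # Warehouse's app factory, per the record


@celeryd_init.connect
def _configure_celery(*args, **kwargs):
    config = configure()
    # Route ERROR-level log records emitted inside tasks to Sentry...
    register_logger_signal(config.registry["raven.client"])
    # ...and capture unhandled task exceptions via Celery's task_failure signal.
    register_signal(config.registry["raven.client"])
```

`register_signal` and `register_logger_signal` are the stock `raven.contrib.celery` helpers; the only Warehouse-specific part is where the client lives.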
gh_patches_debug_29413
rasdani/github-patches
git_diff
freedomofpress__securedrop-7045
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- determine post-upgrade failure-mode for a SHA-1-signed submission key ## Description After #6948 (for #6399), redwood will refuse to encrypt to a submission key with a SHA-1 signature. After #6928, `securedrop-admin sdconfig` will reject a submission key with a SHA-1 signature. This check guarantees that new and reconfigured instances will comply with #6948. What will happen to an instance with a SHA-1-signed signature after upgrading to v2.7.0? ## Possible approaches | Option | Documentation changes | Code changes | Implication | | --- | --- | --- | --- | | Fail open, but log | optional | ✓ | Admin must monitor logs and/or OSSEC alerts. | | Fail open, but document | ✓ | ✗ | Admin must monitor release notes or check documentation. | | Fail closed | optional | ✓[1] | Admin can contact us for help. | **Notes:** 1. @legoktm observes that, without a code change to handle this case, Apache will come back up after reboot even if the `postinst` script fails under `unattended-upgrades`. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `securedrop/journalist.py` Content: ``` 1 from encryption import EncryptionManager, GpgKeyNotFoundError 2 from execution import asynchronous 3 from journalist_app import create_app 4 from models import Source 5 from sdconfig import SecureDropConfig 6 7 config = SecureDropConfig.get_current() 8 # app is imported by journalist.wsgi 9 app = create_app(config) 10 11 12 @asynchronous 13 def prime_keycache() -> None: 14 """Pre-load the source public keys into Redis.""" 15 with app.app_context(): 16 encryption_mgr = EncryptionManager.get_default() 17 for source in Source.query.filter_by(pending=False, deleted_at=None).all(): 18 try: 19 encryption_mgr.get_source_public_key(source.filesystem_id) 20 except GpgKeyNotFoundError: 21 pass 22 23 24 prime_keycache() 25 26 27 if __name__ == "__main__": # pragma: no cover 28 debug = getattr(config, "env", "prod") != "prod" 29 # nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host 30 app.run(debug=debug, host="0.0.0.0", port=8081) 31 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/securedrop/journalist.py b/securedrop/journalist.py --- a/securedrop/journalist.py +++ b/securedrop/journalist.py @@ -1,9 +1,13 @@ +import sys + from encryption import EncryptionManager, GpgKeyNotFoundError from execution import asynchronous from journalist_app import create_app from models import Source from sdconfig import SecureDropConfig +import redwood + config = SecureDropConfig.get_current() # app is imported by journalist.wsgi app = create_app(config) @@ -21,10 +25,28 @@ pass -prime_keycache() +def validate_journalist_key() -> None: + """Verify the journalist PGP key is valid""" + encryption_mgr = EncryptionManager.get_default() + # First check that we can read it + try: + journalist_key = encryption_mgr.get_journalist_public_key() + except Exception as e: + print(f"ERROR: Unable to read journalist public key: {e}", file=sys.stderr) + app.logger.error(f"ERROR: Unable to read journalist public key: {e}") + sys.exit(1) + # And then what we read is valid + try: + redwood.is_valid_public_key(journalist_key) + except redwood.RedwoodError as e: + print(f"ERROR: Journalist public key is not valid: {e}", file=sys.stderr) + app.logger.error(f"ERROR: Journalist public key is not valid: {e}") + sys.exit(1) if __name__ == "__main__": # pragma: no cover + validate_journalist_key() + prime_keycache() debug = getattr(config, "env", "prod") != "prod" # nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host app.run(debug=debug, host="0.0.0.0", port=8081)
{"golden_diff": "diff --git a/securedrop/journalist.py b/securedrop/journalist.py\n--- a/securedrop/journalist.py\n+++ b/securedrop/journalist.py\n@@ -1,9 +1,13 @@\n+import sys\n+\n from encryption import EncryptionManager, GpgKeyNotFoundError\n from execution import asynchronous\n from journalist_app import create_app\n from models import Source\n from sdconfig import SecureDropConfig\n \n+import redwood\n+\n config = SecureDropConfig.get_current()\n # app is imported by journalist.wsgi\n app = create_app(config)\n@@ -21,10 +25,28 @@\n pass\n \n \n-prime_keycache()\n+def validate_journalist_key() -> None:\n+ \"\"\"Verify the journalist PGP key is valid\"\"\"\n+ encryption_mgr = EncryptionManager.get_default()\n+ # First check that we can read it\n+ try:\n+ journalist_key = encryption_mgr.get_journalist_public_key()\n+ except Exception as e:\n+ print(f\"ERROR: Unable to read journalist public key: {e}\", file=sys.stderr)\n+ app.logger.error(f\"ERROR: Unable to read journalist public key: {e}\")\n+ sys.exit(1)\n+ # And then what we read is valid\n+ try:\n+ redwood.is_valid_public_key(journalist_key)\n+ except redwood.RedwoodError as e:\n+ print(f\"ERROR: Journalist public key is not valid: {e}\", file=sys.stderr)\n+ app.logger.error(f\"ERROR: Journalist public key is not valid: {e}\")\n+ sys.exit(1)\n \n \n if __name__ == \"__main__\": # pragma: no cover\n+ validate_journalist_key()\n+ prime_keycache()\n debug = getattr(config, \"env\", \"prod\") != \"prod\"\n # nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host\n app.run(debug=debug, host=\"0.0.0.0\", port=8081)\n", "issue": "determine post-upgrade failure-mode for a SHA-1-signed submission key\n## Description\r\n\r\nAfter #6948 (for #6399), redwood will refuse to encrypt to a submission key with a SHA-1 signature.\r\n\r\nAfter #6928, `securedrop-admin sdconfig` will reject a submission key with a SHA-1 signature. This check guarantees that new and reconfigured instances will comply with #6948.\r\n\r\nWhat will happen to an instance with a SHA-1-signed signature after upgrading to v2.7.0?\r\n\r\n## Possible approaches\r\n\r\n| Option | Documentation changes | Code changes | Implication |\r\n| --- | --- | --- | --- |\r\n| Fail open, but log | optional | \u2713 | Admin must monitor logs and/or OSSEC alerts. |\r\n| Fail open, but document | \u2713 | \u2717 | Admin must monitor release notes or check documentation. |\r\n| Fail closed | optional | \u2713[1] | Admin can contact us for help. |\r\n\r\n**Notes:**\r\n1. 
@legoktm observes that, without a code change to handle this case, Apache will come back up after reboot even if the `postinst` script fails under `unattended-upgrades`.\n", "before_files": [{"content": "from encryption import EncryptionManager, GpgKeyNotFoundError\nfrom execution import asynchronous\nfrom journalist_app import create_app\nfrom models import Source\nfrom sdconfig import SecureDropConfig\n\nconfig = SecureDropConfig.get_current()\n# app is imported by journalist.wsgi\napp = create_app(config)\n\n\n@asynchronous\ndef prime_keycache() -> None:\n \"\"\"Pre-load the source public keys into Redis.\"\"\"\n with app.app_context():\n encryption_mgr = EncryptionManager.get_default()\n for source in Source.query.filter_by(pending=False, deleted_at=None).all():\n try:\n encryption_mgr.get_source_public_key(source.filesystem_id)\n except GpgKeyNotFoundError:\n pass\n\n\nprime_keycache()\n\n\nif __name__ == \"__main__\": # pragma: no cover\n debug = getattr(config, \"env\", \"prod\") != \"prod\"\n # nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host\n app.run(debug=debug, host=\"0.0.0.0\", port=8081)\n", "path": "securedrop/journalist.py"}], "after_files": [{"content": "import sys\n\nfrom encryption import EncryptionManager, GpgKeyNotFoundError\nfrom execution import asynchronous\nfrom journalist_app import create_app\nfrom models import Source\nfrom sdconfig import SecureDropConfig\n\nimport redwood\n\nconfig = SecureDropConfig.get_current()\n# app is imported by journalist.wsgi\napp = create_app(config)\n\n\n@asynchronous\ndef prime_keycache() -> None:\n \"\"\"Pre-load the source public keys into Redis.\"\"\"\n with app.app_context():\n encryption_mgr = EncryptionManager.get_default()\n for source in Source.query.filter_by(pending=False, deleted_at=None).all():\n try:\n encryption_mgr.get_source_public_key(source.filesystem_id)\n except GpgKeyNotFoundError:\n pass\n\n\ndef validate_journalist_key() -> None:\n \"\"\"Verify the journalist PGP key is valid\"\"\"\n encryption_mgr = EncryptionManager.get_default()\n # First check that we can read it\n try:\n journalist_key = encryption_mgr.get_journalist_public_key()\n except Exception as e:\n print(f\"ERROR: Unable to read journalist public key: {e}\", file=sys.stderr)\n app.logger.error(f\"ERROR: Unable to read journalist public key: {e}\")\n sys.exit(1)\n # And then what we read is valid\n try:\n redwood.is_valid_public_key(journalist_key)\n except redwood.RedwoodError as e:\n print(f\"ERROR: Journalist public key is not valid: {e}\", file=sys.stderr)\n app.logger.error(f\"ERROR: Journalist public key is not valid: {e}\")\n sys.exit(1)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n validate_journalist_key()\n prime_keycache()\n debug = getattr(config, \"env\", \"prod\") != \"prod\"\n # nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host\n app.run(debug=debug, host=\"0.0.0.0\", port=8081)\n", "path": "securedrop/journalist.py"}]}
797
440
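Of the failure modes tabulated in the issue, the accepted patch above takes the fail-closed option: validate the journalist key before serving and exit non-zero if it is unreadable or invalid. A trimmed sketch of that startup gate — `redwood.is_valid_public_key` and the encryption manager are the SecureDrop APIs used in the diff, while the `encryption_mgr`/`logger` parameters are added here only to keep the sketch standalone:

```python
import sys

import redwood  # SecureDrop's Sequoia-PGP bindings, as used in the diff above


def validate_journalist_key(encryption_mgr, logger) -> None:
    """Fail closed: refuse to start when the journalist key is unusable."""
    try:
        journalist_key = encryption_mgr.get_journalist_public_key()
    except Exception as e:
        logger.error(f"ERROR: Unable to read journalist public key: {e}")
        sys.exit(1)
    try:
        # Rejects weak keys (e.g. SHA-1-signed ones) up front, instead of
        # letting encryption fail later mid-submission.
        redwood.is_valid_public_key(journalist_key)
    except redwood.RedwoodError as e:
        logger.error(f"ERROR: Journalist public key is not valid: {e}")
        sys.exit(1)
```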
gh_patches_debug_21071
rasdani/github-patches
git_diff
netbox-community__netbox-14608
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Datasources stuck in sync when using git + ssh from ./manage.py syncdatasource
### NetBox version

v3.6.1

### Python version

3.11

### Steps to Reproduce

In Data Sources
Add
Name: test
Type: git
URL: git@github.com:netbox-community/netbox.git
Create

docker compose exec netbox ./manage.py syncdatasource test




### Expected Behavior

Usually leads to some sort of ssh question or failure, and I would expect the exception to set the status to failed, and then be able to hit sync again.

I'm not sure exactly how NetBox works, but looking at one of the exceptions...
core.exceptions.SyncError: Fetching remote data failed (HangupException): 

class SyncError(Exception):
    pass

Does this mean the status is not being reset correctly due to the status being left as syncing?


### Observed Behavior

datasource.status = syncing in nbshell
'syncing' in gui
Sync option is now greyed out and cannot reset status without manually setting it in nbshell:

for d in DataSource.objects.filter(status='syncing'):
    d.status = 'failed'
    d.save()

--- END ISSUE ---


Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `netbox/core/management/commands/syncdatasource.py`
Content:
```
1 from django.core.management.base import BaseCommand, CommandError
2 
3 from core.models import DataSource
4 
5 
6 class Command(BaseCommand):
7     help = "Synchronize a data source from its remote upstream"
8 
9     def add_arguments(self, parser):
10         parser.add_argument('name', nargs='*', help="Data source(s) to synchronize")
11         parser.add_argument(
12             "--all", action='store_true', dest='sync_all',
13             help="Synchronize all data sources"
14         )
15 
16     def handle(self, *args, **options):
17 
18         # Find DataSources to sync
19         if options['sync_all']:
20             datasources = DataSource.objects.all()
21         elif options['name']:
22             datasources = DataSource.objects.filter(name__in=options['name'])
23             # Check for invalid names
24             found_names = {ds['name'] for ds in datasources.values('name')}
25             if invalid_names := set(options['name']) - found_names:
26                 raise CommandError(f"Invalid data source names: {', '.join(invalid_names)}")
27         else:
28             raise CommandError(f"Must specify at least one data source, or set --all.")
29 
30         if len(options['name']) > 1:
31             self.stdout.write(f"Syncing {len(datasources)} data sources.")
32 
33         for i, datasource in enumerate(datasources, start=1):
34             self.stdout.write(f"[{i}] Syncing {datasource}... ", ending='')
35             self.stdout.flush()
36             datasource.sync()
37             self.stdout.write(datasource.get_status_display())
38             self.stdout.flush()
39 
40         if len(options['name']) > 1:
41             self.stdout.write(f"Finished.")
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()

```
diff --git a/netbox/core/management/commands/syncdatasource.py b/netbox/core/management/commands/syncdatasource.py --- a/netbox/core/management/commands/syncdatasource.py +++ b/netbox/core/management/commands/syncdatasource.py @@ -1,5 +1,6 @@ from django.core.management.base import BaseCommand, CommandError +from core.choices import DataSourceStatusChoices from core.models import DataSource @@ -33,9 +34,13 @@ for i, datasource in enumerate(datasources, start=1): self.stdout.write(f"[{i}] Syncing {datasource}... ", ending='') self.stdout.flush() - datasource.sync() - self.stdout.write(datasource.get_status_display()) - self.stdout.flush() + try: + datasource.sync() + self.stdout.write(datasource.get_status_display()) + self.stdout.flush() + except Exception as e: + DataSource.objects.filter(pk=datasource.pk).update(status=DataSourceStatusChoices.FAILED) + raise e if len(options['name']) > 1: self.stdout.write(f"Finished.")
{"golden_diff": "diff --git a/netbox/core/management/commands/syncdatasource.py b/netbox/core/management/commands/syncdatasource.py\n--- a/netbox/core/management/commands/syncdatasource.py\n+++ b/netbox/core/management/commands/syncdatasource.py\n@@ -1,5 +1,6 @@\n from django.core.management.base import BaseCommand, CommandError\n \n+from core.choices import DataSourceStatusChoices\n from core.models import DataSource\n \n \n@@ -33,9 +34,13 @@\n for i, datasource in enumerate(datasources, start=1):\n self.stdout.write(f\"[{i}] Syncing {datasource}... \", ending='')\n self.stdout.flush()\n- datasource.sync()\n- self.stdout.write(datasource.get_status_display())\n- self.stdout.flush()\n+ try:\n+ datasource.sync()\n+ self.stdout.write(datasource.get_status_display())\n+ self.stdout.flush()\n+ except Exception as e:\n+ DataSource.objects.filter(pk=datasource.pk).update(status=DataSourceStatusChoices.FAILED)\n+ raise e\n \n if len(options['name']) > 1:\n self.stdout.write(f\"Finished.\")\n", "issue": "Datasources stuck in sync when using git + ssh from ./manage.py syncdatasource\n### NetBox version\n\nv3.6.1\n\n### Python version\n\n3.11\n\n### Steps to Reproduce\n\nIn Data Sources\r\nAdd\r\nName: test\r\nType: git\r\nURL: [email protected]:netbox-community/netbox.git\r\nCreate\r\n\r\ndocker compose exec netbox ./manage.py syncdatasource test\r\n\r\n\r\n\r\n\n\n### Expected Behavior\n\nUsually leads to some sort of ssh question or failure, and I would expect the exception to set the status to failed, and then be able to hit sync again.\r\n\r\nI'm not sure exactly how NetBox works, but looking at one of the exceptions...\r\ncore.exceptions.SyncError: Fetching remote data failed (HangupException): \r\n\r\nclass SyncError(Exception):\r\n pass\r\n\r\nDoes this mean the status is not being reset correctly due to the status being left as syncing?\r\n\r\n\n\n### Observed Behavior\n\ndatasource.status = syncing in nbshell\r\n'syncing' in gui\r\nSync option is now greyed out and cannot reset status without manually setting it in nbshell:\r\n\r\nfor d in DataSource.objects.filter(status='syncing'):\r\n d.status = 'failed'\r\n d.save()\r\n\n", "before_files": [{"content": "from django.core.management.base import BaseCommand, CommandError\n\nfrom core.models import DataSource\n\n\nclass Command(BaseCommand):\n help = \"Synchronize a data source from its remote upstream\"\n\n def add_arguments(self, parser):\n parser.add_argument('name', nargs='*', help=\"Data source(s) to synchronize\")\n parser.add_argument(\n \"--all\", action='store_true', dest='sync_all',\n help=\"Synchronize all data sources\"\n )\n\n def handle(self, *args, **options):\n\n # Find DataSources to sync\n if options['sync_all']:\n datasources = DataSource.objects.all()\n elif options['name']:\n datasources = DataSource.objects.filter(name__in=options['name'])\n # Check for invalid names\n found_names = {ds['name'] for ds in datasources.values('name')}\n if invalid_names := set(options['name']) - found_names:\n raise CommandError(f\"Invalid data source names: {', '.join(invalid_names)}\")\n else:\n raise CommandError(f\"Must specify at least one data source, or set --all.\")\n\n if len(options['name']) > 1:\n self.stdout.write(f\"Syncing {len(datasources)} data sources.\")\n\n for i, datasource in enumerate(datasources, start=1):\n self.stdout.write(f\"[{i}] Syncing {datasource}... 
\", ending='')\n self.stdout.flush()\n datasource.sync()\n self.stdout.write(datasource.get_status_display())\n self.stdout.flush()\n\n if len(options['name']) > 1:\n self.stdout.write(f\"Finished.\")\n", "path": "netbox/core/management/commands/syncdatasource.py"}], "after_files": [{"content": "from django.core.management.base import BaseCommand, CommandError\n\nfrom core.choices import DataSourceStatusChoices\nfrom core.models import DataSource\n\n\nclass Command(BaseCommand):\n help = \"Synchronize a data source from its remote upstream\"\n\n def add_arguments(self, parser):\n parser.add_argument('name', nargs='*', help=\"Data source(s) to synchronize\")\n parser.add_argument(\n \"--all\", action='store_true', dest='sync_all',\n help=\"Synchronize all data sources\"\n )\n\n def handle(self, *args, **options):\n\n # Find DataSources to sync\n if options['sync_all']:\n datasources = DataSource.objects.all()\n elif options['name']:\n datasources = DataSource.objects.filter(name__in=options['name'])\n # Check for invalid names\n found_names = {ds['name'] for ds in datasources.values('name')}\n if invalid_names := set(options['name']) - found_names:\n raise CommandError(f\"Invalid data source names: {', '.join(invalid_names)}\")\n else:\n raise CommandError(f\"Must specify at least one data source, or set --all.\")\n\n if len(options['name']) > 1:\n self.stdout.write(f\"Syncing {len(datasources)} data sources.\")\n\n for i, datasource in enumerate(datasources, start=1):\n self.stdout.write(f\"[{i}] Syncing {datasource}... \", ending='')\n self.stdout.flush()\n try:\n datasource.sync()\n self.stdout.write(datasource.get_status_display())\n self.stdout.flush()\n except Exception as e:\n DataSource.objects.filter(pk=datasource.pk).update(status=DataSourceStatusChoices.FAILED)\n raise e\n\n if len(options['name']) > 1:\n self.stdout.write(f\"Finished.\")\n", "path": "netbox/core/management/commands/syncdatasource.py"}]}
936
249
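The fix in the golden diff above is a classic reset-state-on-failure guard: the sync loop traps any exception, flips the stuck record back to FAILED with a direct queryset update, and re-raises so the command still fails visibly. A standalone sketch of that pattern, using the NetBox names from the diff (the `sync_one` wrapper is added here for illustration):

```python
from core.choices import DataSourceStatusChoices  # import added by the diff
from core.models import DataSource


def sync_one(datasource):
    """Sync a data source without ever leaving it stuck in 'syncing'."""
    try:
        datasource.sync()
    except Exception:
        # A queryset .update() writes straight to the database, bypassing
        # save() and signals, so the status flip persists even though the
        # original exception is re-raised below.
        DataSource.objects.filter(pk=datasource.pk).update(
            status=DataSourceStatusChoices.FAILED
        )
        raise
```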
gh_patches_debug_28788
rasdani/github-patches
git_diff
pyload__pyload-284
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Problem with zippyshare I have a problem with pyload and zippyshare. When i add a link from zippyshare the software respond: 'NoneType' object has no attribute 'group' XX MiB ZippyshareCom. I have add the swf path on configuration of zippyshare before the installation of sfwtools. I use windows 8 on 64 bit and i had try with youtube and it's ok.. the log is this: https://gist.github.com/djfelix91/6711122 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `module/plugins/hoster/ZippyshareCom.py` Content: ``` 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 4 import re 5 import subprocess 6 import tempfile 7 import os 8 9 from module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo, timestamp 10 from module.plugins.internal.CaptchaService import ReCaptcha 11 from module.common.json_layer import json_loads 12 13 14 class ZippyshareCom(SimpleHoster): 15 __name__ = "ZippyshareCom" 16 __type__ = "hoster" 17 __pattern__ = r"(?P<HOST>http://www\d{0,2}\.zippyshare.com)/v(?:/|iew.jsp.*key=)(?P<KEY>\d+)" 18 __version__ = "0.39" 19 __description__ = """Zippyshare.com Download Hoster""" 20 __author_name__ = ("spoob", "zoidberg", "stickell") 21 __author_mail__ = ("[email protected]", "[email protected]", "[email protected]") 22 __config__ = [("swfdump_path", "string", "Path to swfdump", "")] 23 24 FILE_NAME_PATTERN = r'>Name:</font>\s*<font [^>]*>(?P<N>[^<]+)</font><br />' 25 FILE_SIZE_PATTERN = r'>Size:</font>\s*<font [^>]*>(?P<S>[0-9.,]+) (?P<U>[kKMG]+)i?B</font><br />' 26 FILE_INFO_PATTERN = r'document\.getElementById\(\'dlbutton\'\)\.href = "[^;]*/(?P<N>[^"]+)";' 27 FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>' 28 29 DOWNLOAD_URL_PATTERN = r"<script type=\"text/javascript\">([^<]*?)document\.getElementById\('dlbutton'\).href = ([^;]+);" 30 SEED_PATTERN = r'swfobject.embedSWF\("([^"]+)".*?seed: (\d+)' 31 CAPTCHA_KEY_PATTERN = r'Recaptcha.create\("([^"]+)"' 32 CAPTCHA_SHORTENCODE_PATTERN = r"shortencode: '([^']+)'" 33 CAPTCHA_DOWNLOAD_PATTERN = r"document.location = '([^']+)'" 34 35 LAST_KNOWN_VALUES = (9, 2374755) # time = (seed * multiply) % modulo 36 37 def setup(self): 38 self.html = None 39 self.wantReconnect = False 40 self.multiDL = True 41 42 def handleFree(self): 43 url = self.get_file_url() 44 if not url: 45 self.fail("Download URL not found.") 46 self.logDebug("Download URL %s" % url) 47 self.download(url, cookies=True) 48 49 check = self.checkDownload({ 50 "swf_values": re.compile(self.SEED_PATTERN) 51 }) 52 53 if check == "swf_values": 54 swf_sts = self.getStorage("swf_sts") 55 if not swf_sts: 56 self.setStorage("swf_sts", 2) 57 self.setStorage("swf_stamp", 0) 58 elif swf_sts == '1': 59 self.setStorage("swf_sts", 2) 60 61 self.retry(max_tries=1) 62 63 def get_file_url(self): 64 """ returns the absolute downloadable filepath 65 """ 66 url = multiply = modulo = None 67 68 found = re.search(self.DOWNLOAD_URL_PATTERN, self.html, re.S) 69 if found: 70 #Method #1: JS eval 71 js = "\n".join(found.groups()) 72 regex = r"document.getElementById\(\\*'dlbutton\\*'\).omg" 73 omg = re.search(regex + r" = ([^;]+);", js).group(1) 74 js = re.sub(regex + r" = ([^;]+);", '', js) 75 js = re.sub(regex, omg, js) 76 url = self.js.eval(js) 77 else: 78 #Method #2: SWF eval 79 seed_search = re.search(self.SEED_PATTERN, self.html) 80 if seed_search: 81 swf_url, 
file_seed = seed_search.groups() 82 83 swf_sts = self.getStorage("swf_sts") 84 swf_stamp = int(self.getStorage("swf_stamp") or 0) 85 swf_version = self.getStorage("version") 86 self.logDebug("SWF", swf_sts, swf_stamp, swf_version) 87 88 if not swf_sts: 89 self.logDebug('Using default values') 90 multiply, modulo = self.LAST_KNOWN_VALUES 91 elif swf_sts == "1": 92 self.logDebug('Using stored values') 93 multiply = self.getStorage("multiply") 94 modulo = self.getStorage("modulo") 95 elif swf_sts == "2": 96 if swf_version < self.__version__: 97 self.logDebug('Reverting to default values') 98 self.setStorage("swf_sts", "") 99 self.setStorage("version", self.__version__) 100 multiply, modulo = self.LAST_KNOWN_VALUES 101 elif (swf_stamp + 3600000) < timestamp(): 102 swfdump = self.get_swfdump_path() 103 if swfdump: 104 multiply, modulo = self.get_swf_values(self.file_info['HOST'] + swf_url, swfdump) 105 else: 106 self.logWarning("Swfdump not found. Install swftools to bypass captcha.") 107 108 if multiply and modulo: 109 self.logDebug("TIME = (%s * %s) %s" % (file_seed, multiply, modulo)) 110 url = "/download?key=%s&time=%d" % (self.file_info['KEY'], 111 (int(file_seed) * int(multiply)) % int(modulo)) 112 113 if not url: 114 #Method #3: Captcha 115 url = self.do_recaptcha() 116 117 return self.file_info['HOST'] + url 118 119 def get_swf_values(self, swf_url, swfdump): 120 self.logDebug('Parsing values from %s' % swf_url) 121 multiply = modulo = None 122 123 fd, fpath = tempfile.mkstemp() 124 try: 125 swf_data = self.load(swf_url) 126 os.write(fd, swf_data) 127 128 p = subprocess.Popen([swfdump, '-a', fpath], stdout=subprocess.PIPE, stderr=subprocess.PIPE) 129 out, err = p.communicate() 130 131 if err: 132 self.logError(err) 133 else: 134 m_str = re.search(r'::break.*?{(.*?)}', out, re.S).group(1) 135 multiply = re.search(r'pushbyte (\d+)', m_str).group(1) 136 modulo = re.search(r'pushint (\d+)', m_str).group(1) 137 finally: 138 os.close(fd) 139 os.remove(fpath) 140 141 if multiply and modulo: 142 self.setStorage("multiply", multiply) 143 self.setStorage("modulo", modulo) 144 self.setStorage("swf_sts", 1) 145 self.setStorage("version", self.__version__) 146 else: 147 self.logError("Parsing SWF failed: swfdump not installed or plugin out of date") 148 self.setStorage("swf_sts", 2) 149 150 self.setStorage("swf_stamp", timestamp()) 151 152 return multiply, modulo 153 154 def get_swfdump_path(self): 155 # used for detecting if swfdump is installed 156 def is_exe(ppath): 157 return os.path.isfile(ppath) and os.access(ppath, os.X_OK) 158 159 program = self.getConfig("swfdump_path") or "swfdump" 160 swfdump = None 161 ppath, pname = os.path.split(program) 162 if ppath: 163 if is_exe(program): 164 swfdump = program 165 else: 166 for ppath in os.environ["PATH"].split(os.pathsep): 167 exe_file = os.path.join(ppath, program) 168 if is_exe(exe_file): 169 swfdump = exe_file 170 171 # return path to the executable or None if not found 172 return swfdump 173 174 def do_recaptcha(self): 175 self.logDebug('Trying to solve captcha') 176 captcha_key = re.search(self.CAPTCHA_KEY_PATTERN, self.html).group(1) 177 shortencode = re.search(self.CAPTCHA_SHORTENCODE_PATTERN, self.html).group(1) 178 url = re.search(self.CAPTCHA_DOWNLOAD_PATTERN, self.html).group(1) 179 180 recaptcha = ReCaptcha(self) 181 182 for i in range(5): 183 challenge, code = recaptcha.challenge(captcha_key) 184 185 response = json_loads(self.load(self.file_info['HOST'] + '/rest/captcha/test', 186 post={'challenge': challenge, 187 'response': code, 
188 'shortencode': shortencode})) 189 self.logDebug("reCaptcha response : %s" % response) 190 if response == True: 191 self.correctCaptcha() 192 break 193 else: 194 self.invalidCaptcha() 195 else: 196 self.fail("Invalid captcha") 197 198 return url 199 200 201 getInfo = create_getInfo(ZippyshareCom) 202 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/module/plugins/hoster/ZippyshareCom.py b/module/plugins/hoster/ZippyshareCom.py --- a/module/plugins/hoster/ZippyshareCom.py +++ b/module/plugins/hoster/ZippyshareCom.py @@ -26,7 +26,7 @@ FILE_INFO_PATTERN = r'document\.getElementById\(\'dlbutton\'\)\.href = "[^;]*/(?P<N>[^"]+)";' FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>' - DOWNLOAD_URL_PATTERN = r"<script type=\"text/javascript\">([^<]*?)document\.getElementById\('dlbutton'\).href = ([^;]+);" + DOWNLOAD_URL_PATTERN = r"<script type=\"text/javascript\">([^<]*?)(document\.getElementById\('dlbutton'\).href = [^;]+;)" SEED_PATTERN = r'swfobject.embedSWF\("([^"]+)".*?seed: (\d+)' CAPTCHA_KEY_PATTERN = r'Recaptcha.create\("([^"]+)"' CAPTCHA_SHORTENCODE_PATTERN = r"shortencode: '([^']+)'" @@ -69,10 +69,11 @@ if found: #Method #1: JS eval js = "\n".join(found.groups()) - regex = r"document.getElementById\(\\*'dlbutton\\*'\).omg" - omg = re.search(regex + r" = ([^;]+);", js).group(1) - js = re.sub(regex + r" = ([^;]+);", '', js) - js = re.sub(regex, omg, js) + d = re.search(r'span id="omg" class="(\d*)"', self.html).group(1) + regex = r"document.getElementById\('omg'\).getAttribute\('class'\)" + js = re.sub(regex, d, js) + regex = r"document.getElementById\(\\*'dlbutton\\*'\).href = " + js = re.sub(regex, '', js) url = self.js.eval(js) else: #Method #2: SWF eval
{"golden_diff": "diff --git a/module/plugins/hoster/ZippyshareCom.py b/module/plugins/hoster/ZippyshareCom.py\n--- a/module/plugins/hoster/ZippyshareCom.py\n+++ b/module/plugins/hoster/ZippyshareCom.py\n@@ -26,7 +26,7 @@\n FILE_INFO_PATTERN = r'document\\.getElementById\\(\\'dlbutton\\'\\)\\.href = \"[^;]*/(?P<N>[^\"]+)\";'\n FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>'\n \n- DOWNLOAD_URL_PATTERN = r\"<script type=\\\"text/javascript\\\">([^<]*?)document\\.getElementById\\('dlbutton'\\).href = ([^;]+);\"\n+ DOWNLOAD_URL_PATTERN = r\"<script type=\\\"text/javascript\\\">([^<]*?)(document\\.getElementById\\('dlbutton'\\).href = [^;]+;)\"\n SEED_PATTERN = r'swfobject.embedSWF\\(\"([^\"]+)\".*?seed: (\\d+)'\n CAPTCHA_KEY_PATTERN = r'Recaptcha.create\\(\"([^\"]+)\"'\n CAPTCHA_SHORTENCODE_PATTERN = r\"shortencode: '([^']+)'\"\n@@ -69,10 +69,11 @@\n if found:\n #Method #1: JS eval\n js = \"\\n\".join(found.groups())\n- regex = r\"document.getElementById\\(\\\\*'dlbutton\\\\*'\\).omg\"\n- omg = re.search(regex + r\" = ([^;]+);\", js).group(1)\n- js = re.sub(regex + r\" = ([^;]+);\", '', js)\n- js = re.sub(regex, omg, js)\n+ d = re.search(r'span id=\"omg\" class=\"(\\d*)\"', self.html).group(1)\n+ regex = r\"document.getElementById\\('omg'\\).getAttribute\\('class'\\)\"\n+ js = re.sub(regex, d, js)\n+ regex = r\"document.getElementById\\(\\\\*'dlbutton\\\\*'\\).href = \"\n+ js = re.sub(regex, '', js)\n url = self.js.eval(js)\n else:\n #Method #2: SWF eval\n", "issue": "Problem with zippyshare\nI have a problem with pyload and zippyshare.\n\nWhen i add a link from zippyshare the software respond: 'NoneType' object has no attribute 'group' XX MiB ZippyshareCom.\n\nI have add the swf path on configuration of zippyshare before the installation of sfwtools.\n\nI use windows 8 on 64 bit and i had try with youtube and it's ok..\n\nthe log is this:\n\nhttps://gist.github.com/djfelix91/6711122\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\nimport subprocess\nimport tempfile\nimport os\n\nfrom module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo, timestamp\nfrom module.plugins.internal.CaptchaService import ReCaptcha\nfrom module.common.json_layer import json_loads\n\n\nclass ZippyshareCom(SimpleHoster):\n __name__ = \"ZippyshareCom\"\n __type__ = \"hoster\"\n __pattern__ = r\"(?P<HOST>http://www\\d{0,2}\\.zippyshare.com)/v(?:/|iew.jsp.*key=)(?P<KEY>\\d+)\"\n __version__ = \"0.39\"\n __description__ = \"\"\"Zippyshare.com Download Hoster\"\"\"\n __author_name__ = (\"spoob\", \"zoidberg\", \"stickell\")\n __author_mail__ = (\"[email protected]\", \"[email protected]\", \"[email protected]\")\n __config__ = [(\"swfdump_path\", \"string\", \"Path to swfdump\", \"\")]\n\n FILE_NAME_PATTERN = r'>Name:</font>\\s*<font [^>]*>(?P<N>[^<]+)</font><br />'\n FILE_SIZE_PATTERN = r'>Size:</font>\\s*<font [^>]*>(?P<S>[0-9.,]+) (?P<U>[kKMG]+)i?B</font><br />'\n FILE_INFO_PATTERN = r'document\\.getElementById\\(\\'dlbutton\\'\\)\\.href = \"[^;]*/(?P<N>[^\"]+)\";'\n FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>'\n\n DOWNLOAD_URL_PATTERN = r\"<script type=\\\"text/javascript\\\">([^<]*?)document\\.getElementById\\('dlbutton'\\).href = ([^;]+);\"\n SEED_PATTERN = r'swfobject.embedSWF\\(\"([^\"]+)\".*?seed: (\\d+)'\n CAPTCHA_KEY_PATTERN = r'Recaptcha.create\\(\"([^\"]+)\"'\n CAPTCHA_SHORTENCODE_PATTERN = r\"shortencode: '([^']+)'\"\n CAPTCHA_DOWNLOAD_PATTERN = r\"document.location = '([^']+)'\"\n\n LAST_KNOWN_VALUES 
= (9, 2374755) # time = (seed * multiply) % modulo\n\n def setup(self):\n self.html = None\n self.wantReconnect = False\n self.multiDL = True\n\n def handleFree(self):\n url = self.get_file_url()\n if not url:\n self.fail(\"Download URL not found.\")\n self.logDebug(\"Download URL %s\" % url)\n self.download(url, cookies=True)\n\n check = self.checkDownload({\n \"swf_values\": re.compile(self.SEED_PATTERN)\n })\n\n if check == \"swf_values\":\n swf_sts = self.getStorage(\"swf_sts\")\n if not swf_sts:\n self.setStorage(\"swf_sts\", 2)\n self.setStorage(\"swf_stamp\", 0)\n elif swf_sts == '1':\n self.setStorage(\"swf_sts\", 2)\n\n self.retry(max_tries=1)\n\n def get_file_url(self):\n \"\"\" returns the absolute downloadable filepath\n \"\"\"\n url = multiply = modulo = None\n\n found = re.search(self.DOWNLOAD_URL_PATTERN, self.html, re.S)\n if found:\n #Method #1: JS eval\n js = \"\\n\".join(found.groups())\n regex = r\"document.getElementById\\(\\\\*'dlbutton\\\\*'\\).omg\"\n omg = re.search(regex + r\" = ([^;]+);\", js).group(1)\n js = re.sub(regex + r\" = ([^;]+);\", '', js)\n js = re.sub(regex, omg, js)\n url = self.js.eval(js)\n else:\n #Method #2: SWF eval\n seed_search = re.search(self.SEED_PATTERN, self.html)\n if seed_search:\n swf_url, file_seed = seed_search.groups()\n\n swf_sts = self.getStorage(\"swf_sts\")\n swf_stamp = int(self.getStorage(\"swf_stamp\") or 0)\n swf_version = self.getStorage(\"version\")\n self.logDebug(\"SWF\", swf_sts, swf_stamp, swf_version)\n\n if not swf_sts:\n self.logDebug('Using default values')\n multiply, modulo = self.LAST_KNOWN_VALUES\n elif swf_sts == \"1\":\n self.logDebug('Using stored values')\n multiply = self.getStorage(\"multiply\")\n modulo = self.getStorage(\"modulo\")\n elif swf_sts == \"2\":\n if swf_version < self.__version__:\n self.logDebug('Reverting to default values')\n self.setStorage(\"swf_sts\", \"\")\n self.setStorage(\"version\", self.__version__)\n multiply, modulo = self.LAST_KNOWN_VALUES\n elif (swf_stamp + 3600000) < timestamp():\n swfdump = self.get_swfdump_path()\n if swfdump:\n multiply, modulo = self.get_swf_values(self.file_info['HOST'] + swf_url, swfdump)\n else:\n self.logWarning(\"Swfdump not found. 
Install swftools to bypass captcha.\")\n\n if multiply and modulo:\n self.logDebug(\"TIME = (%s * %s) %s\" % (file_seed, multiply, modulo))\n url = \"/download?key=%s&time=%d\" % (self.file_info['KEY'],\n (int(file_seed) * int(multiply)) % int(modulo))\n\n if not url:\n #Method #3: Captcha\n url = self.do_recaptcha()\n\n return self.file_info['HOST'] + url\n\n def get_swf_values(self, swf_url, swfdump):\n self.logDebug('Parsing values from %s' % swf_url)\n multiply = modulo = None\n\n fd, fpath = tempfile.mkstemp()\n try:\n swf_data = self.load(swf_url)\n os.write(fd, swf_data)\n\n p = subprocess.Popen([swfdump, '-a', fpath], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n out, err = p.communicate()\n\n if err:\n self.logError(err)\n else:\n m_str = re.search(r'::break.*?{(.*?)}', out, re.S).group(1)\n multiply = re.search(r'pushbyte (\\d+)', m_str).group(1)\n modulo = re.search(r'pushint (\\d+)', m_str).group(1)\n finally:\n os.close(fd)\n os.remove(fpath)\n\n if multiply and modulo:\n self.setStorage(\"multiply\", multiply)\n self.setStorage(\"modulo\", modulo)\n self.setStorage(\"swf_sts\", 1)\n self.setStorage(\"version\", self.__version__)\n else:\n self.logError(\"Parsing SWF failed: swfdump not installed or plugin out of date\")\n self.setStorage(\"swf_sts\", 2)\n\n self.setStorage(\"swf_stamp\", timestamp())\n\n return multiply, modulo\n\n def get_swfdump_path(self):\n # used for detecting if swfdump is installed\n def is_exe(ppath):\n return os.path.isfile(ppath) and os.access(ppath, os.X_OK)\n\n program = self.getConfig(\"swfdump_path\") or \"swfdump\"\n swfdump = None\n ppath, pname = os.path.split(program)\n if ppath:\n if is_exe(program):\n swfdump = program\n else:\n for ppath in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(ppath, program)\n if is_exe(exe_file):\n swfdump = exe_file\n\n # return path to the executable or None if not found\n return swfdump\n\n def do_recaptcha(self):\n self.logDebug('Trying to solve captcha')\n captcha_key = re.search(self.CAPTCHA_KEY_PATTERN, self.html).group(1)\n shortencode = re.search(self.CAPTCHA_SHORTENCODE_PATTERN, self.html).group(1)\n url = re.search(self.CAPTCHA_DOWNLOAD_PATTERN, self.html).group(1)\n\n recaptcha = ReCaptcha(self)\n\n for i in range(5):\n challenge, code = recaptcha.challenge(captcha_key)\n\n response = json_loads(self.load(self.file_info['HOST'] + '/rest/captcha/test',\n post={'challenge': challenge,\n 'response': code,\n 'shortencode': shortencode}))\n self.logDebug(\"reCaptcha response : %s\" % response)\n if response == True:\n self.correctCaptcha()\n break\n else:\n self.invalidCaptcha()\n else:\n self.fail(\"Invalid captcha\")\n\n return url\n\n\ngetInfo = create_getInfo(ZippyshareCom)\n", "path": "module/plugins/hoster/ZippyshareCom.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\nimport subprocess\nimport tempfile\nimport os\n\nfrom module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo, timestamp\nfrom module.plugins.internal.CaptchaService import ReCaptcha\nfrom module.common.json_layer import json_loads\n\n\nclass ZippyshareCom(SimpleHoster):\n __name__ = \"ZippyshareCom\"\n __type__ = \"hoster\"\n __pattern__ = r\"(?P<HOST>http://www\\d{0,2}\\.zippyshare.com)/v(?:/|iew.jsp.*key=)(?P<KEY>\\d+)\"\n __version__ = \"0.39\"\n __description__ = \"\"\"Zippyshare.com Download Hoster\"\"\"\n __author_name__ = (\"spoob\", \"zoidberg\", \"stickell\")\n __author_mail__ = (\"[email protected]\", \"[email protected]\", \"[email 
protected]\")\n __config__ = [(\"swfdump_path\", \"string\", \"Path to swfdump\", \"\")]\n\n FILE_NAME_PATTERN = r'>Name:</font>\\s*<font [^>]*>(?P<N>[^<]+)</font><br />'\n FILE_SIZE_PATTERN = r'>Size:</font>\\s*<font [^>]*>(?P<S>[0-9.,]+) (?P<U>[kKMG]+)i?B</font><br />'\n FILE_INFO_PATTERN = r'document\\.getElementById\\(\\'dlbutton\\'\\)\\.href = \"[^;]*/(?P<N>[^\"]+)\";'\n FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>'\n\n DOWNLOAD_URL_PATTERN = r\"<script type=\\\"text/javascript\\\">([^<]*?)(document\\.getElementById\\('dlbutton'\\).href = [^;]+;)\"\n SEED_PATTERN = r'swfobject.embedSWF\\(\"([^\"]+)\".*?seed: (\\d+)'\n CAPTCHA_KEY_PATTERN = r'Recaptcha.create\\(\"([^\"]+)\"'\n CAPTCHA_SHORTENCODE_PATTERN = r\"shortencode: '([^']+)'\"\n CAPTCHA_DOWNLOAD_PATTERN = r\"document.location = '([^']+)'\"\n\n LAST_KNOWN_VALUES = (9, 2374755) # time = (seed * multiply) % modulo\n\n def setup(self):\n self.html = None\n self.wantReconnect = False\n self.multiDL = True\n\n def handleFree(self):\n url = self.get_file_url()\n if not url:\n self.fail(\"Download URL not found.\")\n self.logDebug(\"Download URL %s\" % url)\n self.download(url, cookies=True)\n\n check = self.checkDownload({\n \"swf_values\": re.compile(self.SEED_PATTERN)\n })\n\n if check == \"swf_values\":\n swf_sts = self.getStorage(\"swf_sts\")\n if not swf_sts:\n self.setStorage(\"swf_sts\", 2)\n self.setStorage(\"swf_stamp\", 0)\n elif swf_sts == '1':\n self.setStorage(\"swf_sts\", 2)\n\n self.retry(max_tries=1)\n\n def get_file_url(self):\n \"\"\" returns the absolute downloadable filepath\n \"\"\"\n url = multiply = modulo = None\n\n found = re.search(self.DOWNLOAD_URL_PATTERN, self.html, re.S)\n if found:\n #Method #1: JS eval\n js = \"\\n\".join(found.groups())\n d = re.search(r'span id=\"omg\" class=\"(\\d*)\"', self.html).group(1)\n regex = r\"document.getElementById\\('omg'\\).getAttribute\\('class'\\)\"\n js = re.sub(regex, d, js)\n regex = r\"document.getElementById\\(\\\\*'dlbutton\\\\*'\\).href = \"\n js = re.sub(regex, '', js)\n url = self.js.eval(js)\n else:\n #Method #2: SWF eval\n seed_search = re.search(self.SEED_PATTERN, self.html)\n if seed_search:\n swf_url, file_seed = seed_search.groups()\n\n swf_sts = self.getStorage(\"swf_sts\")\n swf_stamp = int(self.getStorage(\"swf_stamp\") or 0)\n swf_version = self.getStorage(\"version\")\n self.logDebug(\"SWF\", swf_sts, swf_stamp, swf_version)\n\n if not swf_sts:\n self.logDebug('Using default values')\n multiply, modulo = self.LAST_KNOWN_VALUES\n elif swf_sts == \"1\":\n self.logDebug('Using stored values')\n multiply = self.getStorage(\"multiply\")\n modulo = self.getStorage(\"modulo\")\n elif swf_sts == \"2\":\n if swf_version < self.__version__:\n self.logDebug('Reverting to default values')\n self.setStorage(\"swf_sts\", \"\")\n self.setStorage(\"version\", self.__version__)\n multiply, modulo = self.LAST_KNOWN_VALUES\n elif (swf_stamp + 3600000) < timestamp():\n swfdump = self.get_swfdump_path()\n if swfdump:\n multiply, modulo = self.get_swf_values(self.file_info['HOST'] + swf_url, swfdump)\n else:\n self.logWarning(\"Swfdump not found. 
Install swftools to bypass captcha.\")\n\n if multiply and modulo:\n self.logDebug(\"TIME = (%s * %s) %s\" % (file_seed, multiply, modulo))\n url = \"/download?key=%s&time=%d\" % (self.file_info['KEY'],\n (int(file_seed) * int(multiply)) % int(modulo))\n\n if not url:\n #Method #3: Captcha\n url = self.do_recaptcha()\n\n return self.file_info['HOST'] + url\n\n def get_swf_values(self, swf_url, swfdump):\n self.logDebug('Parsing values from %s' % swf_url)\n multiply = modulo = None\n\n fd, fpath = tempfile.mkstemp()\n try:\n swf_data = self.load(swf_url)\n os.write(fd, swf_data)\n\n p = subprocess.Popen([swfdump, '-a', fpath], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n out, err = p.communicate()\n\n if err:\n self.logError(err)\n else:\n m_str = re.search(r'::break.*?{(.*?)}', out, re.S).group(1)\n multiply = re.search(r'pushbyte (\\d+)', m_str).group(1)\n modulo = re.search(r'pushint (\\d+)', m_str).group(1)\n finally:\n os.close(fd)\n os.remove(fpath)\n\n if multiply and modulo:\n self.setStorage(\"multiply\", multiply)\n self.setStorage(\"modulo\", modulo)\n self.setStorage(\"swf_sts\", 1)\n self.setStorage(\"version\", self.__version__)\n else:\n self.logError(\"Parsing SWF failed: swfdump not installed or plugin out of date\")\n self.setStorage(\"swf_sts\", 2)\n\n self.setStorage(\"swf_stamp\", timestamp())\n\n return multiply, modulo\n\n def get_swfdump_path(self):\n # used for detecting if swfdump is installed\n def is_exe(ppath):\n return os.path.isfile(ppath) and os.access(ppath, os.X_OK)\n\n program = self.getConfig(\"swfdump_path\") or \"swfdump\"\n swfdump = None\n ppath, pname = os.path.split(program)\n if ppath:\n if is_exe(program):\n swfdump = program\n else:\n for ppath in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(ppath, program)\n if is_exe(exe_file):\n swfdump = exe_file\n\n # return path to the executable or None if not found\n return swfdump\n\n def do_recaptcha(self):\n self.logDebug('Trying to solve captcha')\n captcha_key = re.search(self.CAPTCHA_KEY_PATTERN, self.html).group(1)\n shortencode = re.search(self.CAPTCHA_SHORTENCODE_PATTERN, self.html).group(1)\n url = re.search(self.CAPTCHA_DOWNLOAD_PATTERN, self.html).group(1)\n\n recaptcha = ReCaptcha(self)\n\n for i in range(5):\n challenge, code = recaptcha.challenge(captcha_key)\n\n response = json_loads(self.load(self.file_info['HOST'] + '/rest/captcha/test',\n post={'challenge': challenge,\n 'response': code,\n 'shortencode': shortencode}))\n self.logDebug(\"reCaptcha response : %s\" % response)\n if response == True:\n self.correctCaptcha()\n break\n else:\n self.invalidCaptcha()\n else:\n self.fail(\"Invalid captcha\")\n\n return url\n\n\ngetInfo = create_getInfo(ZippyshareCom)\n", "path": "module/plugins/hoster/ZippyshareCom.py"}]}
2,871
473
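The record above patches pyload's Zippyshare hoster, whose core trick is deriving the `time` query parameter from constants embedded in the site's SWF file. A minimal sketch of that derivation follows, using the plugin's own `LAST_KNOWN_VALUES` fallback; the file key and seed below are hypothetical placeholders, not real Zippyshare values.

```python
# Sketch of the download-URL derivation used by the ZippyshareCom plugin above.
# MULTIPLY/MODULO are the plugin's hard-coded LAST_KNOWN_VALUES fallback;
# normally they are parsed out of the site's SWF bytecode via swfdump.
MULTIPLY, MODULO = 9, 2374755  # time = (seed * multiply) % modulo

def download_path(file_key: str, file_seed: int) -> str:
    time = (file_seed * MULTIPLY) % MODULO
    return "/download?key=%s&time=%d" % (file_key, time)

print(download_path("abc123", 42))  # -> /download?key=abc123&time=378
```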
gh_patches_debug_31038
rasdani/github-patches
git_diff
encode__httpx-2009
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- When using proxy the request downgrades to HTTP/1 Using the following code ``` client = httpx.Client(http2=True, proxies=proxies) response = client.get('https://www.truepeoplesearch.com/results?name=John', headers=headers) response.http_version ``` It appears that it returns HTTP/1 but if I were to make the same request without proxy, it shows HTTP/2 Version of HTTPX being used. ``` Name: httpx Version: 0.21.1 Summary: The next generation HTTP client. Home-page: https://github.com/encode/httpx Author: Tom Christie Author-email: [email protected] ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `httpx/_transports/default.py` Content: ``` 1 """ 2 Custom transports, with nicely configured defaults. 3 4 The following additional keyword arguments are currently supported by httpcore... 5 6 * uds: str 7 * local_address: str 8 * retries: int 9 10 Example usages... 11 12 # Disable HTTP/2 on a single specific domain. 13 mounts = { 14 "all://": httpx.HTTPTransport(http2=True), 15 "all://*example.org": httpx.HTTPTransport() 16 } 17 18 # Using advanced httpcore configuration, with connection retries. 19 transport = httpx.HTTPTransport(retries=1) 20 client = httpx.Client(transport=transport) 21 22 # Using advanced httpcore configuration, with unix domain sockets. 23 transport = httpx.HTTPTransport(uds="socket.uds") 24 client = httpx.Client(transport=transport) 25 """ 26 import contextlib 27 import typing 28 from types import TracebackType 29 30 import httpcore 31 32 from .._config import DEFAULT_LIMITS, Limits, Proxy, create_ssl_context 33 from .._exceptions import ( 34 ConnectError, 35 ConnectTimeout, 36 LocalProtocolError, 37 NetworkError, 38 PoolTimeout, 39 ProtocolError, 40 ProxyError, 41 ReadError, 42 ReadTimeout, 43 RemoteProtocolError, 44 TimeoutException, 45 UnsupportedProtocol, 46 WriteError, 47 WriteTimeout, 48 ) 49 from .._models import Request, Response 50 from .._types import AsyncByteStream, CertTypes, SyncByteStream, VerifyTypes 51 from .base import AsyncBaseTransport, BaseTransport 52 53 T = typing.TypeVar("T", bound="HTTPTransport") 54 A = typing.TypeVar("A", bound="AsyncHTTPTransport") 55 56 57 @contextlib.contextmanager 58 def map_httpcore_exceptions() -> typing.Iterator[None]: 59 try: 60 yield 61 except Exception as exc: # noqa: PIE-786 62 mapped_exc = None 63 64 for from_exc, to_exc in HTTPCORE_EXC_MAP.items(): 65 if not isinstance(exc, from_exc): 66 continue 67 # We want to map to the most specific exception we can find. 68 # Eg if `exc` is an `httpcore.ReadTimeout`, we want to map to 69 # `httpx.ReadTimeout`, not just `httpx.TimeoutException`. 
70 if mapped_exc is None or issubclass(to_exc, mapped_exc): 71 mapped_exc = to_exc 72 73 if mapped_exc is None: # pragma: nocover 74 raise 75 76 message = str(exc) 77 raise mapped_exc(message) from exc 78 79 80 HTTPCORE_EXC_MAP = { 81 httpcore.TimeoutException: TimeoutException, 82 httpcore.ConnectTimeout: ConnectTimeout, 83 httpcore.ReadTimeout: ReadTimeout, 84 httpcore.WriteTimeout: WriteTimeout, 85 httpcore.PoolTimeout: PoolTimeout, 86 httpcore.NetworkError: NetworkError, 87 httpcore.ConnectError: ConnectError, 88 httpcore.ReadError: ReadError, 89 httpcore.WriteError: WriteError, 90 httpcore.ProxyError: ProxyError, 91 httpcore.UnsupportedProtocol: UnsupportedProtocol, 92 httpcore.ProtocolError: ProtocolError, 93 httpcore.LocalProtocolError: LocalProtocolError, 94 httpcore.RemoteProtocolError: RemoteProtocolError, 95 } 96 97 98 class ResponseStream(SyncByteStream): 99 def __init__(self, httpcore_stream: typing.Iterable[bytes]): 100 self._httpcore_stream = httpcore_stream 101 102 def __iter__(self) -> typing.Iterator[bytes]: 103 with map_httpcore_exceptions(): 104 for part in self._httpcore_stream: 105 yield part 106 107 def close(self) -> None: 108 if hasattr(self._httpcore_stream, "close"): 109 self._httpcore_stream.close() # type: ignore 110 111 112 class HTTPTransport(BaseTransport): 113 def __init__( 114 self, 115 verify: VerifyTypes = True, 116 cert: CertTypes = None, 117 http1: bool = True, 118 http2: bool = False, 119 limits: Limits = DEFAULT_LIMITS, 120 trust_env: bool = True, 121 proxy: Proxy = None, 122 uds: str = None, 123 local_address: str = None, 124 retries: int = 0, 125 ) -> None: 126 ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env) 127 128 if proxy is None: 129 self._pool = httpcore.ConnectionPool( 130 ssl_context=ssl_context, 131 max_connections=limits.max_connections, 132 max_keepalive_connections=limits.max_keepalive_connections, 133 keepalive_expiry=limits.keepalive_expiry, 134 http1=http1, 135 http2=http2, 136 uds=uds, 137 local_address=local_address, 138 retries=retries, 139 ) 140 else: 141 self._pool = httpcore.HTTPProxy( 142 proxy_url=httpcore.URL( 143 scheme=proxy.url.raw_scheme, 144 host=proxy.url.raw_host, 145 port=proxy.url.port, 146 target=proxy.url.raw_path, 147 ), 148 proxy_headers=proxy.headers.raw, 149 ssl_context=ssl_context, 150 max_connections=limits.max_connections, 151 max_keepalive_connections=limits.max_keepalive_connections, 152 keepalive_expiry=limits.keepalive_expiry, 153 ) 154 155 def __enter__(self: T) -> T: # Use generics for subclass support. 
156 self._pool.__enter__() 157 return self 158 159 def __exit__( 160 self, 161 exc_type: typing.Type[BaseException] = None, 162 exc_value: BaseException = None, 163 traceback: TracebackType = None, 164 ) -> None: 165 with map_httpcore_exceptions(): 166 self._pool.__exit__(exc_type, exc_value, traceback) 167 168 def handle_request( 169 self, 170 request: Request, 171 ) -> Response: 172 assert isinstance(request.stream, SyncByteStream) 173 174 req = httpcore.Request( 175 method=request.method, 176 url=httpcore.URL( 177 scheme=request.url.raw_scheme, 178 host=request.url.raw_host, 179 port=request.url.port, 180 target=request.url.raw_path, 181 ), 182 headers=request.headers.raw, 183 content=request.stream, 184 extensions=request.extensions, 185 ) 186 with map_httpcore_exceptions(): 187 resp = self._pool.handle_request(req) 188 189 assert isinstance(resp.stream, typing.Iterable) 190 191 return Response( 192 status_code=resp.status, 193 headers=resp.headers, 194 stream=ResponseStream(resp.stream), 195 extensions=resp.extensions, 196 ) 197 198 def close(self) -> None: 199 self._pool.close() 200 201 202 class AsyncResponseStream(AsyncByteStream): 203 def __init__(self, httpcore_stream: typing.AsyncIterable[bytes]): 204 self._httpcore_stream = httpcore_stream 205 206 async def __aiter__(self) -> typing.AsyncIterator[bytes]: 207 with map_httpcore_exceptions(): 208 async for part in self._httpcore_stream: 209 yield part 210 211 async def aclose(self) -> None: 212 if hasattr(self._httpcore_stream, "aclose"): 213 await self._httpcore_stream.aclose() # type: ignore 214 215 216 class AsyncHTTPTransport(AsyncBaseTransport): 217 def __init__( 218 self, 219 verify: VerifyTypes = True, 220 cert: CertTypes = None, 221 http1: bool = True, 222 http2: bool = False, 223 limits: Limits = DEFAULT_LIMITS, 224 trust_env: bool = True, 225 proxy: Proxy = None, 226 uds: str = None, 227 local_address: str = None, 228 retries: int = 0, 229 ) -> None: 230 ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env) 231 232 if proxy is None: 233 self._pool = httpcore.AsyncConnectionPool( 234 ssl_context=ssl_context, 235 max_connections=limits.max_connections, 236 max_keepalive_connections=limits.max_keepalive_connections, 237 keepalive_expiry=limits.keepalive_expiry, 238 http1=http1, 239 http2=http2, 240 uds=uds, 241 local_address=local_address, 242 retries=retries, 243 ) 244 else: 245 self._pool = httpcore.AsyncHTTPProxy( 246 proxy_url=httpcore.URL( 247 scheme=proxy.url.raw_scheme, 248 host=proxy.url.raw_host, 249 port=proxy.url.port, 250 target=proxy.url.raw_path, 251 ), 252 proxy_headers=proxy.headers.raw, 253 ssl_context=ssl_context, 254 max_connections=limits.max_connections, 255 max_keepalive_connections=limits.max_keepalive_connections, 256 keepalive_expiry=limits.keepalive_expiry, 257 ) 258 259 async def __aenter__(self: A) -> A: # Use generics for subclass support. 
260 await self._pool.__aenter__() 261 return self 262 263 async def __aexit__( 264 self, 265 exc_type: typing.Type[BaseException] = None, 266 exc_value: BaseException = None, 267 traceback: TracebackType = None, 268 ) -> None: 269 with map_httpcore_exceptions(): 270 await self._pool.__aexit__(exc_type, exc_value, traceback) 271 272 async def handle_async_request( 273 self, 274 request: Request, 275 ) -> Response: 276 assert isinstance(request.stream, AsyncByteStream) 277 278 req = httpcore.Request( 279 method=request.method, 280 url=httpcore.URL( 281 scheme=request.url.raw_scheme, 282 host=request.url.raw_host, 283 port=request.url.port, 284 target=request.url.raw_path, 285 ), 286 headers=request.headers.raw, 287 content=request.stream, 288 extensions=request.extensions, 289 ) 290 with map_httpcore_exceptions(): 291 resp = await self._pool.handle_async_request(req) 292 293 assert isinstance(resp.stream, typing.AsyncIterable) 294 295 return Response( 296 status_code=resp.status, 297 headers=resp.headers, 298 stream=AsyncResponseStream(resp.stream), 299 extensions=resp.extensions, 300 ) 301 302 async def aclose(self) -> None: 303 await self._pool.aclose() 304 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/httpx/_transports/default.py b/httpx/_transports/default.py --- a/httpx/_transports/default.py +++ b/httpx/_transports/default.py @@ -138,19 +138,36 @@ retries=retries, ) else: - self._pool = httpcore.HTTPProxy( - proxy_url=httpcore.URL( - scheme=proxy.url.raw_scheme, - host=proxy.url.raw_host, - port=proxy.url.port, - target=proxy.url.raw_path, - ), - proxy_headers=proxy.headers.raw, - ssl_context=ssl_context, - max_connections=limits.max_connections, - max_keepalive_connections=limits.max_keepalive_connections, - keepalive_expiry=limits.keepalive_expiry, - ) + try: + self._pool = httpcore.HTTPProxy( + proxy_url=httpcore.URL( + scheme=proxy.url.raw_scheme, + host=proxy.url.raw_host, + port=proxy.url.port, + target=proxy.url.raw_path, + ), + proxy_headers=proxy.headers.raw, + ssl_context=ssl_context, + max_connections=limits.max_connections, + max_keepalive_connections=limits.max_keepalive_connections, + keepalive_expiry=limits.keepalive_expiry, + http1=http1, + http2=http2, + ) + except TypeError: # pragma: nocover + self._pool = httpcore.HTTPProxy( + proxy_url=httpcore.URL( + scheme=proxy.url.raw_scheme, + host=proxy.url.raw_host, + port=proxy.url.port, + target=proxy.url.raw_path, + ), + proxy_headers=proxy.headers.raw, + ssl_context=ssl_context, + max_connections=limits.max_connections, + max_keepalive_connections=limits.max_keepalive_connections, + keepalive_expiry=limits.keepalive_expiry, + ) def __enter__(self: T) -> T: # Use generics for subclass support. self._pool.__enter__()
{"golden_diff": "diff --git a/httpx/_transports/default.py b/httpx/_transports/default.py\n--- a/httpx/_transports/default.py\n+++ b/httpx/_transports/default.py\n@@ -138,19 +138,36 @@\n retries=retries,\n )\n else:\n- self._pool = httpcore.HTTPProxy(\n- proxy_url=httpcore.URL(\n- scheme=proxy.url.raw_scheme,\n- host=proxy.url.raw_host,\n- port=proxy.url.port,\n- target=proxy.url.raw_path,\n- ),\n- proxy_headers=proxy.headers.raw,\n- ssl_context=ssl_context,\n- max_connections=limits.max_connections,\n- max_keepalive_connections=limits.max_keepalive_connections,\n- keepalive_expiry=limits.keepalive_expiry,\n- )\n+ try:\n+ self._pool = httpcore.HTTPProxy(\n+ proxy_url=httpcore.URL(\n+ scheme=proxy.url.raw_scheme,\n+ host=proxy.url.raw_host,\n+ port=proxy.url.port,\n+ target=proxy.url.raw_path,\n+ ),\n+ proxy_headers=proxy.headers.raw,\n+ ssl_context=ssl_context,\n+ max_connections=limits.max_connections,\n+ max_keepalive_connections=limits.max_keepalive_connections,\n+ keepalive_expiry=limits.keepalive_expiry,\n+ http1=http1,\n+ http2=http2,\n+ )\n+ except TypeError: # pragma: nocover\n+ self._pool = httpcore.HTTPProxy(\n+ proxy_url=httpcore.URL(\n+ scheme=proxy.url.raw_scheme,\n+ host=proxy.url.raw_host,\n+ port=proxy.url.port,\n+ target=proxy.url.raw_path,\n+ ),\n+ proxy_headers=proxy.headers.raw,\n+ ssl_context=ssl_context,\n+ max_connections=limits.max_connections,\n+ max_keepalive_connections=limits.max_keepalive_connections,\n+ keepalive_expiry=limits.keepalive_expiry,\n+ )\n \n def __enter__(self: T) -> T: # Use generics for subclass support.\n self._pool.__enter__()\n", "issue": "When using proxy the request downgrades to HTTP/1\nUsing the following code\r\n\r\n```\r\nclient = httpx.Client(http2=True, proxies=proxies)\r\n response = client.get('https://www.truepeoplesearch.com/results?name=John', headers=headers)\r\nresponse.http_version\r\n```\r\n\r\nIt appears that it returns HTTP/1\r\n\r\nbut if I were to make the same request without proxy, it shows HTTP/2\r\n\r\n\r\nVersion of HTTPX being used.\r\n```\r\nName: httpx\r\nVersion: 0.21.1\r\nSummary: The next generation HTTP client.\r\nHome-page: https://github.com/encode/httpx\r\nAuthor: Tom Christie\r\nAuthor-email: [email protected]\r\n```\n", "before_files": [{"content": "\"\"\"\nCustom transports, with nicely configured defaults.\n\nThe following additional keyword arguments are currently supported by httpcore...\n\n* uds: str\n* local_address: str\n* retries: int\n\nExample usages...\n\n# Disable HTTP/2 on a single specific domain.\nmounts = {\n \"all://\": httpx.HTTPTransport(http2=True),\n \"all://*example.org\": httpx.HTTPTransport()\n}\n\n# Using advanced httpcore configuration, with connection retries.\ntransport = httpx.HTTPTransport(retries=1)\nclient = httpx.Client(transport=transport)\n\n# Using advanced httpcore configuration, with unix domain sockets.\ntransport = httpx.HTTPTransport(uds=\"socket.uds\")\nclient = httpx.Client(transport=transport)\n\"\"\"\nimport contextlib\nimport typing\nfrom types import TracebackType\n\nimport httpcore\n\nfrom .._config import DEFAULT_LIMITS, Limits, Proxy, create_ssl_context\nfrom .._exceptions import (\n ConnectError,\n ConnectTimeout,\n LocalProtocolError,\n NetworkError,\n PoolTimeout,\n ProtocolError,\n ProxyError,\n ReadError,\n ReadTimeout,\n RemoteProtocolError,\n TimeoutException,\n UnsupportedProtocol,\n WriteError,\n WriteTimeout,\n)\nfrom .._models import Request, Response\nfrom .._types import AsyncByteStream, CertTypes, SyncByteStream, VerifyTypes\nfrom .base 
import AsyncBaseTransport, BaseTransport\n\nT = typing.TypeVar(\"T\", bound=\"HTTPTransport\")\nA = typing.TypeVar(\"A\", bound=\"AsyncHTTPTransport\")\n\n\[email protected]\ndef map_httpcore_exceptions() -> typing.Iterator[None]:\n try:\n yield\n except Exception as exc: # noqa: PIE-786\n mapped_exc = None\n\n for from_exc, to_exc in HTTPCORE_EXC_MAP.items():\n if not isinstance(exc, from_exc):\n continue\n # We want to map to the most specific exception we can find.\n # Eg if `exc` is an `httpcore.ReadTimeout`, we want to map to\n # `httpx.ReadTimeout`, not just `httpx.TimeoutException`.\n if mapped_exc is None or issubclass(to_exc, mapped_exc):\n mapped_exc = to_exc\n\n if mapped_exc is None: # pragma: nocover\n raise\n\n message = str(exc)\n raise mapped_exc(message) from exc\n\n\nHTTPCORE_EXC_MAP = {\n httpcore.TimeoutException: TimeoutException,\n httpcore.ConnectTimeout: ConnectTimeout,\n httpcore.ReadTimeout: ReadTimeout,\n httpcore.WriteTimeout: WriteTimeout,\n httpcore.PoolTimeout: PoolTimeout,\n httpcore.NetworkError: NetworkError,\n httpcore.ConnectError: ConnectError,\n httpcore.ReadError: ReadError,\n httpcore.WriteError: WriteError,\n httpcore.ProxyError: ProxyError,\n httpcore.UnsupportedProtocol: UnsupportedProtocol,\n httpcore.ProtocolError: ProtocolError,\n httpcore.LocalProtocolError: LocalProtocolError,\n httpcore.RemoteProtocolError: RemoteProtocolError,\n}\n\n\nclass ResponseStream(SyncByteStream):\n def __init__(self, httpcore_stream: typing.Iterable[bytes]):\n self._httpcore_stream = httpcore_stream\n\n def __iter__(self) -> typing.Iterator[bytes]:\n with map_httpcore_exceptions():\n for part in self._httpcore_stream:\n yield part\n\n def close(self) -> None:\n if hasattr(self._httpcore_stream, \"close\"):\n self._httpcore_stream.close() # type: ignore\n\n\nclass HTTPTransport(BaseTransport):\n def __init__(\n self,\n verify: VerifyTypes = True,\n cert: CertTypes = None,\n http1: bool = True,\n http2: bool = False,\n limits: Limits = DEFAULT_LIMITS,\n trust_env: bool = True,\n proxy: Proxy = None,\n uds: str = None,\n local_address: str = None,\n retries: int = 0,\n ) -> None:\n ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)\n\n if proxy is None:\n self._pool = httpcore.ConnectionPool(\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n http1=http1,\n http2=http2,\n uds=uds,\n local_address=local_address,\n retries=retries,\n )\n else:\n self._pool = httpcore.HTTPProxy(\n proxy_url=httpcore.URL(\n scheme=proxy.url.raw_scheme,\n host=proxy.url.raw_host,\n port=proxy.url.port,\n target=proxy.url.raw_path,\n ),\n proxy_headers=proxy.headers.raw,\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n )\n\n def __enter__(self: T) -> T: # Use generics for subclass support.\n self._pool.__enter__()\n return self\n\n def __exit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n with map_httpcore_exceptions():\n self._pool.__exit__(exc_type, exc_value, traceback)\n\n def handle_request(\n self,\n request: Request,\n ) -> Response:\n assert isinstance(request.stream, SyncByteStream)\n\n req = httpcore.Request(\n method=request.method,\n url=httpcore.URL(\n scheme=request.url.raw_scheme,\n 
host=request.url.raw_host,\n port=request.url.port,\n target=request.url.raw_path,\n ),\n headers=request.headers.raw,\n content=request.stream,\n extensions=request.extensions,\n )\n with map_httpcore_exceptions():\n resp = self._pool.handle_request(req)\n\n assert isinstance(resp.stream, typing.Iterable)\n\n return Response(\n status_code=resp.status,\n headers=resp.headers,\n stream=ResponseStream(resp.stream),\n extensions=resp.extensions,\n )\n\n def close(self) -> None:\n self._pool.close()\n\n\nclass AsyncResponseStream(AsyncByteStream):\n def __init__(self, httpcore_stream: typing.AsyncIterable[bytes]):\n self._httpcore_stream = httpcore_stream\n\n async def __aiter__(self) -> typing.AsyncIterator[bytes]:\n with map_httpcore_exceptions():\n async for part in self._httpcore_stream:\n yield part\n\n async def aclose(self) -> None:\n if hasattr(self._httpcore_stream, \"aclose\"):\n await self._httpcore_stream.aclose() # type: ignore\n\n\nclass AsyncHTTPTransport(AsyncBaseTransport):\n def __init__(\n self,\n verify: VerifyTypes = True,\n cert: CertTypes = None,\n http1: bool = True,\n http2: bool = False,\n limits: Limits = DEFAULT_LIMITS,\n trust_env: bool = True,\n proxy: Proxy = None,\n uds: str = None,\n local_address: str = None,\n retries: int = 0,\n ) -> None:\n ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)\n\n if proxy is None:\n self._pool = httpcore.AsyncConnectionPool(\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n http1=http1,\n http2=http2,\n uds=uds,\n local_address=local_address,\n retries=retries,\n )\n else:\n self._pool = httpcore.AsyncHTTPProxy(\n proxy_url=httpcore.URL(\n scheme=proxy.url.raw_scheme,\n host=proxy.url.raw_host,\n port=proxy.url.port,\n target=proxy.url.raw_path,\n ),\n proxy_headers=proxy.headers.raw,\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n )\n\n async def __aenter__(self: A) -> A: # Use generics for subclass support.\n await self._pool.__aenter__()\n return self\n\n async def __aexit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n with map_httpcore_exceptions():\n await self._pool.__aexit__(exc_type, exc_value, traceback)\n\n async def handle_async_request(\n self,\n request: Request,\n ) -> Response:\n assert isinstance(request.stream, AsyncByteStream)\n\n req = httpcore.Request(\n method=request.method,\n url=httpcore.URL(\n scheme=request.url.raw_scheme,\n host=request.url.raw_host,\n port=request.url.port,\n target=request.url.raw_path,\n ),\n headers=request.headers.raw,\n content=request.stream,\n extensions=request.extensions,\n )\n with map_httpcore_exceptions():\n resp = await self._pool.handle_async_request(req)\n\n assert isinstance(resp.stream, typing.AsyncIterable)\n\n return Response(\n status_code=resp.status,\n headers=resp.headers,\n stream=AsyncResponseStream(resp.stream),\n extensions=resp.extensions,\n )\n\n async def aclose(self) -> None:\n await self._pool.aclose()\n", "path": "httpx/_transports/default.py"}], "after_files": [{"content": "\"\"\"\nCustom transports, with nicely configured defaults.\n\nThe following additional keyword arguments are currently supported by httpcore...\n\n* uds: str\n* local_address: str\n* retries: 
int\n\nExample usages...\n\n# Disable HTTP/2 on a single specific domain.\nmounts = {\n \"all://\": httpx.HTTPTransport(http2=True),\n \"all://*example.org\": httpx.HTTPTransport()\n}\n\n# Using advanced httpcore configuration, with connection retries.\ntransport = httpx.HTTPTransport(retries=1)\nclient = httpx.Client(transport=transport)\n\n# Using advanced httpcore configuration, with unix domain sockets.\ntransport = httpx.HTTPTransport(uds=\"socket.uds\")\nclient = httpx.Client(transport=transport)\n\"\"\"\nimport contextlib\nimport typing\nfrom types import TracebackType\n\nimport httpcore\n\nfrom .._config import DEFAULT_LIMITS, Limits, Proxy, create_ssl_context\nfrom .._exceptions import (\n ConnectError,\n ConnectTimeout,\n LocalProtocolError,\n NetworkError,\n PoolTimeout,\n ProtocolError,\n ProxyError,\n ReadError,\n ReadTimeout,\n RemoteProtocolError,\n TimeoutException,\n UnsupportedProtocol,\n WriteError,\n WriteTimeout,\n)\nfrom .._models import Request, Response\nfrom .._types import AsyncByteStream, CertTypes, SyncByteStream, VerifyTypes\nfrom .base import AsyncBaseTransport, BaseTransport\n\nT = typing.TypeVar(\"T\", bound=\"HTTPTransport\")\nA = typing.TypeVar(\"A\", bound=\"AsyncHTTPTransport\")\n\n\[email protected]\ndef map_httpcore_exceptions() -> typing.Iterator[None]:\n try:\n yield\n except Exception as exc: # noqa: PIE-786\n mapped_exc = None\n\n for from_exc, to_exc in HTTPCORE_EXC_MAP.items():\n if not isinstance(exc, from_exc):\n continue\n # We want to map to the most specific exception we can find.\n # Eg if `exc` is an `httpcore.ReadTimeout`, we want to map to\n # `httpx.ReadTimeout`, not just `httpx.TimeoutException`.\n if mapped_exc is None or issubclass(to_exc, mapped_exc):\n mapped_exc = to_exc\n\n if mapped_exc is None: # pragma: nocover\n raise\n\n message = str(exc)\n raise mapped_exc(message) from exc\n\n\nHTTPCORE_EXC_MAP = {\n httpcore.TimeoutException: TimeoutException,\n httpcore.ConnectTimeout: ConnectTimeout,\n httpcore.ReadTimeout: ReadTimeout,\n httpcore.WriteTimeout: WriteTimeout,\n httpcore.PoolTimeout: PoolTimeout,\n httpcore.NetworkError: NetworkError,\n httpcore.ConnectError: ConnectError,\n httpcore.ReadError: ReadError,\n httpcore.WriteError: WriteError,\n httpcore.ProxyError: ProxyError,\n httpcore.UnsupportedProtocol: UnsupportedProtocol,\n httpcore.ProtocolError: ProtocolError,\n httpcore.LocalProtocolError: LocalProtocolError,\n httpcore.RemoteProtocolError: RemoteProtocolError,\n}\n\n\nclass ResponseStream(SyncByteStream):\n def __init__(self, httpcore_stream: typing.Iterable[bytes]):\n self._httpcore_stream = httpcore_stream\n\n def __iter__(self) -> typing.Iterator[bytes]:\n with map_httpcore_exceptions():\n for part in self._httpcore_stream:\n yield part\n\n def close(self) -> None:\n if hasattr(self._httpcore_stream, \"close\"):\n self._httpcore_stream.close() # type: ignore\n\n\nclass HTTPTransport(BaseTransport):\n def __init__(\n self,\n verify: VerifyTypes = True,\n cert: CertTypes = None,\n http1: bool = True,\n http2: bool = False,\n limits: Limits = DEFAULT_LIMITS,\n trust_env: bool = True,\n proxy: Proxy = None,\n uds: str = None,\n local_address: str = None,\n retries: int = 0,\n ) -> None:\n ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)\n\n if proxy is None:\n self._pool = httpcore.ConnectionPool(\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n 
http1=http1,\n http2=http2,\n uds=uds,\n local_address=local_address,\n retries=retries,\n )\n else:\n try:\n self._pool = httpcore.HTTPProxy(\n proxy_url=httpcore.URL(\n scheme=proxy.url.raw_scheme,\n host=proxy.url.raw_host,\n port=proxy.url.port,\n target=proxy.url.raw_path,\n ),\n proxy_headers=proxy.headers.raw,\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n http1=http1,\n http2=http2,\n )\n except TypeError: # pragma: nocover\n self._pool = httpcore.HTTPProxy(\n proxy_url=httpcore.URL(\n scheme=proxy.url.raw_scheme,\n host=proxy.url.raw_host,\n port=proxy.url.port,\n target=proxy.url.raw_path,\n ),\n proxy_headers=proxy.headers.raw,\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n )\n\n def __enter__(self: T) -> T: # Use generics for subclass support.\n self._pool.__enter__()\n return self\n\n def __exit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n with map_httpcore_exceptions():\n self._pool.__exit__(exc_type, exc_value, traceback)\n\n def handle_request(\n self,\n request: Request,\n ) -> Response:\n assert isinstance(request.stream, SyncByteStream)\n\n req = httpcore.Request(\n method=request.method,\n url=httpcore.URL(\n scheme=request.url.raw_scheme,\n host=request.url.raw_host,\n port=request.url.port,\n target=request.url.raw_path,\n ),\n headers=request.headers.raw,\n content=request.stream,\n extensions=request.extensions,\n )\n with map_httpcore_exceptions():\n resp = self._pool.handle_request(req)\n\n assert isinstance(resp.stream, typing.Iterable)\n\n return Response(\n status_code=resp.status,\n headers=resp.headers,\n stream=ResponseStream(resp.stream),\n extensions=resp.extensions,\n )\n\n def close(self) -> None:\n self._pool.close()\n\n\nclass AsyncResponseStream(AsyncByteStream):\n def __init__(self, httpcore_stream: typing.AsyncIterable[bytes]):\n self._httpcore_stream = httpcore_stream\n\n async def __aiter__(self) -> typing.AsyncIterator[bytes]:\n with map_httpcore_exceptions():\n async for part in self._httpcore_stream:\n yield part\n\n async def aclose(self) -> None:\n if hasattr(self._httpcore_stream, \"aclose\"):\n await self._httpcore_stream.aclose() # type: ignore\n\n\nclass AsyncHTTPTransport(AsyncBaseTransport):\n def __init__(\n self,\n verify: VerifyTypes = True,\n cert: CertTypes = None,\n http1: bool = True,\n http2: bool = False,\n limits: Limits = DEFAULT_LIMITS,\n trust_env: bool = True,\n proxy: Proxy = None,\n uds: str = None,\n local_address: str = None,\n retries: int = 0,\n ) -> None:\n ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)\n\n if proxy is None:\n self._pool = httpcore.AsyncConnectionPool(\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n http1=http1,\n http2=http2,\n uds=uds,\n local_address=local_address,\n retries=retries,\n )\n else:\n self._pool = httpcore.AsyncHTTPProxy(\n proxy_url=httpcore.URL(\n scheme=proxy.url.raw_scheme,\n host=proxy.url.raw_host,\n port=proxy.url.port,\n target=proxy.url.raw_path,\n ),\n proxy_headers=proxy.headers.raw,\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n 
max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n )\n\n async def __aenter__(self: A) -> A: # Use generics for subclass support.\n await self._pool.__aenter__()\n return self\n\n async def __aexit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n with map_httpcore_exceptions():\n await self._pool.__aexit__(exc_type, exc_value, traceback)\n\n async def handle_async_request(\n self,\n request: Request,\n ) -> Response:\n assert isinstance(request.stream, AsyncByteStream)\n\n req = httpcore.Request(\n method=request.method,\n url=httpcore.URL(\n scheme=request.url.raw_scheme,\n host=request.url.raw_host,\n port=request.url.port,\n target=request.url.raw_path,\n ),\n headers=request.headers.raw,\n content=request.stream,\n extensions=request.extensions,\n )\n with map_httpcore_exceptions():\n resp = await self._pool.handle_async_request(req)\n\n assert isinstance(resp.stream, typing.AsyncIterable)\n\n return Response(\n status_code=resp.status,\n headers=resp.headers,\n stream=AsyncResponseStream(resp.stream),\n extensions=resp.extensions,\n )\n\n async def aclose(self) -> None:\n await self._pool.aclose()\n", "path": "httpx/_transports/default.py"}]}
3,271
454
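The fix in the httpx record above threads the client's `http1`/`http2` flags through to `httpcore.HTTPProxy`, with a `TypeError` fallback for older httpcore versions that do not accept those keyword arguments. A quick way to observe the behavior is sketched below, adapted from the issue's own reproduction; the proxy URL is a placeholder assumption, and `http2=True` requires the `httpx[http2]` extra.

```python
import httpx

proxies = "http://localhost:8080"  # hypothetical proxy; substitute a real one

# Before the patch, requests tunneled through a proxy silently fell back to
# HTTP/1.1 because http2 was never passed to the proxy connection pool.
with httpx.Client(http2=True, proxies=proxies) as client:
    response = client.get("https://www.truepeoplesearch.com/results?name=John")
    print(response.http_version)  # "HTTP/2" after the fix, "HTTP/1.1" before
```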
gh_patches_debug_16318
rasdani/github-patches
git_diff
angr__angr-4080
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Sigaction syscall triggers "Not enough data for store" error ### Description Running Angr on an AMD64 binary that makes a `sigaction` syscall triggers an error ``` Traceback (most recent call last): File "/home/ubuntu/angr-exp/sigactionbug/demo.py", line 7, in <module> state_successors = state_successors[0].step() File "/home/ubuntu/angr-dev/angr/angr/sim_state.py", line 607, in step return self.project.factory.successors(self, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/factory.py", line 77, in successors return self.default_engine.process(*args, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/engines/vex/light/slicing.py", line 20, in process return super().process(*args, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/engines/engine.py", line 163, in process self.process_successors(self.successors, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/engines/failure.py", line 24, in process_successors return super().process_successors(successors, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/engines/syscall.py", line 50, in process_successors return self.process_procedure(state, successors, sys_procedure, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/engines/procedure.py", line 39, in process_procedure inst = procedure.execute(state, successors, ret_to=ret_to, arguments=arguments) File "/home/ubuntu/angr-dev/angr/angr/sim_procedure.py", line 286, in execute inst.ret(r) File "/home/ubuntu/angr-dev/angr/angr/sim_procedure.py", line 459, in ret ret_addr = self.cc.teardown_callsite(self.state, return_val=expr, prototype=self.prototype) File "/home/ubuntu/angr-dev/angr/angr/calling_conventions.py", line 921, in teardown_callsite self.set_return_val(state, return_val, prototype.returnty) File "/home/ubuntu/angr-dev/angr/angr/calling_conventions.py", line 1339, in set_return_val super().set_return_val(state, val, ty, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/calling_conventions.py", line 798, in set_return_val loc.set_value(state, val, stack_base=stack_base) File "/home/ubuntu/angr-dev/angr/angr/calling_conventions.py", line 310, in set_value state.registers.store(offset, value, size=self.size) File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/unwrapper_mixin.py", line 10, in store return super().store( File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/name_resolution_mixin.py", line 60, in store return super().store(addr, data, size=size, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/bvv_conversion_mixin.py", line 26, in store super().store(addr, data_bv, size=size, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/simplification_mixin.py", line 13, in store super().store(addr, real_data, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/clouseau_mixin.py", line 7, in store super().store(addr, data, size=size, condition=condition, endness=endness, inspect=inspect, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/actions_mixin.py", line 34, in store super().store(addr, data, size=size, action=action, condition=condition, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/underconstrained_mixin.py", line 28, in store super().store(addr, data, **kwargs) File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/size_resolution_mixin.py", line 99, in store super().store(addr, data, size=size, condition=condition, **kwargs) File 
"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/size_resolution_mixin.py", line 43, in store raise SimMemoryError("Not enough data for store") angr.errors.SimMemoryError: Not enough data for store ``` ### Steps to reproduce the bug Here is a minimal code that triggers the bug : ```C #include <signal.h> #include <stdio.h> int main() { sigaction(SIGSEGV, NULL, NULL); return 0; } ``` ```python import angr angr_project = angr.Project("./target_program", exclude_sim_procedures_list=['sigaction']) state_successors = angr_project.factory.entry_state().step() while not state_successors.is_empty: state_successors = state_successors[0].step() ``` The bug happens when the SimProcedure for the sigaction syscall is executed (not the SimProcedure for the libc wrapper ). Excluding the libc wrapper SimProcedure is necessary to actually reach the code calling the syscall and trigger the bug. ### Environment _No response_ ### Additional context As far as I understand, this is due to an inconsistency between the prototype and the implementation of the SimProcedure for the `sigaction` syscall, like #4033. According to its prototype, the SimProcedure should return a long but it actually returns an int. The error happens when the SimProcedure for the syscall returns. It is raised from this check in `angr/storage/memory_mixins/size_resolution_mixin.py` ``` if out_size > max_size: raise SimMemoryError("Not enough data for store") ``` because `max_size` is 4 and `out_size` is 8. `max_size` is determined by the length of the value returned by the SimProcedure, i.e. an int in the current implementation in `angr/procedures/linux_kernel/sigaction.py` ```python class rt_sigaction(angr.SimProcedure): def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument # TODO: actually do something # ...hack if self.state.solver.is_true(signum == 33): return self.state.libc.ret_errno("EINVAL") return self.state.solver.BVV(0, self.arch.sizeof["int"]) ``` `out_size` is determined by the return type given in the prototype of the SimProcedure in `angr/procedures/definitions/linux_kernel.py`, i.e. a long : ```python # long sys_rt_sigaction(int, const struct sigaction *, struct sigaction *, size_t); 'rt_sigaction': SimTypeFunction([SimTypeInt(signed=True), SimTypePointer(SimStruct({}, name="sigaction", pack=False, align=None), offset=0), SimTypePointer(SimStruct({}, name="sigaction", pack=False, align=None), offset=0), SimTypeLong(signed=False, label="size_t")], SimTypeLong(signed=True), arg_names=["None", "None", "None", "None"]), ``` Replacing `int` with `long` in the return of the SimProcedure fixes the problem for me. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `angr/procedures/linux_kernel/sigaction.py` Content: ``` 1 import angr 2 3 4 class sigaction(angr.SimProcedure): 5 def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument 6 # TODO: actually do something 7 return self.state.solver.BVV(0, self.arch.sizeof["int"]) 8 9 10 class rt_sigaction(angr.SimProcedure): 11 def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument 12 # TODO: actually do something 13 # ...hack 14 if self.state.solver.is_true(signum == 33): 15 return self.state.libc.ret_errno("EINVAL") 16 return self.state.solver.BVV(0, self.arch.sizeof["int"]) 17 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/angr/procedures/linux_kernel/sigaction.py b/angr/procedures/linux_kernel/sigaction.py --- a/angr/procedures/linux_kernel/sigaction.py +++ b/angr/procedures/linux_kernel/sigaction.py @@ -4,7 +4,7 @@ class sigaction(angr.SimProcedure): def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument # TODO: actually do something - return self.state.solver.BVV(0, self.arch.sizeof["int"]) + return self.state.solver.BVV(0, self.arch.sizeof["long"]) class rt_sigaction(angr.SimProcedure): @@ -13,4 +13,4 @@ # ...hack if self.state.solver.is_true(signum == 33): return self.state.libc.ret_errno("EINVAL") - return self.state.solver.BVV(0, self.arch.sizeof["int"]) + return self.state.solver.BVV(0, self.arch.sizeof["long"])
{"golden_diff": "diff --git a/angr/procedures/linux_kernel/sigaction.py b/angr/procedures/linux_kernel/sigaction.py\n--- a/angr/procedures/linux_kernel/sigaction.py\n+++ b/angr/procedures/linux_kernel/sigaction.py\n@@ -4,7 +4,7 @@\n class sigaction(angr.SimProcedure):\n def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument\n # TODO: actually do something\n- return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\n+ return self.state.solver.BVV(0, self.arch.sizeof[\"long\"])\n \n \n class rt_sigaction(angr.SimProcedure):\n@@ -13,4 +13,4 @@\n # ...hack\n if self.state.solver.is_true(signum == 33):\n return self.state.libc.ret_errno(\"EINVAL\")\n- return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\n+ return self.state.solver.BVV(0, self.arch.sizeof[\"long\"])\n", "issue": "Sigaction syscall triggers \"Not enough data for store\" error\n### Description\r\n\r\nRunning Angr on an AMD64 binary that makes a `sigaction` syscall triggers an error\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/angr-exp/sigactionbug/demo.py\", line 7, in <module>\r\n state_successors = state_successors[0].step()\r\n File \"/home/ubuntu/angr-dev/angr/angr/sim_state.py\", line 607, in step\r\n return self.project.factory.successors(self, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/factory.py\", line 77, in successors\r\n return self.default_engine.process(*args, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/vex/light/slicing.py\", line 20, in process\r\n return super().process(*args, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/engine.py\", line 163, in process\r\n self.process_successors(self.successors, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/failure.py\", line 24, in process_successors\r\n return super().process_successors(successors, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/syscall.py\", line 50, in process_successors\r\n return self.process_procedure(state, successors, sys_procedure, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/procedure.py\", line 39, in process_procedure\r\n inst = procedure.execute(state, successors, ret_to=ret_to, arguments=arguments)\r\n File \"/home/ubuntu/angr-dev/angr/angr/sim_procedure.py\", line 286, in execute\r\n inst.ret(r)\r\n File \"/home/ubuntu/angr-dev/angr/angr/sim_procedure.py\", line 459, in ret\r\n ret_addr = self.cc.teardown_callsite(self.state, return_val=expr, prototype=self.prototype)\r\n File \"/home/ubuntu/angr-dev/angr/angr/calling_conventions.py\", line 921, in teardown_callsite\r\n self.set_return_val(state, return_val, prototype.returnty)\r\n File \"/home/ubuntu/angr-dev/angr/angr/calling_conventions.py\", line 1339, in set_return_val\r\n super().set_return_val(state, val, ty, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/calling_conventions.py\", line 798, in set_return_val\r\n loc.set_value(state, val, stack_base=stack_base)\r\n File \"/home/ubuntu/angr-dev/angr/angr/calling_conventions.py\", line 310, in set_value\r\n state.registers.store(offset, value, size=self.size)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/unwrapper_mixin.py\", line 10, in store\r\n return super().store(\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/name_resolution_mixin.py\", line 60, in store\r\n return super().store(addr, data, size=size, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/bvv_conversion_mixin.py\", line 26, in store\r\n super().store(addr, 
data_bv, size=size, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/simplification_mixin.py\", line 13, in store\r\n super().store(addr, real_data, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/clouseau_mixin.py\", line 7, in store\r\n super().store(addr, data, size=size, condition=condition, endness=endness, inspect=inspect, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/actions_mixin.py\", line 34, in store\r\n super().store(addr, data, size=size, action=action, condition=condition, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/underconstrained_mixin.py\", line 28, in store\r\n super().store(addr, data, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/size_resolution_mixin.py\", line 99, in store\r\n super().store(addr, data, size=size, condition=condition, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/size_resolution_mixin.py\", line 43, in store\r\n raise SimMemoryError(\"Not enough data for store\")\r\nangr.errors.SimMemoryError: Not enough data for store\r\n```\r\n\r\n### Steps to reproduce the bug\r\n\r\nHere is a minimal code that triggers the bug :\r\n\r\n```C\r\n#include <signal.h>\r\n#include <stdio.h>\r\n\r\nint main() {\r\n sigaction(SIGSEGV, NULL, NULL);\r\n return 0;\r\n}\r\n```\r\n\r\n```python\r\nimport angr\r\n\r\nangr_project = angr.Project(\"./target_program\", exclude_sim_procedures_list=['sigaction'])\r\nstate_successors = angr_project.factory.entry_state().step()\r\n\r\nwhile not state_successors.is_empty:\r\n state_successors = state_successors[0].step()\r\n```\r\n\r\nThe bug happens when the SimProcedure for the sigaction syscall is executed (not the SimProcedure for the libc wrapper ). Excluding the libc wrapper SimProcedure is necessary to actually reach the code calling the syscall and trigger the bug.\r\n\r\n### Environment\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nAs far as I understand, this is due to an inconsistency between the prototype and the implementation of the SimProcedure for the `sigaction` syscall, like #4033. According to its prototype, the SimProcedure should return a long but it actually returns an int.\r\n\r\nThe error happens when the SimProcedure for the syscall returns. It is raised from this check in `angr/storage/memory_mixins/size_resolution_mixin.py`\r\n\r\n```\r\n if out_size > max_size:\r\n raise SimMemoryError(\"Not enough data for store\")\r\n```\r\n\r\nbecause `max_size` is 4 and `out_size` is 8.\r\n\r\n`max_size` is determined by the length of the value returned by the SimProcedure, i.e. an int in the current implementation in `angr/procedures/linux_kernel/sigaction.py`\r\n```python\r\nclass rt_sigaction(angr.SimProcedure):\r\n def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument\r\n # TODO: actually do something\r\n # ...hack\r\n if self.state.solver.is_true(signum == 33):\r\n return self.state.libc.ret_errno(\"EINVAL\")\r\n return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\r\n```\r\n\r\n\r\n`out_size` is determined by the return type given in the prototype of the SimProcedure in `angr/procedures/definitions/linux_kernel.py`, i.e. 
a long :\r\n\r\n```python\r\n# long sys_rt_sigaction(int, const struct sigaction *, struct sigaction *, size_t);\r\n'rt_sigaction': SimTypeFunction([SimTypeInt(signed=True), SimTypePointer(SimStruct({}, name=\"sigaction\", pack=False, align=None), offset=0), SimTypePointer(SimStruct({}, name=\"sigaction\", pack=False, align=None), offset=0), SimTypeLong(signed=False, label=\"size_t\")], SimTypeLong(signed=True), arg_names=[\"None\", \"None\", \"None\", \"None\"]),\r\n```\r\n\r\nReplacing `int` with `long` in the return of the SimProcedure fixes the problem for me.\n", "before_files": [{"content": "import angr\n\n\nclass sigaction(angr.SimProcedure):\n def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument\n # TODO: actually do something\n return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\n\n\nclass rt_sigaction(angr.SimProcedure):\n def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument\n # TODO: actually do something\n # ...hack\n if self.state.solver.is_true(signum == 33):\n return self.state.libc.ret_errno(\"EINVAL\")\n return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\n", "path": "angr/procedures/linux_kernel/sigaction.py"}], "after_files": [{"content": "import angr\n\n\nclass sigaction(angr.SimProcedure):\n def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument\n # TODO: actually do something\n return self.state.solver.BVV(0, self.arch.sizeof[\"long\"])\n\n\nclass rt_sigaction(angr.SimProcedure):\n def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument\n # TODO: actually do something\n # ...hack\n if self.state.solver.is_true(signum == 33):\n return self.state.libc.ret_errno(\"EINVAL\")\n return self.state.solver.BVV(0, self.arch.sizeof[\"long\"])\n", "path": "angr/procedures/linux_kernel/sigaction.py"}]}
2,202
239
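The angr record above comes down to a bit-width mismatch: on AMD64 the syscall return value lands in the 64-bit `rax`, and the SimProcedure's prototype declares a `long`, but the procedure built an `int`-sized (32-bit) bitvector, so the register store failed. A small sketch of the width arithmetic, assuming archinfo and claripy as shipped with angr; `Arch.sizeof` is the same C-type size table the SimProcedure consults via `self.arch.sizeof`.

```python
import archinfo
import claripy

arch = archinfo.ArchAMD64()
print(arch.sizeof["int"], arch.sizeof["long"])  # 32 64 (bits)

ret_int = claripy.BVV(0, arch.sizeof["int"])    # 32 bits: too narrow for rax
ret_long = claripy.BVV(0, arch.sizeof["long"])  # 64 bits: matches the declared long

# Storing the 32-bit value into the 64-bit return register is what raised
# SimMemoryError("Not enough data for store"); the patch returns ret_long.
print(ret_int.length, ret_long.length)          # 32 64
```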
gh_patches_debug_22664
rasdani/github-patches
git_diff
akvo__akvo-rsr-2243
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Move to IATI currencies Currently, there's only 2 options for a currency in RSR: Euros and US Dollars. However, with the introduction of IATI fields we have also incorporated the IATI codelists for some fields that allow the list of IATI currencies including all possible currencies. We should extend the currencies of RSR, keeping in mind: - [x] Project editor should allow to select all possible currencies and display it for any 'currency' field. - [x] Show the currency code (e.g. EUR, USD, YEN) everywhere: - [x] Projects list - [x] Project page (summary, full report and finance tabs) - [x] Organisation page - [x] All fields in the project editor should respond to the currency selection. When you switch to a different currency, this should happen for all fields that use a currency in the editor. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `akvo/rsr/models/partnership.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 # Akvo RSR is covered by the GNU Affero General Public License. 4 # See more details in the license.txt file located at the root folder of the Akvo RSR module. 5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >. 6 7 8 from django.core.exceptions import ValidationError 9 from django.db import models 10 from django.utils.translation import ugettext_lazy as _ 11 12 from ..fields import ValidXMLCharField 13 14 15 class Partnership(models.Model): 16 # the old way 17 FIELD_PARTNER = u'field' 18 FUNDING_PARTNER = u'funding' 19 SPONSOR_PARTNER = u'sponsor' 20 SUPPORT_PARTNER = u'support' 21 EXTENDING_PARTNER = u'extending' 22 23 PARTNER_TYPE_LIST = [ 24 FIELD_PARTNER, FUNDING_PARTNER, SPONSOR_PARTNER, SUPPORT_PARTNER, EXTENDING_PARTNER 25 ] 26 PARTNER_LABELS = [ 27 _(u'Implementing partner'), 28 _(u'Funding partner'), 29 _(u'Sponsor partner'), 30 _(u'Accountable partner'), 31 _(u'Extending partner'), 32 ] 33 PARTNER_TYPES = zip(PARTNER_TYPE_LIST, PARTNER_LABELS) 34 35 # the new way 36 IATI_FUNDING_PARTNER = 1 37 IATI_ACCOUNTABLE_PARTNER = 2 38 IATI_EXTENDING_PARTNER = 3 39 IATI_IMPLEMENTING_PARTNER = 4 40 AKVO_SPONSOR_PARTNER = 100 # not part of the IATI OrganisationRole codelist! 
41 IATI_REPORTING_ORGANISATION = 101 42 43 # make sure the AKVO_SPONSOR_PARTNER is last in the list 44 IATI_ROLE_LIST = [ 45 IATI_FUNDING_PARTNER, IATI_ACCOUNTABLE_PARTNER, IATI_EXTENDING_PARTNER, 46 IATI_IMPLEMENTING_PARTNER, AKVO_SPONSOR_PARTNER, IATI_REPORTING_ORGANISATION 47 ] 48 IATI_ROLE_LABELS = [ 49 _(u'Funding partner'), 50 _(u'Accountable partner'), 51 _(u'Extending partner'), 52 _(u'Implementing partner'), 53 _(u'Sponsor partner'), 54 _(u'Reporting organisation'), 55 ] 56 IATI_ROLES = zip(IATI_ROLE_LIST, IATI_ROLE_LABELS) 57 58 # used when migrating 59 PARTNER_TYPES_TO_ROLES_MAP = { 60 FUNDING_PARTNER: IATI_FUNDING_PARTNER, 61 SUPPORT_PARTNER: IATI_ACCOUNTABLE_PARTNER, 62 FIELD_PARTNER: IATI_IMPLEMENTING_PARTNER, 63 SPONSOR_PARTNER: AKVO_SPONSOR_PARTNER, 64 } 65 66 # backwards compatibility 67 ROLES_TO_PARTNER_TYPES_MAP = { 68 IATI_FUNDING_PARTNER: FUNDING_PARTNER, 69 IATI_ACCOUNTABLE_PARTNER: SUPPORT_PARTNER, 70 IATI_EXTENDING_PARTNER: EXTENDING_PARTNER, 71 IATI_IMPLEMENTING_PARTNER: FIELD_PARTNER, 72 AKVO_SPONSOR_PARTNER: SPONSOR_PARTNER, 73 # TODO: not backwards compatible 74 IATI_REPORTING_ORGANISATION: u'' 75 } 76 77 ALLIANCE_PARTNER = u'alliance' 78 KNOWLEDGE_PARTNER = u'knowledge' 79 NETWORK_PARTNER = u'network' 80 81 PARTNER_TYPE_EXTRAS_LIST = (ALLIANCE_PARTNER, KNOWLEDGE_PARTNER, NETWORK_PARTNER) 82 PARTNER_TYPE_EXTRA_LABELS = ( 83 _(u'Alliance'), 84 _(u'Knowledge'), 85 _(u'Network') 86 ) 87 88 PARTNER_TYPE_EXTRAS = zip(PARTNER_TYPE_EXTRAS_LIST, PARTNER_TYPE_EXTRA_LABELS) 89 90 organisation = models.ForeignKey( 91 'Organisation', verbose_name=_(u'organisation'), related_name='partnerships', null=True, 92 blank=True, 93 help_text=_(u'Select an organisation that is taking an active role in the project.') 94 ) 95 project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='partnerships') 96 iati_organisation_role = models.PositiveSmallIntegerField( 97 _(u'organisation role'), choices=IATI_ROLES, db_index=True, null=True, blank=True, 98 help_text=_(u'Select the role of the organisation within the project:<br/>' 99 u'- Funding organisation: a government or organisation that provides funds to ' 100 u'the project<br/>' 101 u'- Implementing organisation: an organisation involved in carrying out the ' 102 u'activity or intervention<br/>' 103 u'- Accountable organisation: an organisation responsible for oversight of ' 104 u'the project and its outcomes<br/>' 105 u'- Extending organisation: an organisation that manages the budget and ' 106 u'direction of a project on behalf of the funding organisation<br/>' 107 u'- Reporting organisation: an organisation that will report this project in ' 108 u'an IATI file') 109 ) 110 # is_secondary_reporter is only used when the iati_organisation_role is set to 111 # IATI_REPORTING_ORGANISATION, thus the use of NullBooleanField 112 is_secondary_reporter = models.NullBooleanField( 113 _(u'secondary reporter'), 114 help_text=_( 115 u'This indicates whether the reporting organisation is a secondary publisher: ' 116 u'publishing data for which it is not directly responsible.' 117 ) 118 ) 119 funding_amount = models.DecimalField( 120 _(u'funding amount'), max_digits=14, decimal_places=2, blank=True, null=True, db_index=True, 121 help_text=_(u'It’s only possible to indicate a funding amount for funding partners. 
Use a ' 122 u'period to denote decimals.') 123 ) 124 partner_type_extra = ValidXMLCharField( 125 _(u'partner type extra'), max_length=30, blank=True, null=True, choices=PARTNER_TYPE_EXTRAS, 126 help_text=_(u'RSR specific partner type.') 127 ) 128 iati_activity_id = ValidXMLCharField( 129 _(u'IATI activity ID'), max_length=100, blank=True, null=True, db_index=True, 130 help_text=_(u'A valid activity identifier published by the participating organisation ' 131 u'which points to the activity that it has published to IATI that describes ' 132 u'its role in this activity.') 133 ) 134 internal_id = ValidXMLCharField( 135 _(u'Internal ID'), max_length=75, blank=True, null=True, db_index=True, 136 help_text=_(u'This field can be used to indicate an internal identifier that is used by ' 137 u'the organisation for this project. (75 characters)') 138 ) 139 iati_url = models.URLField( 140 blank=True, 141 help_text=_( 142 u'Please enter the URL for where the IATI Activity Id Funding details are published. ' 143 u'For projects directly or indirectly funded by the Dutch Government, this should ' 144 u'be the OpenAid.nl page. For other projects, an alternative URL can be used.' 145 ) 146 ) 147 related_activity_id = ValidXMLCharField( 148 _(u'related IATI activity ID'), max_length=100, blank=True 149 ) 150 151 def iati_organisation_role_label(self): 152 if self.iati_organisation_role: 153 return dict(self.IATI_ROLES).get(self.iati_organisation_role) 154 else: 155 return '' 156 157 def iati_role_to_partner_type(self): 158 if self.iati_organisation_role: 159 return dict(self.ROLES_TO_PARTNER_TYPES_MAP).get(int(self.iati_organisation_role)) 160 else: 161 return None 162 163 def organisation_show_link(self): 164 if self.organisation: 165 return u'<a href="{0}">{1}</a>'.format(self.organisation.get_absolute_url(), 166 self.organisation.long_name or 167 self.organisation.name) 168 return '' 169 170 class Meta: 171 app_label = 'rsr' 172 verbose_name = _(u'project partner') 173 verbose_name_plural = _(u'project partners') 174 ordering = ['iati_organisation_role'] 175 176 def __unicode__(self): 177 if self.organisation: 178 if self.organisation.name: 179 organisation_unicode = self.organisation.name 180 elif self.organisation.long_name: 181 organisation_unicode = self.organisation.long_name 182 else: 183 organisation_unicode = u'%s' % _(u'Organisation name not specified') 184 else: 185 organisation_unicode = u'%s' % _(u'Organisation not specified') 186 187 if self.iati_organisation_role: 188 organisation_unicode += u' ({})'.format( 189 unicode(dict(self.IATI_ROLES)[self.iati_organisation_role]) 190 ) 191 return organisation_unicode 192 193 def clean(self): 194 # Don't allow multiple reporting organisations 195 if self.iati_organisation_role == self.IATI_REPORTING_ORGANISATION: 196 reporting_orgs = self.project.partnerships.filter( 197 iati_organisation_role=self.IATI_REPORTING_ORGANISATION 198 ) 199 200 if reporting_orgs.count() > 1: 201 raise ValidationError( 202 {'iati_organisation_role': u'%s' % _(u'Project can only have one reporting ' 203 u'organisation')} 204 ) 205 206 def save(self, *args, **kwargs): 207 super(Partnership, self).save(*args, **kwargs) 208 self.set_primary_organisation() 209 210 def delete(self, *args, **kwargs): 211 super(Partnership, self).delete(*args, **kwargs) 212 self.set_primary_organisation() 213 214 def set_primary_organisation(self): 215 # Check which organisation should be set to the primary organisation of the project 216 # This is done to get better performance on the project list 
page 217 self.project.primary_organisation = self.project.find_primary_organisation() 218 self.project.save(update_fields=['primary_organisation']) 219 ``` Path: `akvo/rest/serializers/partnership.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 # Akvo RSR is covered by the GNU Affero General Public License. 4 # See more details in the license.txt file located at the root folder of the Akvo RSR module. 5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >. 6 7 from rest_framework import serializers 8 9 from akvo.rsr.models import Partnership 10 11 from akvo.rest.serializers.organisation import OrganisationBasicSerializer 12 from akvo.rest.serializers.rsr_serializer import BaseRSRSerializer 13 14 15 class PartnershipRawSerializer(BaseRSRSerializer): 16 17 class Meta: 18 model = Partnership 19 20 21 class PartnershipSerializer(PartnershipRawSerializer): 22 23 organisation_show_link = serializers.Field(source='organisation_show_link') 24 partner_type = serializers.Field(source='iati_role_to_partner_type') 25 organisation_role_label = serializers.Field(source='iati_organisation_role_label') 26 27 28 class PartnershipBasicSerializer(BaseRSRSerializer): 29 30 organisation = OrganisationBasicSerializer(source='organisation') 31 iati_organisation_role_label = serializers.Field(source='iati_organisation_role_label') 32 33 class Meta: 34 model = Partnership 35 fields = ( 36 'id', 37 'project', 38 'organisation', 39 'iati_organisation_role', 40 'iati_organisation_role_label', 41 ) 42 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
```diff
diff --git a/akvo/rest/serializers/partnership.py b/akvo/rest/serializers/partnership.py
--- a/akvo/rest/serializers/partnership.py
+++ b/akvo/rest/serializers/partnership.py
@@ -23,6 +23,7 @@
     organisation_show_link = serializers.Field(source='organisation_show_link')
     partner_type = serializers.Field(source='iati_role_to_partner_type')
     organisation_role_label = serializers.Field(source='iati_organisation_role_label')
+    funding_amount_label = serializers.Field(source='funding_amount_with_currency')
 
 
 class PartnershipBasicSerializer(BaseRSRSerializer):
diff --git a/akvo/rsr/models/partnership.py b/akvo/rsr/models/partnership.py
--- a/akvo/rsr/models/partnership.py
+++ b/akvo/rsr/models/partnership.py
@@ -167,6 +167,12 @@
                                                  self.organisation.name)
         return ''
 
+    def funding_amount_with_currency(self):
+        """Returns the funding amount, prepended by the project's currency."""
+        if self.funding_amount and self.project and self.project.currency:
+            return u'{0} {1}'.format(self.project.currency, self.funding_amount)
+        return self.funding_amount
+
     class Meta:
         app_label = 'rsr'
         verbose_name = _(u'project partner')
```
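The patch hinges on one small model helper plus a DRF 2.x computed field. As a sanity check, here is a minimal sketch of the helper's behaviour outside Django; the `FakeProject` and `FakePartnership` stand-ins are hypothetical and exist only so the method can run without the ORM:

```python
# -*- coding: utf-8 -*-
# Hypothetical stand-ins for the Django models, just to exercise the helper.


class FakeProject(object):
    def __init__(self, currency):
        self.currency = currency


class FakePartnership(object):
    def __init__(self, project, funding_amount):
        self.project = project
        self.funding_amount = funding_amount

    def funding_amount_with_currency(self):
        # Same logic as the patched model method: prefix the project currency.
        if self.funding_amount and self.project and self.project.currency:
            return u'{0} {1}'.format(self.project.currency, self.funding_amount)
        return self.funding_amount


print(FakePartnership(FakeProject('EUR'), 25000).funding_amount_with_currency())
# -> EUR 25000
print(FakePartnership(FakeProject('EUR'), None).funding_amount_with_currency())
# -> None (no currency prefix when there is no amount)
```

Exposing the method through `serializers.Field(source='funding_amount_with_currency')` then gives API clients a display-ready label for any of the extended IATI currencies without changing the stored decimal value.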
{"golden_diff": "diff --git a/akvo/rest/serializers/partnership.py b/akvo/rest/serializers/partnership.py\n--- a/akvo/rest/serializers/partnership.py\n+++ b/akvo/rest/serializers/partnership.py\n@@ -23,6 +23,7 @@\n organisation_show_link = serializers.Field(source='organisation_show_link')\n partner_type = serializers.Field(source='iati_role_to_partner_type')\n organisation_role_label = serializers.Field(source='iati_organisation_role_label')\n+ funding_amount_label = serializers.Field(source='funding_amount_with_currency')\n \n \n class PartnershipBasicSerializer(BaseRSRSerializer):\ndiff --git a/akvo/rsr/models/partnership.py b/akvo/rsr/models/partnership.py\n--- a/akvo/rsr/models/partnership.py\n+++ b/akvo/rsr/models/partnership.py\n@@ -167,6 +167,12 @@\n self.organisation.name)\n return ''\n \n+ def funding_amount_with_currency(self):\n+ \"\"\"Returns the funding amount, prepended by the project's currency.\"\"\"\n+ if self.funding_amount and self.project and self.project.currency:\n+ return u'{0} {1}'.format(self.project.currency, self.funding_amount)\n+ return self.funding_amount\n+\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'project partner')\n", "issue": "Move to IATI currencies\nCurrently, there's only 2 options for a currency in RSR: Euros and US Dollars. However, with the introduction of IATI fields we have also incorporated the IATI codelists for some fields that allow the list of IATI currencies including all possible currencies. We should extend the currencies of RSR, keeping in mind:\n- [x] Project editor should allow to select all possible currencies and display it for any 'currency' field.\n- [x] Show the currency code (e.g. EUR, USD, YEN) everywhere:\n - [x] Projects list\n - [x] Project page (summary, full report and finance tabs)\n - [x] Organisation page\n- [x] All fields in the project editor should respond to the currency selection. 
When you switch to a different currency, this should happen for all fields that use a currency in the editor.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\n\nclass Partnership(models.Model):\n # the old way\n FIELD_PARTNER = u'field'\n FUNDING_PARTNER = u'funding'\n SPONSOR_PARTNER = u'sponsor'\n SUPPORT_PARTNER = u'support'\n EXTENDING_PARTNER = u'extending'\n\n PARTNER_TYPE_LIST = [\n FIELD_PARTNER, FUNDING_PARTNER, SPONSOR_PARTNER, SUPPORT_PARTNER, EXTENDING_PARTNER\n ]\n PARTNER_LABELS = [\n _(u'Implementing partner'),\n _(u'Funding partner'),\n _(u'Sponsor partner'),\n _(u'Accountable partner'),\n _(u'Extending partner'),\n ]\n PARTNER_TYPES = zip(PARTNER_TYPE_LIST, PARTNER_LABELS)\n\n # the new way\n IATI_FUNDING_PARTNER = 1\n IATI_ACCOUNTABLE_PARTNER = 2\n IATI_EXTENDING_PARTNER = 3\n IATI_IMPLEMENTING_PARTNER = 4\n AKVO_SPONSOR_PARTNER = 100 # not part of the IATI OrganisationRole codelist!\n IATI_REPORTING_ORGANISATION = 101\n\n # make sure the AKVO_SPONSOR_PARTNER is last in the list\n IATI_ROLE_LIST = [\n IATI_FUNDING_PARTNER, IATI_ACCOUNTABLE_PARTNER, IATI_EXTENDING_PARTNER,\n IATI_IMPLEMENTING_PARTNER, AKVO_SPONSOR_PARTNER, IATI_REPORTING_ORGANISATION\n ]\n IATI_ROLE_LABELS = [\n _(u'Funding partner'),\n _(u'Accountable partner'),\n _(u'Extending partner'),\n _(u'Implementing partner'),\n _(u'Sponsor partner'),\n _(u'Reporting organisation'),\n ]\n IATI_ROLES = zip(IATI_ROLE_LIST, IATI_ROLE_LABELS)\n\n # used when migrating\n PARTNER_TYPES_TO_ROLES_MAP = {\n FUNDING_PARTNER: IATI_FUNDING_PARTNER,\n SUPPORT_PARTNER: IATI_ACCOUNTABLE_PARTNER,\n FIELD_PARTNER: IATI_IMPLEMENTING_PARTNER,\n SPONSOR_PARTNER: AKVO_SPONSOR_PARTNER,\n }\n\n # backwards compatibility\n ROLES_TO_PARTNER_TYPES_MAP = {\n IATI_FUNDING_PARTNER: FUNDING_PARTNER,\n IATI_ACCOUNTABLE_PARTNER: SUPPORT_PARTNER,\n IATI_EXTENDING_PARTNER: EXTENDING_PARTNER,\n IATI_IMPLEMENTING_PARTNER: FIELD_PARTNER,\n AKVO_SPONSOR_PARTNER: SPONSOR_PARTNER,\n # TODO: not backwards compatible\n IATI_REPORTING_ORGANISATION: u''\n }\n\n ALLIANCE_PARTNER = u'alliance'\n KNOWLEDGE_PARTNER = u'knowledge'\n NETWORK_PARTNER = u'network'\n\n PARTNER_TYPE_EXTRAS_LIST = (ALLIANCE_PARTNER, KNOWLEDGE_PARTNER, NETWORK_PARTNER)\n PARTNER_TYPE_EXTRA_LABELS = (\n _(u'Alliance'),\n _(u'Knowledge'),\n _(u'Network')\n )\n\n PARTNER_TYPE_EXTRAS = zip(PARTNER_TYPE_EXTRAS_LIST, PARTNER_TYPE_EXTRA_LABELS)\n\n organisation = models.ForeignKey(\n 'Organisation', verbose_name=_(u'organisation'), related_name='partnerships', null=True,\n blank=True,\n help_text=_(u'Select an organisation that is taking an active role in the project.')\n )\n project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='partnerships')\n iati_organisation_role = models.PositiveSmallIntegerField(\n _(u'organisation role'), choices=IATI_ROLES, db_index=True, null=True, blank=True,\n help_text=_(u'Select the role of the organisation within the project:<br/>'\n u'- Funding organisation: a government or organisation that provides funds to '\n u'the project<br/>'\n u'- Implementing organisation: an organisation 
involved in carrying out the '\n u'activity or intervention<br/>'\n u'- Accountable organisation: an organisation responsible for oversight of '\n u'the project and its outcomes<br/>'\n u'- Extending organisation: an organisation that manages the budget and '\n u'direction of a project on behalf of the funding organisation<br/>'\n u'- Reporting organisation: an organisation that will report this project in '\n u'an IATI file')\n )\n # is_secondary_reporter is only used when the iati_organisation_role is set to\n # IATI_REPORTING_ORGANISATION, thus the use of NullBooleanField\n is_secondary_reporter = models.NullBooleanField(\n _(u'secondary reporter'),\n help_text=_(\n u'This indicates whether the reporting organisation is a secondary publisher: '\n u'publishing data for which it is not directly responsible.'\n )\n )\n funding_amount = models.DecimalField(\n _(u'funding amount'), max_digits=14, decimal_places=2, blank=True, null=True, db_index=True,\n help_text=_(u'It\u2019s only possible to indicate a funding amount for funding partners. Use a '\n u'period to denote decimals.')\n )\n partner_type_extra = ValidXMLCharField(\n _(u'partner type extra'), max_length=30, blank=True, null=True, choices=PARTNER_TYPE_EXTRAS,\n help_text=_(u'RSR specific partner type.')\n )\n iati_activity_id = ValidXMLCharField(\n _(u'IATI activity ID'), max_length=100, blank=True, null=True, db_index=True,\n help_text=_(u'A valid activity identifier published by the participating organisation '\n u'which points to the activity that it has published to IATI that describes '\n u'its role in this activity.')\n )\n internal_id = ValidXMLCharField(\n _(u'Internal ID'), max_length=75, blank=True, null=True, db_index=True,\n help_text=_(u'This field can be used to indicate an internal identifier that is used by '\n u'the organisation for this project. (75 characters)')\n )\n iati_url = models.URLField(\n blank=True,\n help_text=_(\n u'Please enter the URL for where the IATI Activity Id Funding details are published. '\n u'For projects directly or indirectly funded by the Dutch Government, this should '\n u'be the OpenAid.nl page. 
For other projects, an alternative URL can be used.'\n )\n )\n related_activity_id = ValidXMLCharField(\n _(u'related IATI activity ID'), max_length=100, blank=True\n )\n\n def iati_organisation_role_label(self):\n if self.iati_organisation_role:\n return dict(self.IATI_ROLES).get(self.iati_organisation_role)\n else:\n return ''\n\n def iati_role_to_partner_type(self):\n if self.iati_organisation_role:\n return dict(self.ROLES_TO_PARTNER_TYPES_MAP).get(int(self.iati_organisation_role))\n else:\n return None\n\n def organisation_show_link(self):\n if self.organisation:\n return u'<a href=\"{0}\">{1}</a>'.format(self.organisation.get_absolute_url(),\n self.organisation.long_name or\n self.organisation.name)\n return ''\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'project partner')\n verbose_name_plural = _(u'project partners')\n ordering = ['iati_organisation_role']\n\n def __unicode__(self):\n if self.organisation:\n if self.organisation.name:\n organisation_unicode = self.organisation.name\n elif self.organisation.long_name:\n organisation_unicode = self.organisation.long_name\n else:\n organisation_unicode = u'%s' % _(u'Organisation name not specified')\n else:\n organisation_unicode = u'%s' % _(u'Organisation not specified')\n\n if self.iati_organisation_role:\n organisation_unicode += u' ({})'.format(\n unicode(dict(self.IATI_ROLES)[self.iati_organisation_role])\n )\n return organisation_unicode\n\n def clean(self):\n # Don't allow multiple reporting organisations\n if self.iati_organisation_role == self.IATI_REPORTING_ORGANISATION:\n reporting_orgs = self.project.partnerships.filter(\n iati_organisation_role=self.IATI_REPORTING_ORGANISATION\n )\n\n if reporting_orgs.count() > 1:\n raise ValidationError(\n {'iati_organisation_role': u'%s' % _(u'Project can only have one reporting '\n u'organisation')}\n )\n\n def save(self, *args, **kwargs):\n super(Partnership, self).save(*args, **kwargs)\n self.set_primary_organisation()\n\n def delete(self, *args, **kwargs):\n super(Partnership, self).delete(*args, **kwargs)\n self.set_primary_organisation()\n\n def set_primary_organisation(self):\n # Check which organisation should be set to the primary organisation of the project\n # This is done to get better performance on the project list page\n self.project.primary_organisation = self.project.find_primary_organisation()\n self.project.save(update_fields=['primary_organisation'])\n", "path": "akvo/rsr/models/partnership.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom rest_framework import serializers\n\nfrom akvo.rsr.models import Partnership\n\nfrom akvo.rest.serializers.organisation import OrganisationBasicSerializer\nfrom akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\n\n\nclass PartnershipRawSerializer(BaseRSRSerializer):\n\n class Meta:\n model = Partnership\n\n\nclass PartnershipSerializer(PartnershipRawSerializer):\n\n organisation_show_link = serializers.Field(source='organisation_show_link')\n partner_type = serializers.Field(source='iati_role_to_partner_type')\n organisation_role_label = serializers.Field(source='iati_organisation_role_label')\n\n\nclass PartnershipBasicSerializer(BaseRSRSerializer):\n\n organisation = OrganisationBasicSerializer(source='organisation')\n iati_organisation_role_label 
= serializers.Field(source='iati_organisation_role_label')\n\n class Meta:\n model = Partnership\n fields = (\n 'id',\n 'project',\n 'organisation',\n 'iati_organisation_role',\n 'iati_organisation_role_label',\n )\n", "path": "akvo/rest/serializers/partnership.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\n\nclass Partnership(models.Model):\n # the old way\n FIELD_PARTNER = u'field'\n FUNDING_PARTNER = u'funding'\n SPONSOR_PARTNER = u'sponsor'\n SUPPORT_PARTNER = u'support'\n EXTENDING_PARTNER = u'extending'\n\n PARTNER_TYPE_LIST = [\n FIELD_PARTNER, FUNDING_PARTNER, SPONSOR_PARTNER, SUPPORT_PARTNER, EXTENDING_PARTNER\n ]\n PARTNER_LABELS = [\n _(u'Implementing partner'),\n _(u'Funding partner'),\n _(u'Sponsor partner'),\n _(u'Accountable partner'),\n _(u'Extending partner'),\n ]\n PARTNER_TYPES = zip(PARTNER_TYPE_LIST, PARTNER_LABELS)\n\n # the new way\n IATI_FUNDING_PARTNER = 1\n IATI_ACCOUNTABLE_PARTNER = 2\n IATI_EXTENDING_PARTNER = 3\n IATI_IMPLEMENTING_PARTNER = 4\n AKVO_SPONSOR_PARTNER = 100 # not part of the IATI OrganisationRole codelist!\n IATI_REPORTING_ORGANISATION = 101\n\n # make sure the AKVO_SPONSOR_PARTNER is last in the list\n IATI_ROLE_LIST = [\n IATI_FUNDING_PARTNER, IATI_ACCOUNTABLE_PARTNER, IATI_EXTENDING_PARTNER,\n IATI_IMPLEMENTING_PARTNER, AKVO_SPONSOR_PARTNER, IATI_REPORTING_ORGANISATION\n ]\n IATI_ROLE_LABELS = [\n _(u'Funding partner'),\n _(u'Accountable partner'),\n _(u'Extending partner'),\n _(u'Implementing partner'),\n _(u'Sponsor partner'),\n _(u'Reporting organisation'),\n ]\n IATI_ROLES = zip(IATI_ROLE_LIST, IATI_ROLE_LABELS)\n\n # used when migrating\n PARTNER_TYPES_TO_ROLES_MAP = {\n FUNDING_PARTNER: IATI_FUNDING_PARTNER,\n SUPPORT_PARTNER: IATI_ACCOUNTABLE_PARTNER,\n FIELD_PARTNER: IATI_IMPLEMENTING_PARTNER,\n SPONSOR_PARTNER: AKVO_SPONSOR_PARTNER,\n }\n\n # backwards compatibility\n ROLES_TO_PARTNER_TYPES_MAP = {\n IATI_FUNDING_PARTNER: FUNDING_PARTNER,\n IATI_ACCOUNTABLE_PARTNER: SUPPORT_PARTNER,\n IATI_EXTENDING_PARTNER: EXTENDING_PARTNER,\n IATI_IMPLEMENTING_PARTNER: FIELD_PARTNER,\n AKVO_SPONSOR_PARTNER: SPONSOR_PARTNER,\n # TODO: not backwards compatible\n IATI_REPORTING_ORGANISATION: u''\n }\n\n ALLIANCE_PARTNER = u'alliance'\n KNOWLEDGE_PARTNER = u'knowledge'\n NETWORK_PARTNER = u'network'\n\n PARTNER_TYPE_EXTRAS_LIST = (ALLIANCE_PARTNER, KNOWLEDGE_PARTNER, NETWORK_PARTNER)\n PARTNER_TYPE_EXTRA_LABELS = (\n _(u'Alliance'),\n _(u'Knowledge'),\n _(u'Network')\n )\n\n PARTNER_TYPE_EXTRAS = zip(PARTNER_TYPE_EXTRAS_LIST, PARTNER_TYPE_EXTRA_LABELS)\n\n organisation = models.ForeignKey(\n 'Organisation', verbose_name=_(u'organisation'), related_name='partnerships', null=True,\n blank=True,\n help_text=_(u'Select an organisation that is taking an active role in the project.')\n )\n project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='partnerships')\n iati_organisation_role = models.PositiveSmallIntegerField(\n _(u'organisation role'), choices=IATI_ROLES, db_index=True, null=True, blank=True,\n help_text=_(u'Select the role of the organisation within the project:<br/>'\n 
u'- Funding organisation: a government or organisation that provides funds to '\n u'the project<br/>'\n u'- Implementing organisation: an organisation involved in carrying out the '\n u'activity or intervention<br/>'\n u'- Accountable organisation: an organisation responsible for oversight of '\n u'the project and its outcomes<br/>'\n u'- Extending organisation: an organisation that manages the budget and '\n u'direction of a project on behalf of the funding organisation<br/>'\n u'- Reporting organisation: an organisation that will report this project in '\n u'an IATI file')\n )\n # is_secondary_reporter is only used when the iati_organisation_role is set to\n # IATI_REPORTING_ORGANISATION, thus the use of NullBooleanField\n is_secondary_reporter = models.NullBooleanField(\n _(u'secondary reporter'),\n help_text=_(\n u'This indicates whether the reporting organisation is a secondary publisher: '\n u'publishing data for which it is not directly responsible.'\n )\n )\n funding_amount = models.DecimalField(\n _(u'funding amount'), max_digits=14, decimal_places=2, blank=True, null=True, db_index=True,\n help_text=_(u'It\u2019s only possible to indicate a funding amount for funding partners. Use a '\n u'period to denote decimals.')\n )\n partner_type_extra = ValidXMLCharField(\n _(u'partner type extra'), max_length=30, blank=True, null=True, choices=PARTNER_TYPE_EXTRAS,\n help_text=_(u'RSR specific partner type.')\n )\n iati_activity_id = ValidXMLCharField(\n _(u'IATI activity ID'), max_length=100, blank=True, null=True, db_index=True,\n help_text=_(u'A valid activity identifier published by the participating organisation '\n u'which points to the activity that it has published to IATI that describes '\n u'its role in this activity.')\n )\n internal_id = ValidXMLCharField(\n _(u'Internal ID'), max_length=75, blank=True, null=True, db_index=True,\n help_text=_(u'This field can be used to indicate an internal identifier that is used by '\n u'the organisation for this project. (75 characters)')\n )\n iati_url = models.URLField(\n blank=True,\n help_text=_(\n u'Please enter the URL for where the IATI Activity Id Funding details are published. '\n u'For projects directly or indirectly funded by the Dutch Government, this should '\n u'be the OpenAid.nl page. 
For other projects, an alternative URL can be used.'\n )\n )\n related_activity_id = ValidXMLCharField(\n _(u'related IATI activity ID'), max_length=100, blank=True\n )\n\n def iati_organisation_role_label(self):\n if self.iati_organisation_role:\n return dict(self.IATI_ROLES).get(self.iati_organisation_role)\n else:\n return ''\n\n def iati_role_to_partner_type(self):\n if self.iati_organisation_role:\n return dict(self.ROLES_TO_PARTNER_TYPES_MAP).get(int(self.iati_organisation_role))\n else:\n return None\n\n def organisation_show_link(self):\n if self.organisation:\n return u'<a href=\"{0}\">{1}</a>'.format(self.organisation.get_absolute_url(),\n self.organisation.long_name or\n self.organisation.name)\n return ''\n\n def funding_amount_with_currency(self):\n \"\"\"Returns the funding amount, prepended by the project's currency.\"\"\"\n if self.funding_amount and self.project and self.project.currency:\n return u'{0} {1}'.format(self.project.currency, self.funding_amount)\n return self.funding_amount\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'project partner')\n verbose_name_plural = _(u'project partners')\n ordering = ['iati_organisation_role']\n\n def __unicode__(self):\n if self.organisation:\n if self.organisation.name:\n organisation_unicode = self.organisation.name\n elif self.organisation.long_name:\n organisation_unicode = self.organisation.long_name\n else:\n organisation_unicode = u'%s' % _(u'Organisation name not specified')\n else:\n organisation_unicode = u'%s' % _(u'Organisation not specified')\n\n if self.iati_organisation_role:\n organisation_unicode += u' ({})'.format(\n unicode(dict(self.IATI_ROLES)[self.iati_organisation_role])\n )\n return organisation_unicode\n\n def clean(self):\n # Don't allow multiple reporting organisations\n if self.iati_organisation_role == self.IATI_REPORTING_ORGANISATION:\n reporting_orgs = self.project.partnerships.filter(\n iati_organisation_role=self.IATI_REPORTING_ORGANISATION\n )\n\n if reporting_orgs.count() > 1:\n raise ValidationError(\n {'iati_organisation_role': u'%s' % _(u'Project can only have one reporting '\n u'organisation')}\n )\n\n def save(self, *args, **kwargs):\n super(Partnership, self).save(*args, **kwargs)\n self.set_primary_organisation()\n\n def delete(self, *args, **kwargs):\n super(Partnership, self).delete(*args, **kwargs)\n self.set_primary_organisation()\n\n def set_primary_organisation(self):\n # Check which organisation should be set to the primary organisation of the project\n # This is done to get better performance on the project list page\n self.project.primary_organisation = self.project.find_primary_organisation()\n self.project.save(update_fields=['primary_organisation'])\n", "path": "akvo/rsr/models/partnership.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom rest_framework import serializers\n\nfrom akvo.rsr.models import Partnership\n\nfrom akvo.rest.serializers.organisation import OrganisationBasicSerializer\nfrom akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\n\n\nclass PartnershipRawSerializer(BaseRSRSerializer):\n\n class Meta:\n model = Partnership\n\n\nclass PartnershipSerializer(PartnershipRawSerializer):\n\n organisation_show_link = serializers.Field(source='organisation_show_link')\n partner_type = 
serializers.Field(source='iati_role_to_partner_type')\n organisation_role_label = serializers.Field(source='iati_organisation_role_label')\n funding_amount_label = serializers.Field(source='funding_amount_with_currency')\n\n\nclass PartnershipBasicSerializer(BaseRSRSerializer):\n\n organisation = OrganisationBasicSerializer(source='organisation')\n iati_organisation_role_label = serializers.Field(source='iati_organisation_role_label')\n\n class Meta:\n model = Partnership\n fields = (\n 'id',\n 'project',\n 'organisation',\n 'iati_organisation_role',\n 'iati_organisation_role_label',\n )\n", "path": "akvo/rest/serializers/partnership.py"}]}
3,529
306
gh_patches_debug_27449
rasdani/github-patches
git_diff
mathesar-foundation__mathesar-321
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Function to handle deleting schemas **Problem** <!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.--> Users might want to delete schemas. We don't currently support this. **Proposed solution** <!-- A clear and concise description of your proposed solution or feature. --> A function that handles deleting of schemas in the database. We should raise an error if there is anything outside of the schema referencing the schema. **Additional context** <!-- Add any other context or screenshots about the feature request here.--> This should be in the `db` module. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `db/schemas.py` Content: ``` 1 import logging 2 import warnings 3 from sqlalchemy.schema import CreateSchema 4 from sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table 5 6 from db import types 7 8 logger = logging.getLogger(__name__) 9 10 TYPES_SCHEMA = types.base.SCHEMA 11 12 EXCLUDED_SCHEMATA = [TYPES_SCHEMA, "information_schema"] 13 14 15 def get_schema_name_from_oid(oid, engine): 16 return reflect_schema(engine, oid=oid)["name"] 17 18 19 def get_schema_oid_from_name(name, engine): 20 return reflect_schema(engine, name=name)["oid"] 21 22 23 def reflect_schema(engine, name=None, oid=None): 24 # If we have both arguments, the behavior is undefined. 25 try: 26 assert name is None or oid is None 27 except AssertionError as e: 28 logger.error("ERROR: Only one of 'name' or 'oid' can be given!") 29 raise e 30 metadata = MetaData() 31 with warnings.catch_warnings(): 32 warnings.filterwarnings("ignore", message="Did not recognize type") 33 pg_namespace = Table("pg_namespace", metadata, autoload_with=engine) 34 sel = ( 35 select(pg_namespace.c.oid, pg_namespace.c.nspname.label("name")) 36 .where(or_(pg_namespace.c.nspname == name, pg_namespace.c.oid == oid)) 37 ) 38 with engine.begin() as conn: 39 schema_info = conn.execute(sel).fetchone() 40 return schema_info 41 42 43 def get_mathesar_schemas(engine): 44 return [schema for schema, _ in get_mathesar_schemas_with_oids(engine)] 45 46 47 def get_mathesar_schemas_with_oids(engine): 48 metadata = MetaData() 49 with warnings.catch_warnings(): 50 warnings.filterwarnings("ignore", message="Did not recognize type") 51 pg_namespace = Table("pg_namespace", metadata, autoload_with=engine) 52 sel = ( 53 select(pg_namespace.c.nspname.label('schema'), pg_namespace.c.oid) 54 .where( 55 and_( 56 *[pg_namespace.c.nspname != schema for schema in EXCLUDED_SCHEMATA], 57 not_(pg_namespace.c.nspname.like("pg_%")) 58 ) 59 ) 60 ) 61 with engine.begin() as conn: 62 result = conn.execute(sel).fetchall() 63 return result 64 65 66 def get_all_schemas(engine): 67 inspector = inspect(engine) 68 # We don't need to exclude system schemas (i.e., starting with "pg_") 69 # since Inspector.get_schema_names already excludes them. Thus, this 70 # function actually gets all non-pg-reserved schemas. 71 return inspector.get_schema_names() 72 73 74 def create_schema(schema, engine): 75 """ 76 This method creates a Postgres schema. 
77 """ 78 if schema not in get_all_schemas(engine): 79 with engine.begin() as connection: 80 connection.execute(CreateSchema(schema)) 81 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
```diff
diff --git a/db/schemas.py b/db/schemas.py
--- a/db/schemas.py
+++ b/db/schemas.py
@@ -1,7 +1,11 @@
 import logging
 import warnings
-from sqlalchemy.schema import CreateSchema
+from sqlalchemy.schema import CreateSchema, DropSchema
 from sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table
+from sqlalchemy.exc import InternalError
+from sqlalchemy.schema import DDLElement
+from sqlalchemy.ext import compiler
+from psycopg2.errors import DependentObjectsStillExist
 
 from db import types
 
@@ -78,3 +82,42 @@
     if schema not in get_all_schemas(engine):
         with engine.begin() as connection:
             connection.execute(CreateSchema(schema))
+
+
+def delete_schema(schema, engine, cascade=False, if_exists=False):
+    """
+    This method deletes a Postgres schema.
+    """
+    if if_exists and schema not in get_all_schemas(engine):
+        return
+
+    with engine.begin() as connection:
+        try:
+            connection.execute(DropSchema(schema, cascade=cascade))
+        except InternalError as e:
+            if isinstance(e.orig, DependentObjectsStillExist):
+                raise e.orig
+            else:
+                raise e
+
+
+class RenameSchema(DDLElement):
+    def __init__(self, schema, rename_to):
+        self.schema = schema
+        self.rename_to = rename_to
+
+
+@compiler.compiles(RenameSchema)
+def compile_rename_schema(element, compiler, **_):
+    return "ALTER SCHEMA %s RENAME TO %s" % (
+        element.schema,
+        element.rename_to
+    )
+
+
+def rename_schema(schema, engine, rename_to):
+    """
+    This method renames a Postgres schema.
+    """
+    with engine.begin() as connection:
+        connection.execute(RenameSchema(schema, rename_to))
```
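For orientation, a minimal usage sketch of the two new helpers; the connection string and schema names below are placeholders, not part of the patch:

```python
# A usage sketch under assumed names; requires a reachable Postgres instance.
from sqlalchemy import create_engine

from db.schemas import create_schema, delete_schema, rename_schema

engine = create_engine('postgresql://user:password@localhost/mathesar')  # placeholder DSN

create_schema('staging', engine)
rename_schema('staging', engine, rename_to='staging_v2')

# Without cascade=True, Postgres refuses to drop a schema that still has
# dependent objects, and delete_schema re-raises that error directly.
delete_schema('staging_v2', engine, cascade=False, if_exists=True)
```

Re-raising `e.orig` surfaces the psycopg2 `DependentObjectsStillExist` error to the caller, which is exactly the behaviour the issue asks for when anything still references the schema.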
{"golden_diff": "diff --git a/db/schemas.py b/db/schemas.py\n--- a/db/schemas.py\n+++ b/db/schemas.py\n@@ -1,7 +1,11 @@\n import logging\n import warnings\n-from sqlalchemy.schema import CreateSchema\n+from sqlalchemy.schema import CreateSchema, DropSchema\n from sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table\n+from sqlalchemy.exc import InternalError\n+from sqlalchemy.schema import DDLElement\n+from sqlalchemy.ext import compiler\n+from psycopg2.errors import DependentObjectsStillExist\n \n from db import types\n \n@@ -78,3 +82,42 @@\n if schema not in get_all_schemas(engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n+\n+\n+def delete_schema(schema, engine, cascade=False, if_exists=False):\n+ \"\"\"\n+ This method deletes a Postgres schema.\n+ \"\"\"\n+ if if_exists and schema not in get_all_schemas(engine):\n+ return\n+\n+ with engine.begin() as connection:\n+ try:\n+ connection.execute(DropSchema(schema, cascade=cascade))\n+ except InternalError as e:\n+ if isinstance(e.orig, DependentObjectsStillExist):\n+ raise e.orig\n+ else:\n+ raise e\n+\n+\n+class RenameSchema(DDLElement):\n+ def __init__(self, schema, rename_to):\n+ self.schema = schema\n+ self.rename_to = rename_to\n+\n+\[email protected](RenameSchema)\n+def compile_rename_schema(element, compiler, **_):\n+ return \"ALTER SCHEMA %s RENAME TO %s\" % (\n+ element.schema,\n+ element.rename_to\n+ )\n+\n+\n+def rename_schema(schema, engine, rename_to):\n+ \"\"\"\n+ This method renames a Postgres schema.\n+ \"\"\"\n+ with engine.begin() as connection:\n+ connection.execute(RenameSchema(schema, rename_to))\n", "issue": "Function to handle deleting schemas\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nUsers might want to delete schemas. We don't currently support this.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nA function that handles deleting of schemas in the database. 
We should raise an error if there is anything outside of the schema referencing the schema.\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\nThis should be in the `db` module.\n", "before_files": [{"content": "import logging\nimport warnings\nfrom sqlalchemy.schema import CreateSchema\nfrom sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table\n\nfrom db import types\n\nlogger = logging.getLogger(__name__)\n\nTYPES_SCHEMA = types.base.SCHEMA\n\nEXCLUDED_SCHEMATA = [TYPES_SCHEMA, \"information_schema\"]\n\n\ndef get_schema_name_from_oid(oid, engine):\n return reflect_schema(engine, oid=oid)[\"name\"]\n\n\ndef get_schema_oid_from_name(name, engine):\n return reflect_schema(engine, name=name)[\"oid\"]\n\n\ndef reflect_schema(engine, name=None, oid=None):\n # If we have both arguments, the behavior is undefined.\n try:\n assert name is None or oid is None\n except AssertionError as e:\n logger.error(\"ERROR: Only one of 'name' or 'oid' can be given!\")\n raise e\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_namespace = Table(\"pg_namespace\", metadata, autoload_with=engine)\n sel = (\n select(pg_namespace.c.oid, pg_namespace.c.nspname.label(\"name\"))\n .where(or_(pg_namespace.c.nspname == name, pg_namespace.c.oid == oid))\n )\n with engine.begin() as conn:\n schema_info = conn.execute(sel).fetchone()\n return schema_info\n\n\ndef get_mathesar_schemas(engine):\n return [schema for schema, _ in get_mathesar_schemas_with_oids(engine)]\n\n\ndef get_mathesar_schemas_with_oids(engine):\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_namespace = Table(\"pg_namespace\", metadata, autoload_with=engine)\n sel = (\n select(pg_namespace.c.nspname.label('schema'), pg_namespace.c.oid)\n .where(\n and_(\n *[pg_namespace.c.nspname != schema for schema in EXCLUDED_SCHEMATA],\n not_(pg_namespace.c.nspname.like(\"pg_%\"))\n )\n )\n )\n with engine.begin() as conn:\n result = conn.execute(sel).fetchall()\n return result\n\n\ndef get_all_schemas(engine):\n inspector = inspect(engine)\n # We don't need to exclude system schemas (i.e., starting with \"pg_\")\n # since Inspector.get_schema_names already excludes them. 
Thus, this\n # function actually gets all non-pg-reserved schemas.\n return inspector.get_schema_names()\n\n\ndef create_schema(schema, engine):\n \"\"\"\n This method creates a Postgres schema.\n \"\"\"\n if schema not in get_all_schemas(engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n", "path": "db/schemas.py"}], "after_files": [{"content": "import logging\nimport warnings\nfrom sqlalchemy.schema import CreateSchema, DropSchema\nfrom sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table\nfrom sqlalchemy.exc import InternalError\nfrom sqlalchemy.schema import DDLElement\nfrom sqlalchemy.ext import compiler\nfrom psycopg2.errors import DependentObjectsStillExist\n\nfrom db import types\n\nlogger = logging.getLogger(__name__)\n\nTYPES_SCHEMA = types.base.SCHEMA\n\nEXCLUDED_SCHEMATA = [TYPES_SCHEMA, \"information_schema\"]\n\n\ndef get_schema_name_from_oid(oid, engine):\n return reflect_schema(engine, oid=oid)[\"name\"]\n\n\ndef get_schema_oid_from_name(name, engine):\n return reflect_schema(engine, name=name)[\"oid\"]\n\n\ndef reflect_schema(engine, name=None, oid=None):\n # If we have both arguments, the behavior is undefined.\n try:\n assert name is None or oid is None\n except AssertionError as e:\n logger.error(\"ERROR: Only one of 'name' or 'oid' can be given!\")\n raise e\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_namespace = Table(\"pg_namespace\", metadata, autoload_with=engine)\n sel = (\n select(pg_namespace.c.oid, pg_namespace.c.nspname.label(\"name\"))\n .where(or_(pg_namespace.c.nspname == name, pg_namespace.c.oid == oid))\n )\n with engine.begin() as conn:\n schema_info = conn.execute(sel).fetchone()\n return schema_info\n\n\ndef get_mathesar_schemas(engine):\n return [schema for schema, _ in get_mathesar_schemas_with_oids(engine)]\n\n\ndef get_mathesar_schemas_with_oids(engine):\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_namespace = Table(\"pg_namespace\", metadata, autoload_with=engine)\n sel = (\n select(pg_namespace.c.nspname.label('schema'), pg_namespace.c.oid)\n .where(\n and_(\n *[pg_namespace.c.nspname != schema for schema in EXCLUDED_SCHEMATA],\n not_(pg_namespace.c.nspname.like(\"pg_%\"))\n )\n )\n )\n with engine.begin() as conn:\n result = conn.execute(sel).fetchall()\n return result\n\n\ndef get_all_schemas(engine):\n inspector = inspect(engine)\n # We don't need to exclude system schemas (i.e., starting with \"pg_\")\n # since Inspector.get_schema_names already excludes them. 
Thus, this\n # function actually gets all non-pg-reserved schemas.\n return inspector.get_schema_names()\n\n\ndef create_schema(schema, engine):\n \"\"\"\n This method creates a Postgres schema.\n \"\"\"\n if schema not in get_all_schemas(engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n\n\ndef delete_schema(schema, engine, cascade=False, if_exists=False):\n \"\"\"\n This method deletes a Postgres schema.\n \"\"\"\n if if_exists and schema not in get_all_schemas(engine):\n return\n\n with engine.begin() as connection:\n try:\n connection.execute(DropSchema(schema, cascade=cascade))\n except InternalError as e:\n if isinstance(e.orig, DependentObjectsStillExist):\n raise e.orig\n else:\n raise e\n\n\nclass RenameSchema(DDLElement):\n def __init__(self, schema, rename_to):\n self.schema = schema\n self.rename_to = rename_to\n\n\[email protected](RenameSchema)\ndef compile_rename_schema(element, compiler, **_):\n return \"ALTER SCHEMA %s RENAME TO %s\" % (\n element.schema,\n element.rename_to\n )\n\n\ndef rename_schema(schema, engine, rename_to):\n \"\"\"\n This method renames a Postgres schema.\n \"\"\"\n with engine.begin() as connection:\n connection.execute(RenameSchema(schema, rename_to))\n", "path": "db/schemas.py"}]}
1,125
421
gh_patches_debug_20114
rasdani/github-patches
git_diff
kubeflow__pipelines-5165
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Problems upgrading to TFX 0.27.0 ### What steps did you take: Installed Kubeflow Pipelines on GCP via kustomize manifests. Tried to run the Taxi TFX Demo. ### What happened: On the first step, I got the error "No module named 'tfx.dsl.components'" ### What did you expect to happen: To successfully run the TFX Taxi Demo. ### Environment: How did you deploy Kubeflow Pipelines (KFP)? Via the kustomize manifests in GCP. KFP version: 1.4.0-rc.1 /kind bug /area backend --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py` Content: ``` 1 #!/usr/bin/env python3 2 # Copyright 2019 Google LLC 3 # 4 # Licensed under the Apache License, Version 2.0 (the "License"); 5 # you may not use this file except in compliance with the License. 6 # You may obtain a copy of the License at 7 # 8 # http://www.apache.org/licenses/LICENSE-2.0 9 # 10 # Unless required by applicable law or agreed to in writing, software 11 # distributed under the License is distributed on an "AS IS" BASIS, 12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 15 16 import os 17 18 from typing import Text 19 20 import kfp 21 import tensorflow_model_analysis as tfma 22 from tfx.components.evaluator.component import Evaluator 23 from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen 24 from tfx.components.example_validator.component import ExampleValidator 25 from tfx.components.pusher.component import Pusher 26 from tfx.components.schema_gen.component import SchemaGen 27 from tfx.components.statistics_gen.component import StatisticsGen 28 from tfx.components.trainer.component import Trainer 29 from tfx.components.transform.component import Transform 30 from tfx.orchestration import data_types 31 from tfx.orchestration import pipeline 32 from tfx.orchestration.kubeflow import kubeflow_dag_runner 33 from tfx.utils.dsl_utils import external_input 34 from tfx.proto import pusher_pb2 35 from tfx.proto import trainer_pb2 36 37 # Define pipeline params used for pipeline execution. 38 # Path to the module file, should be a GCS path, 39 # or a module file baked in the docker image used by the pipeline. 40 _taxi_module_file_param = data_types.RuntimeParameter( 41 name='module-file', 42 default='/tfx-src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py', 43 ptype=Text, 44 ) 45 46 # Path to the CSV data file, under which their should be a data.csv file. 47 _data_root_param = data_types.RuntimeParameter( 48 name='data-root', 49 default='gs://ml-pipeline/sample-data/chicago-taxi/data', 50 ptype=Text, 51 ) 52 53 # Path of pipeline root, should be a GCS path. 54 pipeline_root = os.path.join( 55 'gs://{{kfp-default-bucket}}', 'tfx_taxi_simple', kfp.dsl.RUN_ID_PLACEHOLDER 56 ) 57 58 59 def _create_pipeline( 60 pipeline_root: Text, csv_input_location: data_types.RuntimeParameter, 61 taxi_module_file: data_types.RuntimeParameter, enable_cache: bool 62 ): 63 """Creates a simple Kubeflow-based Chicago Taxi TFX pipeline. 64 65 Args: 66 pipeline_root: The root of the pipeline output. 67 csv_input_location: The location of the input data directory. 68 taxi_module_file: The location of the module file for Transform/Trainer. 
69 enable_cache: Whether to enable cache or not. 70 71 Returns: 72 A logical TFX pipeline.Pipeline object. 73 """ 74 examples = external_input(csv_input_location) 75 76 example_gen = CsvExampleGen(input=examples) 77 statistics_gen = StatisticsGen(examples=example_gen.outputs['examples']) 78 infer_schema = SchemaGen( 79 statistics=statistics_gen.outputs['statistics'], 80 infer_feature_shape=False, 81 ) 82 validate_stats = ExampleValidator( 83 statistics=statistics_gen.outputs['statistics'], 84 schema=infer_schema.outputs['schema'], 85 ) 86 transform = Transform( 87 examples=example_gen.outputs['examples'], 88 schema=infer_schema.outputs['schema'], 89 module_file=taxi_module_file, 90 ) 91 trainer = Trainer( 92 module_file=taxi_module_file, 93 transformed_examples=transform.outputs['transformed_examples'], 94 schema=infer_schema.outputs['schema'], 95 transform_graph=transform.outputs['transform_graph'], 96 train_args=trainer_pb2.TrainArgs(num_steps=10), 97 eval_args=trainer_pb2.EvalArgs(num_steps=5), 98 ) 99 # Set the TFMA config for Model Evaluation and Validation. 100 eval_config = tfma.EvalConfig( 101 model_specs=[ 102 # Using signature 'eval' implies the use of an EvalSavedModel. To use 103 # a serving model remove the signature to defaults to 'serving_default' 104 # and add a label_key. 105 tfma.ModelSpec(signature_name='eval') 106 ], 107 metrics_specs=[ 108 tfma.MetricsSpec( 109 # The metrics added here are in addition to those saved with the 110 # model (assuming either a keras model or EvalSavedModel is used). 111 # Any metrics added into the saved model (for example using 112 # model.compile(..., metrics=[...]), etc) will be computed 113 # automatically. 114 metrics=[tfma.MetricConfig(class_name='ExampleCount')], 115 # To add validation thresholds for metrics saved with the model, 116 # add them keyed by metric name to the thresholds map. 117 thresholds={ 118 'binary_accuracy': 119 tfma.MetricThreshold( 120 value_threshold=tfma.GenericValueThreshold( 121 lower_bound={'value': 0.5} 122 ), 123 change_threshold=tfma.GenericChangeThreshold( 124 direction=tfma.MetricDirection.HIGHER_IS_BETTER, 125 absolute={'value': -1e-10} 126 ) 127 ) 128 } 129 ) 130 ], 131 slicing_specs=[ 132 # An empty slice spec means the overall slice, i.e. the whole dataset. 133 tfma.SlicingSpec(), 134 # Data can be sliced along a feature column. In this case, data is 135 # sliced along feature column trip_start_hour. 136 tfma.SlicingSpec(feature_keys=['trip_start_hour']) 137 ] 138 ) 139 140 model_analyzer = Evaluator( 141 examples=example_gen.outputs['examples'], 142 model=trainer.outputs['model'], 143 eval_config=eval_config, 144 ) 145 146 pusher = Pusher( 147 model=trainer.outputs['model'], 148 model_blessing=model_analyzer.outputs['blessing'], 149 push_destination=pusher_pb2.PushDestination( 150 filesystem=pusher_pb2.PushDestination.Filesystem( 151 base_directory=os.path. 
152 join(str(pipeline.ROOT_PARAMETER), 'model_serving') 153 ) 154 ), 155 ) 156 157 return pipeline.Pipeline( 158 pipeline_name='parameterized_tfx_oss', 159 pipeline_root=pipeline_root, 160 components=[ 161 example_gen, statistics_gen, infer_schema, validate_stats, transform, 162 trainer, model_analyzer, pusher 163 ], 164 enable_cache=enable_cache, 165 ) 166 167 168 if __name__ == '__main__': 169 enable_cache = True 170 pipeline = _create_pipeline( 171 pipeline_root, 172 _data_root_param, 173 _taxi_module_file_param, 174 enable_cache=enable_cache, 175 ) 176 # Make sure the version of TFX image used is consistent with the version of 177 # TFX SDK. 178 config = kubeflow_dag_runner.KubeflowDagRunnerConfig( 179 kubeflow_metadata_config=kubeflow_dag_runner. 180 get_default_kubeflow_metadata_config(), 181 tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0', 182 ) 183 kfp_runner = kubeflow_dag_runner.KubeflowDagRunner( 184 output_filename=__file__ + '.yaml', config=config 185 ) 186 187 kfp_runner.run(pipeline) 188 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
```diff
diff --git a/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py b/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py
--- a/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py
+++ b/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py
@@ -39,7 +39,7 @@
 # or a module file baked in the docker image used by the pipeline.
 _taxi_module_file_param = data_types.RuntimeParameter(
     name='module-file',
-    default='/tfx-src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',
+    default='/tfx/src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',
     ptype=Text,
 )
 
@@ -178,7 +178,7 @@
     config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
         kubeflow_metadata_config=kubeflow_dag_runner.
         get_default_kubeflow_metadata_config(),
-        tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',
+        tfx_image='gcr.io/tfx-oss-public/tfx:0.27.0',
     )
     kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(
         output_filename=__file__ + '.yaml', config=config
```
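The root cause is an image/SDK version skew, so beyond hard-pinning 0.27.0 it can help to derive the tag from the installed SDK. A sketch, reusing the sample's own imports; reading `tfx.version.__version__` for the tag is an assumption on my part, not part of the patch, and only works when the installed version maps to a published image tag:

```python
# Keep the runner image in lockstep with the installed TFX SDK.
from tfx import version as tfx_version
from tfx.orchestration.kubeflow import kubeflow_dag_runner

config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
    kubeflow_metadata_config=(
        kubeflow_dag_runner.get_default_kubeflow_metadata_config()),
    # Same image family as before, but the tag can no longer drift from the SDK.
    tfx_image='gcr.io/tfx-oss-public/tfx:{}'.format(tfx_version.__version__),
)
```

The first hunk's path change from `/tfx-src/...` to `/tfx/src/...` follows the same logic: the module-file default has to match where the 0.27.0 image actually bakes the example code, presumably because the source checkout location moved between releases.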
{"golden_diff": "diff --git a/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py b/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py\n--- a/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py\n+++ b/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py\n@@ -39,7 +39,7 @@\n # or a module file baked in the docker image used by the pipeline.\n _taxi_module_file_param = data_types.RuntimeParameter(\n name='module-file',\n- default='/tfx-src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',\n+ default='/tfx/src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',\n ptype=Text,\n )\n \n@@ -178,7 +178,7 @@\n config = kubeflow_dag_runner.KubeflowDagRunnerConfig(\n kubeflow_metadata_config=kubeflow_dag_runner.\n get_default_kubeflow_metadata_config(),\n- tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',\n+ tfx_image='gcr.io/tfx-oss-public/tfx:0.27.0',\n )\n kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(\n output_filename=__file__ + '.yaml', config=config\n", "issue": "Problems upgrading to TFX 0.27.0\n### What steps did you take:\r\nInstalled Kubeflow Pipelines on GCP via kustomize manifests.\r\nTried to run the Taxi TFX Demo.\r\n\r\n### What happened:\r\nOn the first step, I got the error \"No module named 'tfx.dsl.components'\"\r\n\r\n### What did you expect to happen:\r\nTo successfully run the TFX Taxi Demo.\r\n\r\n### Environment:\r\nHow did you deploy Kubeflow Pipelines (KFP)?\r\nVia the kustomize manifests in GCP.\r\n\r\nKFP version: 1.4.0-rc.1\r\n\r\n/kind bug\r\n/area backend\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom typing import Text\n\nimport kfp\nimport tensorflow_model_analysis as tfma\nfrom tfx.components.evaluator.component import Evaluator\nfrom tfx.components.example_gen.csv_example_gen.component import CsvExampleGen\nfrom tfx.components.example_validator.component import ExampleValidator\nfrom tfx.components.pusher.component import Pusher\nfrom tfx.components.schema_gen.component import SchemaGen\nfrom tfx.components.statistics_gen.component import StatisticsGen\nfrom tfx.components.trainer.component import Trainer\nfrom tfx.components.transform.component import Transform\nfrom tfx.orchestration import data_types\nfrom tfx.orchestration import pipeline\nfrom tfx.orchestration.kubeflow import kubeflow_dag_runner\nfrom tfx.utils.dsl_utils import external_input\nfrom tfx.proto import pusher_pb2\nfrom tfx.proto import trainer_pb2\n\n# Define pipeline params used for pipeline execution.\n# Path to the module file, should be a GCS path,\n# or a module file baked in the docker image used by the pipeline.\n_taxi_module_file_param = data_types.RuntimeParameter(\n name='module-file',\n default='/tfx-src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',\n ptype=Text,\n)\n\n# Path to the CSV data file, under which their should be a data.csv file.\n_data_root_param = data_types.RuntimeParameter(\n name='data-root',\n 
default='gs://ml-pipeline/sample-data/chicago-taxi/data',\n ptype=Text,\n)\n\n# Path of pipeline root, should be a GCS path.\npipeline_root = os.path.join(\n 'gs://{{kfp-default-bucket}}', 'tfx_taxi_simple', kfp.dsl.RUN_ID_PLACEHOLDER\n)\n\n\ndef _create_pipeline(\n pipeline_root: Text, csv_input_location: data_types.RuntimeParameter,\n taxi_module_file: data_types.RuntimeParameter, enable_cache: bool\n):\n \"\"\"Creates a simple Kubeflow-based Chicago Taxi TFX pipeline.\n\n Args:\n pipeline_root: The root of the pipeline output.\n csv_input_location: The location of the input data directory.\n taxi_module_file: The location of the module file for Transform/Trainer.\n enable_cache: Whether to enable cache or not.\n\n Returns:\n A logical TFX pipeline.Pipeline object.\n \"\"\"\n examples = external_input(csv_input_location)\n\n example_gen = CsvExampleGen(input=examples)\n statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])\n infer_schema = SchemaGen(\n statistics=statistics_gen.outputs['statistics'],\n infer_feature_shape=False,\n )\n validate_stats = ExampleValidator(\n statistics=statistics_gen.outputs['statistics'],\n schema=infer_schema.outputs['schema'],\n )\n transform = Transform(\n examples=example_gen.outputs['examples'],\n schema=infer_schema.outputs['schema'],\n module_file=taxi_module_file,\n )\n trainer = Trainer(\n module_file=taxi_module_file,\n transformed_examples=transform.outputs['transformed_examples'],\n schema=infer_schema.outputs['schema'],\n transform_graph=transform.outputs['transform_graph'],\n train_args=trainer_pb2.TrainArgs(num_steps=10),\n eval_args=trainer_pb2.EvalArgs(num_steps=5),\n )\n # Set the TFMA config for Model Evaluation and Validation.\n eval_config = tfma.EvalConfig(\n model_specs=[\n # Using signature 'eval' implies the use of an EvalSavedModel. To use\n # a serving model remove the signature to defaults to 'serving_default'\n # and add a label_key.\n tfma.ModelSpec(signature_name='eval')\n ],\n metrics_specs=[\n tfma.MetricsSpec(\n # The metrics added here are in addition to those saved with the\n # model (assuming either a keras model or EvalSavedModel is used).\n # Any metrics added into the saved model (for example using\n # model.compile(..., metrics=[...]), etc) will be computed\n # automatically.\n metrics=[tfma.MetricConfig(class_name='ExampleCount')],\n # To add validation thresholds for metrics saved with the model,\n # add them keyed by metric name to the thresholds map.\n thresholds={\n 'binary_accuracy':\n tfma.MetricThreshold(\n value_threshold=tfma.GenericValueThreshold(\n lower_bound={'value': 0.5}\n ),\n change_threshold=tfma.GenericChangeThreshold(\n direction=tfma.MetricDirection.HIGHER_IS_BETTER,\n absolute={'value': -1e-10}\n )\n )\n }\n )\n ],\n slicing_specs=[\n # An empty slice spec means the overall slice, i.e. the whole dataset.\n tfma.SlicingSpec(),\n # Data can be sliced along a feature column. 
In this case, data is\n # sliced along feature column trip_start_hour.\n tfma.SlicingSpec(feature_keys=['trip_start_hour'])\n ]\n )\n\n model_analyzer = Evaluator(\n examples=example_gen.outputs['examples'],\n model=trainer.outputs['model'],\n eval_config=eval_config,\n )\n\n pusher = Pusher(\n model=trainer.outputs['model'],\n model_blessing=model_analyzer.outputs['blessing'],\n push_destination=pusher_pb2.PushDestination(\n filesystem=pusher_pb2.PushDestination.Filesystem(\n base_directory=os.path.\n join(str(pipeline.ROOT_PARAMETER), 'model_serving')\n )\n ),\n )\n\n return pipeline.Pipeline(\n pipeline_name='parameterized_tfx_oss',\n pipeline_root=pipeline_root,\n components=[\n example_gen, statistics_gen, infer_schema, validate_stats, transform,\n trainer, model_analyzer, pusher\n ],\n enable_cache=enable_cache,\n )\n\n\nif __name__ == '__main__':\n enable_cache = True\n pipeline = _create_pipeline(\n pipeline_root,\n _data_root_param,\n _taxi_module_file_param,\n enable_cache=enable_cache,\n )\n # Make sure the version of TFX image used is consistent with the version of\n # TFX SDK.\n config = kubeflow_dag_runner.KubeflowDagRunnerConfig(\n kubeflow_metadata_config=kubeflow_dag_runner.\n get_default_kubeflow_metadata_config(),\n tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',\n )\n kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(\n output_filename=__file__ + '.yaml', config=config\n )\n\n kfp_runner.run(pipeline)\n", "path": "samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom typing import Text\n\nimport kfp\nimport tensorflow_model_analysis as tfma\nfrom tfx.components.evaluator.component import Evaluator\nfrom tfx.components.example_gen.csv_example_gen.component import CsvExampleGen\nfrom tfx.components.example_validator.component import ExampleValidator\nfrom tfx.components.pusher.component import Pusher\nfrom tfx.components.schema_gen.component import SchemaGen\nfrom tfx.components.statistics_gen.component import StatisticsGen\nfrom tfx.components.trainer.component import Trainer\nfrom tfx.components.transform.component import Transform\nfrom tfx.orchestration import data_types\nfrom tfx.orchestration import pipeline\nfrom tfx.orchestration.kubeflow import kubeflow_dag_runner\nfrom tfx.utils.dsl_utils import external_input\nfrom tfx.proto import pusher_pb2\nfrom tfx.proto import trainer_pb2\n\n# Define pipeline params used for pipeline execution.\n# Path to the module file, should be a GCS path,\n# or a module file baked in the docker image used by the pipeline.\n_taxi_module_file_param = data_types.RuntimeParameter(\n name='module-file',\n default='/tfx/src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',\n ptype=Text,\n)\n\n# Path to the CSV data file, under which their should be a data.csv file.\n_data_root_param = data_types.RuntimeParameter(\n name='data-root',\n 
default='gs://ml-pipeline/sample-data/chicago-taxi/data',\n ptype=Text,\n)\n\n# Path of pipeline root, should be a GCS path.\npipeline_root = os.path.join(\n 'gs://{{kfp-default-bucket}}', 'tfx_taxi_simple', kfp.dsl.RUN_ID_PLACEHOLDER\n)\n\n\ndef _create_pipeline(\n pipeline_root: Text, csv_input_location: data_types.RuntimeParameter,\n taxi_module_file: data_types.RuntimeParameter, enable_cache: bool\n):\n \"\"\"Creates a simple Kubeflow-based Chicago Taxi TFX pipeline.\n\n Args:\n pipeline_root: The root of the pipeline output.\n csv_input_location: The location of the input data directory.\n taxi_module_file: The location of the module file for Transform/Trainer.\n enable_cache: Whether to enable cache or not.\n\n Returns:\n A logical TFX pipeline.Pipeline object.\n \"\"\"\n examples = external_input(csv_input_location)\n\n example_gen = CsvExampleGen(input=examples)\n statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])\n infer_schema = SchemaGen(\n statistics=statistics_gen.outputs['statistics'],\n infer_feature_shape=False,\n )\n validate_stats = ExampleValidator(\n statistics=statistics_gen.outputs['statistics'],\n schema=infer_schema.outputs['schema'],\n )\n transform = Transform(\n examples=example_gen.outputs['examples'],\n schema=infer_schema.outputs['schema'],\n module_file=taxi_module_file,\n )\n trainer = Trainer(\n module_file=taxi_module_file,\n transformed_examples=transform.outputs['transformed_examples'],\n schema=infer_schema.outputs['schema'],\n transform_graph=transform.outputs['transform_graph'],\n train_args=trainer_pb2.TrainArgs(num_steps=10),\n eval_args=trainer_pb2.EvalArgs(num_steps=5),\n )\n # Set the TFMA config for Model Evaluation and Validation.\n eval_config = tfma.EvalConfig(\n model_specs=[\n # Using signature 'eval' implies the use of an EvalSavedModel. To use\n # a serving model remove the signature to defaults to 'serving_default'\n # and add a label_key.\n tfma.ModelSpec(signature_name='eval')\n ],\n metrics_specs=[\n tfma.MetricsSpec(\n # The metrics added here are in addition to those saved with the\n # model (assuming either a keras model or EvalSavedModel is used).\n # Any metrics added into the saved model (for example using\n # model.compile(..., metrics=[...]), etc) will be computed\n # automatically.\n metrics=[tfma.MetricConfig(class_name='ExampleCount')],\n # To add validation thresholds for metrics saved with the model,\n # add them keyed by metric name to the thresholds map.\n thresholds={\n 'binary_accuracy':\n tfma.MetricThreshold(\n value_threshold=tfma.GenericValueThreshold(\n lower_bound={'value': 0.5}\n ),\n change_threshold=tfma.GenericChangeThreshold(\n direction=tfma.MetricDirection.HIGHER_IS_BETTER,\n absolute={'value': -1e-10}\n )\n )\n }\n )\n ],\n slicing_specs=[\n # An empty slice spec means the overall slice, i.e. the whole dataset.\n tfma.SlicingSpec(),\n # Data can be sliced along a feature column. 
In this case, data is\n # sliced along feature column trip_start_hour.\n tfma.SlicingSpec(feature_keys=['trip_start_hour'])\n ]\n )\n\n model_analyzer = Evaluator(\n examples=example_gen.outputs['examples'],\n model=trainer.outputs['model'],\n eval_config=eval_config,\n )\n\n pusher = Pusher(\n model=trainer.outputs['model'],\n model_blessing=model_analyzer.outputs['blessing'],\n push_destination=pusher_pb2.PushDestination(\n filesystem=pusher_pb2.PushDestination.Filesystem(\n base_directory=os.path.\n join(str(pipeline.ROOT_PARAMETER), 'model_serving')\n )\n ),\n )\n\n return pipeline.Pipeline(\n pipeline_name='parameterized_tfx_oss',\n pipeline_root=pipeline_root,\n components=[\n example_gen, statistics_gen, infer_schema, validate_stats, transform,\n trainer, model_analyzer, pusher\n ],\n enable_cache=enable_cache,\n )\n\n\nif __name__ == '__main__':\n enable_cache = True\n pipeline = _create_pipeline(\n pipeline_root,\n _data_root_param,\n _taxi_module_file_param,\n enable_cache=enable_cache,\n )\n # Make sure the version of TFX image used is consistent with the version of\n # TFX SDK.\n config = kubeflow_dag_runner.KubeflowDagRunnerConfig(\n kubeflow_metadata_config=kubeflow_dag_runner.\n get_default_kubeflow_metadata_config(),\n tfx_image='gcr.io/tfx-oss-public/tfx:0.27.0',\n )\n kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(\n output_filename=__file__ + '.yaml', config=config\n )\n\n kfp_runner.run(pipeline)\n", "path": "samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py"}]}
2,418
320
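Editor's note on the record above: its golden diff is a one-line version pin, bumping the `tfx_image` tag in `KubeflowDagRunnerConfig` from `0.22.0` to `0.27.0` so the container matches the TFX SDK that compiled the pipeline, as the in-code comment already demands. A hedged sketch of deriving the tag from the installed SDK instead of hard-coding it follows; the `tfx.version` import path is an assumption about the TFX release in use, not something the sample itself does.

```python
# Standalone sketch: pin the runner image to the installed TFX SDK so
# the two cannot drift apart. Assumes `tfx.version.__version__` exists,
# as it does in TFX releases of this era.
from tfx import version as tfx_version


def default_tfx_image(registry="gcr.io/tfx-oss-public/tfx"):
    """Return an image tag that tracks the installed SDK version."""
    return "%s:%s" % (registry, tfx_version.__version__)

# config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
#     kubeflow_metadata_config=...,
#     tfx_image=default_tfx_image(),
# )
```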
gh_patches_debug_2454
rasdani/github-patches
git_diff
mkdocs__mkdocs-904
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Error while executing gh-deploy I've successfully deployed a MkDocs site using the gh-deploy command. When I try to deploy some additional changes to my master branch, I get the following error: ``` c:\docs>mkdocs gh-deploy --clean INFO - Cleaning site directory INFO - Building documentation to directory: c:\docs\site INFO - Copying 'c:\docs\site' to 'gh-pages' branch and pushing to GitHub. Traceback (most recent call last): File "C:\Python34\lib\runpy.py", line 170, in _run_module_as_main "__main__", mod_spec) File "C:\Python34\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "c:\Python34\Scripts\mkdocs.exe\__main__.py", line 9, in <module> File "C:\Python34\lib\site-packages\click\core.py", line 664, in __call__ return self.main(*args, **kwargs) File "C:\Python34\lib\site-packages\click\core.py", line 644, in main rv = self.invoke(ctx) File "C:\Python34\lib\site-packages\click\core.py", line 991, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Python34\lib\site-packages\click\core.py", line 837, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Python34\lib\site-packages\click\core.py", line 464, in invoke return callback(*args, **kwargs) File "C:\Python34\lib\site-packages\mkdocs\cli.py", line 186, in gh_deploy_command gh_deploy.gh_deploy(config, message=message) File "C:\Python34\lib\site-packages\mkdocs\gh_deploy.py", line 69, in gh_deploy remote_branch) File "C:\Python34\lib\site-packages\mkdocs\utils\ghp_import.py", line 163, in ghp_import if not try_rebase(remote, branch): File "C:\Python34\lib\site-packages\mkdocs\utils\ghp_import.py", line 78, in try_rebase if sp.call(cmd) != 0: File "C:\Python34\lib\subprocess.py", line 537, in call with Popen(*popenargs, **kwargs) as p: File "C:\Python34\lib\subprocess.py", line 859, in __init__ restore_signals, start_new_session) File "C:\Python34\lib\subprocess.py", line 1086, in _execute_child args = list2cmdline(args) File "C:\Python34\lib\subprocess.py", line 663, in list2cmdline needquote = (" " in arg) or ("\t" in arg) or not arg TypeError: 'str' does not support the buffer interface ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `mkdocs/utils/ghp_import.py` Content: ``` 1 #! /usr/bin/env python 2 # 3 # This file is part of the ghp-import package released under 4 # the Tumbolia Public License. 5 6 # Tumbolia Public License 7 8 # Copyright 2013, Paul Davis <[email protected]> 9 10 # Copying and distribution of this file, with or without modification, are 11 # permitted in any medium without royalty provided the copyright notice and this 12 # notice are preserved. 13 14 # TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 15 16 # 0. 
opan saurce LOL 17 18 from __future__ import unicode_literals 19 20 import errno 21 import logging 22 import os 23 import subprocess as sp 24 import sys 25 import time 26 import unicodedata 27 28 log = logging.getLogger(__name__) 29 30 31 if sys.version_info[0] == 3: 32 def enc(text): 33 if isinstance(text, bytes): 34 return text 35 return text.encode() 36 37 def dec(text): 38 if isinstance(text, bytes): 39 return text.decode('utf-8') 40 return text 41 42 def write(pipe, data): 43 try: 44 pipe.stdin.write(data) 45 except IOError as e: 46 if e.errno != errno.EPIPE: 47 raise 48 else: 49 def enc(text): 50 if isinstance(text, unicode): 51 return text.encode('utf-8') 52 return text 53 54 def dec(text): 55 if isinstance(text, unicode): 56 return text 57 return text.decode('utf-8') 58 59 def write(pipe, data): 60 pipe.stdin.write(data) 61 62 63 def normalize_path(path): 64 # Fix unicode pathnames on OS X 65 # See: http://stackoverflow.com/a/5582439/44289 66 if sys.platform == "darwin": 67 return unicodedata.normalize("NFKC", dec(path)) 68 return path 69 70 71 def try_rebase(remote, branch): 72 cmd = ['git', 'rev-list', '--max-count=1', '%s/%s' % (remote, branch)] 73 p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE) 74 (rev, _) = p.communicate() 75 if p.wait() != 0: 76 return True 77 cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, rev.strip()] 78 if sp.call(cmd) != 0: 79 return False 80 return True 81 82 83 def get_config(key): 84 p = sp.Popen(['git', 'config', key], stdin=sp.PIPE, stdout=sp.PIPE) 85 (value, _) = p.communicate() 86 return value.decode('utf-8').strip() 87 88 89 def get_prev_commit(branch): 90 cmd = ['git', 'rev-list', '--max-count=1', branch, '--'] 91 p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE) 92 (rev, _) = p.communicate() 93 if p.wait() != 0: 94 return None 95 return rev.decode('utf-8').strip() 96 97 98 def mk_when(timestamp=None): 99 if timestamp is None: 100 timestamp = int(time.time()) 101 currtz = "%+05d" % (-1 * time.timezone / 36) # / 3600 * 100 102 return "%s %s" % (timestamp, currtz) 103 104 105 def start_commit(pipe, branch, message): 106 uname = dec(get_config("user.name")) 107 email = dec(get_config("user.email")) 108 write(pipe, enc('commit refs/heads/%s\n' % branch)) 109 write(pipe, enc('committer %s <%s> %s\n' % (uname, email, mk_when()))) 110 write(pipe, enc('data %d\n%s\n' % (len(message), message))) 111 head = get_prev_commit(branch) 112 if head: 113 write(pipe, enc('from %s\n' % head)) 114 write(pipe, enc('deleteall\n')) 115 116 117 def add_file(pipe, srcpath, tgtpath): 118 with open(srcpath, "rb") as handle: 119 if os.access(srcpath, os.X_OK): 120 write(pipe, enc('M 100755 inline %s\n' % tgtpath)) 121 else: 122 write(pipe, enc('M 100644 inline %s\n' % tgtpath)) 123 data = handle.read() 124 write(pipe, enc('data %d\n' % len(data))) 125 write(pipe, enc(data)) 126 write(pipe, enc('\n')) 127 128 129 def add_nojekyll(pipe): 130 write(pipe, enc('M 100644 inline .nojekyll\n')) 131 write(pipe, enc('data 0\n')) 132 write(pipe, enc('\n')) 133 134 135 def gitpath(fname): 136 norm = os.path.normpath(fname) 137 return "/".join(norm.split(os.path.sep)) 138 139 140 def run_import(srcdir, branch, message, nojekyll): 141 cmd = ['git', 'fast-import', '--date-format=raw', '--quiet'] 142 kwargs = {"stdin": sp.PIPE} 143 if sys.version_info >= (3, 2, 0): 144 kwargs["universal_newlines"] = False 145 pipe = sp.Popen(cmd, **kwargs) 146 start_commit(pipe, branch, message) 147 for path, _, fnames in os.walk(srcdir): 148 for fn in fnames: 149 
fpath = os.path.join(path, fn) 150 fpath = normalize_path(fpath) 151 gpath = gitpath(os.path.relpath(fpath, start=srcdir)) 152 add_file(pipe, fpath, gpath) 153 if nojekyll: 154 add_nojekyll(pipe) 155 write(pipe, enc('\n')) 156 pipe.stdin.close() 157 if pipe.wait() != 0: 158 sys.stdout.write(enc("Failed to process commit.\n")) 159 160 161 def ghp_import(directory, message, remote='origin', branch='gh-pages'): 162 163 if not try_rebase(remote, branch): 164 log.error("Failed to rebase %s branch.", branch) 165 166 nojekyll = True 167 168 run_import(directory, branch, message, nojekyll) 169 170 proc = sp.Popen(['git', 'push', remote, branch], 171 stdout=sp.PIPE, stderr=sp.PIPE) 172 proc.communicate() 173 return proc.wait() == 0 174 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/mkdocs/utils/ghp_import.py b/mkdocs/utils/ghp_import.py --- a/mkdocs/utils/ghp_import.py +++ b/mkdocs/utils/ghp_import.py @@ -74,7 +74,7 @@ (rev, _) = p.communicate() if p.wait() != 0: return True - cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, rev.strip()] + cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, dec(rev.strip())] if sp.call(cmd) != 0: return False return True
{"golden_diff": "diff --git a/mkdocs/utils/ghp_import.py b/mkdocs/utils/ghp_import.py\n--- a/mkdocs/utils/ghp_import.py\n+++ b/mkdocs/utils/ghp_import.py\n@@ -74,7 +74,7 @@\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return True\n- cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, rev.strip()]\n+ cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, dec(rev.strip())]\n if sp.call(cmd) != 0:\n return False\n return True\n", "issue": "Error while executing gh-deploy\nI've successfully deployed a MkDocs site using the gh-deploy command. When I try to deploy some additional changes to my master branch, I get the following error:\n\n```\nc:\\docs>mkdocs gh-deploy --clean\nINFO - Cleaning site directory\nINFO - Building documentation to directory: c:\\docs\\site\nINFO - Copying 'c:\\docs\\site' to 'gh-pages' branch and pushing to GitHub.\nTraceback (most recent call last):\n File \"C:\\Python34\\lib\\runpy.py\", line 170, in _run_module_as_main\n \"__main__\", mod_spec)\n File \"C:\\Python34\\lib\\runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"c:\\Python34\\Scripts\\mkdocs.exe\\__main__.py\", line 9, in <module>\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 664, in __call__\n return self.main(*args, **kwargs)\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 644, in main\n rv = self.invoke(ctx)\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 991, in invoke\n return _process_result(sub_ctx.command.invoke(sub_ctx))\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 837, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 464, in invoke\n return callback(*args, **kwargs)\n File \"C:\\Python34\\lib\\site-packages\\mkdocs\\cli.py\", line 186, in gh_deploy_command\n gh_deploy.gh_deploy(config, message=message)\n File \"C:\\Python34\\lib\\site-packages\\mkdocs\\gh_deploy.py\", line 69, in gh_deploy\n remote_branch)\n File \"C:\\Python34\\lib\\site-packages\\mkdocs\\utils\\ghp_import.py\", line 163, in ghp_import\n if not try_rebase(remote, branch):\n File \"C:\\Python34\\lib\\site-packages\\mkdocs\\utils\\ghp_import.py\", line 78, in try_rebase\n if sp.call(cmd) != 0:\n File \"C:\\Python34\\lib\\subprocess.py\", line 537, in call\n with Popen(*popenargs, **kwargs) as p:\n File \"C:\\Python34\\lib\\subprocess.py\", line 859, in __init__\n restore_signals, start_new_session)\n File \"C:\\Python34\\lib\\subprocess.py\", line 1086, in _execute_child\n args = list2cmdline(args)\n File \"C:\\Python34\\lib\\subprocess.py\", line 663, in list2cmdline\n needquote = (\" \" in arg) or (\"\\t\" in arg) or not arg\nTypeError: 'str' does not support the buffer interface\n```\n\n", "before_files": [{"content": "#! /usr/bin/env python\n#\n# This file is part of the ghp-import package released under\n# the Tumbolia Public License.\n\n# Tumbolia Public License\n\n# Copyright 2013, Paul Davis <[email protected]>\n\n# Copying and distribution of this file, with or without modification, are\n# permitted in any medium without royalty provided the copyright notice and this\n# notice are preserved.\n\n# TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n# 0. 
opan saurce LOL\n\nfrom __future__ import unicode_literals\n\nimport errno\nimport logging\nimport os\nimport subprocess as sp\nimport sys\nimport time\nimport unicodedata\n\nlog = logging.getLogger(__name__)\n\n\nif sys.version_info[0] == 3:\n def enc(text):\n if isinstance(text, bytes):\n return text\n return text.encode()\n\n def dec(text):\n if isinstance(text, bytes):\n return text.decode('utf-8')\n return text\n\n def write(pipe, data):\n try:\n pipe.stdin.write(data)\n except IOError as e:\n if e.errno != errno.EPIPE:\n raise\nelse:\n def enc(text):\n if isinstance(text, unicode):\n return text.encode('utf-8')\n return text\n\n def dec(text):\n if isinstance(text, unicode):\n return text\n return text.decode('utf-8')\n\n def write(pipe, data):\n pipe.stdin.write(data)\n\n\ndef normalize_path(path):\n # Fix unicode pathnames on OS X\n # See: http://stackoverflow.com/a/5582439/44289\n if sys.platform == \"darwin\":\n return unicodedata.normalize(\"NFKC\", dec(path))\n return path\n\n\ndef try_rebase(remote, branch):\n cmd = ['git', 'rev-list', '--max-count=1', '%s/%s' % (remote, branch)]\n p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return True\n cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, rev.strip()]\n if sp.call(cmd) != 0:\n return False\n return True\n\n\ndef get_config(key):\n p = sp.Popen(['git', 'config', key], stdin=sp.PIPE, stdout=sp.PIPE)\n (value, _) = p.communicate()\n return value.decode('utf-8').strip()\n\n\ndef get_prev_commit(branch):\n cmd = ['git', 'rev-list', '--max-count=1', branch, '--']\n p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return None\n return rev.decode('utf-8').strip()\n\n\ndef mk_when(timestamp=None):\n if timestamp is None:\n timestamp = int(time.time())\n currtz = \"%+05d\" % (-1 * time.timezone / 36) # / 3600 * 100\n return \"%s %s\" % (timestamp, currtz)\n\n\ndef start_commit(pipe, branch, message):\n uname = dec(get_config(\"user.name\"))\n email = dec(get_config(\"user.email\"))\n write(pipe, enc('commit refs/heads/%s\\n' % branch))\n write(pipe, enc('committer %s <%s> %s\\n' % (uname, email, mk_when())))\n write(pipe, enc('data %d\\n%s\\n' % (len(message), message)))\n head = get_prev_commit(branch)\n if head:\n write(pipe, enc('from %s\\n' % head))\n write(pipe, enc('deleteall\\n'))\n\n\ndef add_file(pipe, srcpath, tgtpath):\n with open(srcpath, \"rb\") as handle:\n if os.access(srcpath, os.X_OK):\n write(pipe, enc('M 100755 inline %s\\n' % tgtpath))\n else:\n write(pipe, enc('M 100644 inline %s\\n' % tgtpath))\n data = handle.read()\n write(pipe, enc('data %d\\n' % len(data)))\n write(pipe, enc(data))\n write(pipe, enc('\\n'))\n\n\ndef add_nojekyll(pipe):\n write(pipe, enc('M 100644 inline .nojekyll\\n'))\n write(pipe, enc('data 0\\n'))\n write(pipe, enc('\\n'))\n\n\ndef gitpath(fname):\n norm = os.path.normpath(fname)\n return \"/\".join(norm.split(os.path.sep))\n\n\ndef run_import(srcdir, branch, message, nojekyll):\n cmd = ['git', 'fast-import', '--date-format=raw', '--quiet']\n kwargs = {\"stdin\": sp.PIPE}\n if sys.version_info >= (3, 2, 0):\n kwargs[\"universal_newlines\"] = False\n pipe = sp.Popen(cmd, **kwargs)\n start_commit(pipe, branch, message)\n for path, _, fnames in os.walk(srcdir):\n for fn in fnames:\n fpath = os.path.join(path, fn)\n fpath = normalize_path(fpath)\n gpath = gitpath(os.path.relpath(fpath, start=srcdir))\n add_file(pipe, fpath, gpath)\n if nojekyll:\n 
add_nojekyll(pipe)\n write(pipe, enc('\\n'))\n pipe.stdin.close()\n if pipe.wait() != 0:\n sys.stdout.write(enc(\"Failed to process commit.\\n\"))\n\n\ndef ghp_import(directory, message, remote='origin', branch='gh-pages'):\n\n if not try_rebase(remote, branch):\n log.error(\"Failed to rebase %s branch.\", branch)\n\n nojekyll = True\n\n run_import(directory, branch, message, nojekyll)\n\n proc = sp.Popen(['git', 'push', remote, branch],\n stdout=sp.PIPE, stderr=sp.PIPE)\n proc.communicate()\n return proc.wait() == 0\n", "path": "mkdocs/utils/ghp_import.py"}], "after_files": [{"content": "#! /usr/bin/env python\n#\n# This file is part of the ghp-import package released under\n# the Tumbolia Public License.\n\n# Tumbolia Public License\n\n# Copyright 2013, Paul Davis <[email protected]>\n\n# Copying and distribution of this file, with or without modification, are\n# permitted in any medium without royalty provided the copyright notice and this\n# notice are preserved.\n\n# TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n# 0. opan saurce LOL\n\nfrom __future__ import unicode_literals\n\nimport errno\nimport logging\nimport os\nimport subprocess as sp\nimport sys\nimport time\nimport unicodedata\n\nlog = logging.getLogger(__name__)\n\n\nif sys.version_info[0] == 3:\n def enc(text):\n if isinstance(text, bytes):\n return text\n return text.encode()\n\n def dec(text):\n if isinstance(text, bytes):\n return text.decode('utf-8')\n return text\n\n def write(pipe, data):\n try:\n pipe.stdin.write(data)\n except IOError as e:\n if e.errno != errno.EPIPE:\n raise\nelse:\n def enc(text):\n if isinstance(text, unicode):\n return text.encode('utf-8')\n return text\n\n def dec(text):\n if isinstance(text, unicode):\n return text\n return text.decode('utf-8')\n\n def write(pipe, data):\n pipe.stdin.write(data)\n\n\ndef normalize_path(path):\n # Fix unicode pathnames on OS X\n # See: http://stackoverflow.com/a/5582439/44289\n if sys.platform == \"darwin\":\n return unicodedata.normalize(\"NFKC\", dec(path))\n return path\n\n\ndef try_rebase(remote, branch):\n cmd = ['git', 'rev-list', '--max-count=1', '%s/%s' % (remote, branch)]\n p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return True\n cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, dec(rev.strip())]\n if sp.call(cmd) != 0:\n return False\n return True\n\n\ndef get_config(key):\n p = sp.Popen(['git', 'config', key], stdin=sp.PIPE, stdout=sp.PIPE)\n (value, _) = p.communicate()\n return value.decode('utf-8').strip()\n\n\ndef get_prev_commit(branch):\n cmd = ['git', 'rev-list', '--max-count=1', branch, '--']\n p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return None\n return rev.decode('utf-8').strip()\n\n\ndef mk_when(timestamp=None):\n if timestamp is None:\n timestamp = int(time.time())\n currtz = \"%+05d\" % (-1 * time.timezone / 36) # / 3600 * 100\n return \"%s %s\" % (timestamp, currtz)\n\n\ndef start_commit(pipe, branch, message):\n uname = dec(get_config(\"user.name\"))\n email = dec(get_config(\"user.email\"))\n write(pipe, enc('commit refs/heads/%s\\n' % branch))\n write(pipe, enc('committer %s <%s> %s\\n' % (uname, email, mk_when())))\n write(pipe, enc('data %d\\n%s\\n' % (len(message), message)))\n head = get_prev_commit(branch)\n if head:\n write(pipe, enc('from %s\\n' % head))\n write(pipe, enc('deleteall\\n'))\n\n\ndef add_file(pipe, srcpath, tgtpath):\n with open(srcpath, 
\"rb\") as handle:\n if os.access(srcpath, os.X_OK):\n write(pipe, enc('M 100755 inline %s\\n' % tgtpath))\n else:\n write(pipe, enc('M 100644 inline %s\\n' % tgtpath))\n data = handle.read()\n write(pipe, enc('data %d\\n' % len(data)))\n write(pipe, enc(data))\n write(pipe, enc('\\n'))\n\n\ndef add_nojekyll(pipe):\n write(pipe, enc('M 100644 inline .nojekyll\\n'))\n write(pipe, enc('data 0\\n'))\n write(pipe, enc('\\n'))\n\n\ndef gitpath(fname):\n norm = os.path.normpath(fname)\n return \"/\".join(norm.split(os.path.sep))\n\n\ndef run_import(srcdir, branch, message, nojekyll):\n cmd = ['git', 'fast-import', '--date-format=raw', '--quiet']\n kwargs = {\"stdin\": sp.PIPE}\n if sys.version_info >= (3, 2, 0):\n kwargs[\"universal_newlines\"] = False\n pipe = sp.Popen(cmd, **kwargs)\n start_commit(pipe, branch, message)\n for path, _, fnames in os.walk(srcdir):\n for fn in fnames:\n fpath = os.path.join(path, fn)\n fpath = normalize_path(fpath)\n gpath = gitpath(os.path.relpath(fpath, start=srcdir))\n add_file(pipe, fpath, gpath)\n if nojekyll:\n add_nojekyll(pipe)\n write(pipe, enc('\\n'))\n pipe.stdin.close()\n if pipe.wait() != 0:\n sys.stdout.write(enc(\"Failed to process commit.\\n\"))\n\n\ndef ghp_import(directory, message, remote='origin', branch='gh-pages'):\n\n if not try_rebase(remote, branch):\n log.error(\"Failed to rebase %s branch.\", branch)\n\n nojekyll = True\n\n run_import(directory, branch, message, nojekyll)\n\n proc = sp.Popen(['git', 'push', remote, branch],\n stdout=sp.PIPE, stderr=sp.PIPE)\n proc.communicate()\n return proc.wait() == 0\n", "path": "mkdocs/utils/ghp_import.py"}]}
2,771
152
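Editor's note on the record above: the quoted traceback bottoms out in `subprocess.list2cmdline`, which on Python 3 iterates each argument with string operations; `git rev-list` output comes back from `Popen.communicate()` as bytes, so `rev.strip()` puts a bytes object into an otherwise-str argument list. Below is a minimal, self-contained reproduction of the failure mode and of the decode the golden diff applies (the revision value is illustrative).

```python
import subprocess

rev = b"3f2a1c9\n"  # Popen(...).communicate() returns bytes on Python 3
args = ["git", "update-ref", "refs/heads/gh-pages", rev.strip()]

# On Windows, subprocess.call(args) flattens the list via
# subprocess.list2cmdline(args), whose `" " in arg` membership test raises
#   TypeError: 'str' does not support the buffer interface   (Python 3.4)
# for the bytes element. Decoding first keeps the list homogeneous,
# which is exactly what wrapping the token in dec(...) does in the patch:
args_ok = ["git", "update-ref", "refs/heads/gh-pages",
           rev.strip().decode("utf-8")]
```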
gh_patches_debug_17903
rasdani/github-patches
git_diff
ipython__ipython-9820
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- IPython 5 shell does not react to SIGQUIT (CTRL + \) In previous IPython versions it was possible to terminate an IPython session quickly by sending an `SIGQUIT`, i.e., by pressing <kbd>CTRL</kbd>+<kbd> \ </kbd>. This is useful when having an `embed()` in a loop with a large number of iterations. Pressing <kbd>CTRL</kbd>+<kbd>d</kbd> followed by <kbd>y</kbd> is not practical. Since `%kill_embedded` is currently also broken there is no convenient way to terminate the process. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `IPython/terminal/shortcuts.py` Content: ``` 1 import signal 2 import sys 3 4 from prompt_toolkit.enums import DEFAULT_BUFFER, SEARCH_BUFFER 5 from prompt_toolkit.filters import (HasFocus, HasSelection, Condition, 6 ViInsertMode, EmacsInsertMode, HasCompletions) 7 from prompt_toolkit.filters.cli import ViMode 8 from prompt_toolkit.keys import Keys 9 from prompt_toolkit.key_binding.bindings.completion import display_completions_like_readline 10 11 from IPython.utils.decorators import undoc 12 13 @Condition 14 def cursor_in_leading_ws(cli): 15 before = cli.application.buffer.document.current_line_before_cursor 16 return (not before) or before.isspace() 17 18 def register_ipython_shortcuts(registry, shell): 19 """Set up the prompt_toolkit keyboard shortcuts for IPython""" 20 insert_mode = ViInsertMode() | EmacsInsertMode() 21 22 # Ctrl+J == Enter, seemingly 23 registry.add_binding(Keys.ControlJ, 24 filter=(HasFocus(DEFAULT_BUFFER) 25 & ~HasSelection() 26 & insert_mode 27 ))(newline_or_execute_outer(shell)) 28 29 registry.add_binding(Keys.ControlP, 30 filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER) 31 ))(previous_history_or_previous_completion) 32 33 registry.add_binding(Keys.ControlN, 34 filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER) 35 ))(next_history_or_next_completion) 36 37 registry.add_binding(Keys.ControlG, 38 filter=(HasFocus(DEFAULT_BUFFER) & HasCompletions() 39 ))(dismiss_completion) 40 41 registry.add_binding(Keys.ControlC, filter=HasFocus(DEFAULT_BUFFER) 42 )(reset_buffer) 43 44 registry.add_binding(Keys.ControlC, filter=HasFocus(SEARCH_BUFFER) 45 )(reset_search_buffer) 46 47 supports_suspend = Condition(lambda cli: hasattr(signal, 'SIGTSTP')) 48 registry.add_binding(Keys.ControlZ, filter=supports_suspend 49 )(suspend_to_bg) 50 51 # Ctrl+I == Tab 52 registry.add_binding(Keys.ControlI, 53 filter=(HasFocus(DEFAULT_BUFFER) 54 & ~HasSelection() 55 & insert_mode 56 & cursor_in_leading_ws 57 ))(indent_buffer) 58 59 registry.add_binding(Keys.ControlO, 60 filter=(HasFocus(DEFAULT_BUFFER) 61 & EmacsInsertMode()))(newline_with_copy_margin) 62 63 if shell.display_completions == 'readlinelike': 64 registry.add_binding(Keys.ControlI, 65 filter=(HasFocus(DEFAULT_BUFFER) 66 & ~HasSelection() 67 & insert_mode 68 & ~cursor_in_leading_ws 69 ))(display_completions_like_readline) 70 71 if sys.platform == 'win32': 72 registry.add_binding(Keys.ControlV, 73 filter=( 74 HasFocus( 75 DEFAULT_BUFFER) & ~ViMode() 76 ))(win_paste) 77 78 79 def newline_or_execute_outer(shell): 80 def newline_or_execute(event): 81 """When the user presses return, insert a newline or execute the code.""" 82 b = event.current_buffer 83 d = b.document 84 85 if b.complete_state: 86 cc = b.complete_state.current_completion 87 if cc: 88 b.apply_completion(cc) 89 else: 90 b.cancel_completion() 91 return 92 
93 if not (d.on_last_line or d.cursor_position_row >= d.line_count 94 - d.empty_line_count_at_the_end()): 95 b.newline() 96 return 97 98 status, indent = shell.input_splitter.check_complete(d.text + '\n') 99 100 if (status != 'incomplete') and b.accept_action.is_returnable: 101 b.accept_action.validate_and_handle(event.cli, b) 102 else: 103 b.insert_text('\n' + (' ' * (indent or 0))) 104 return newline_or_execute 105 106 107 def previous_history_or_previous_completion(event): 108 """ 109 Control-P in vi edit mode on readline is history next, unlike default prompt toolkit. 110 111 If completer is open this still select previous completion. 112 """ 113 event.current_buffer.auto_up() 114 115 116 def next_history_or_next_completion(event): 117 """ 118 Control-N in vi edit mode on readline is history previous, unlike default prompt toolkit. 119 120 If completer is open this still select next completion. 121 """ 122 event.current_buffer.auto_down() 123 124 125 def dismiss_completion(event): 126 b = event.current_buffer 127 if b.complete_state: 128 b.cancel_completion() 129 130 131 def reset_buffer(event): 132 b = event.current_buffer 133 if b.complete_state: 134 b.cancel_completion() 135 else: 136 b.reset() 137 138 139 def reset_search_buffer(event): 140 if event.current_buffer.document.text: 141 event.current_buffer.reset() 142 else: 143 event.cli.push_focus(DEFAULT_BUFFER) 144 145 def suspend_to_bg(event): 146 event.cli.suspend_to_background() 147 148 def indent_buffer(event): 149 event.current_buffer.insert_text(' ' * 4) 150 151 def newline_with_copy_margin(event): 152 """ 153 Preserve margin and cursor position when using 154 Control-O to insert a newline in EMACS mode 155 """ 156 b = event.current_buffer 157 cursor_start_pos = b.document.cursor_position_col 158 b.newline(copy_margin=True) 159 b.cursor_up(count=1) 160 cursor_end_pos = b.document.cursor_position_col 161 if cursor_start_pos != cursor_end_pos: 162 pos_diff = cursor_start_pos - cursor_end_pos 163 b.cursor_right(count=pos_diff) 164 165 166 167 168 if sys.platform == 'win32': 169 from IPython.core.error import TryNext 170 from IPython.lib.clipboard import (ClipboardEmpty, 171 win32_clipboard_get, 172 tkinter_clipboard_get) 173 174 @undoc 175 def win_paste(event): 176 try: 177 text = win32_clipboard_get() 178 except TryNext: 179 try: 180 text = tkinter_clipboard_get() 181 except (TryNext, ClipboardEmpty): 182 return 183 except ClipboardEmpty: 184 return 185 event.current_buffer.insert_text(text.replace('\t', ' ' * 4)) 186 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/IPython/terminal/shortcuts.py b/IPython/terminal/shortcuts.py --- a/IPython/terminal/shortcuts.py +++ b/IPython/terminal/shortcuts.py @@ -26,6 +26,8 @@ & insert_mode ))(newline_or_execute_outer(shell)) + registry.add_binding(Keys.ControlBackslash)(force_exit) + registry.add_binding(Keys.ControlP, filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER) ))(previous_history_or_previous_completion) @@ -145,6 +147,12 @@ def suspend_to_bg(event): event.cli.suspend_to_background() +def force_exit(event): + """ + Force exit (with a non-zero return value) + """ + sys.exit("Quit") + def indent_buffer(event): event.current_buffer.insert_text(' ' * 4)
{"golden_diff": "diff --git a/IPython/terminal/shortcuts.py b/IPython/terminal/shortcuts.py\n--- a/IPython/terminal/shortcuts.py\n+++ b/IPython/terminal/shortcuts.py\n@@ -26,6 +26,8 @@\n & insert_mode\n ))(newline_or_execute_outer(shell))\n \n+ registry.add_binding(Keys.ControlBackslash)(force_exit)\n+\n registry.add_binding(Keys.ControlP,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(previous_history_or_previous_completion)\n@@ -145,6 +147,12 @@\n def suspend_to_bg(event):\n event.cli.suspend_to_background()\n \n+def force_exit(event):\n+ \"\"\"\n+ Force exit (with a non-zero return value)\n+ \"\"\"\n+ sys.exit(\"Quit\")\n+\n def indent_buffer(event):\n event.current_buffer.insert_text(' ' * 4)\n", "issue": "IPython 5 shell does not react to SIGQUIT (CTRL + \\)\nIn previous IPython versions it was possible to terminate an IPython session quickly by sending an `SIGQUIT`, i.e., by pressing <kbd>CTRL</kbd>+<kbd> \\ </kbd>. This is useful when having an `embed()` in a loop with a large number of iterations. Pressing <kbd>CTRL</kbd>+<kbd>d</kbd> followed by <kbd>y</kbd> is not practical. Since `%kill_embedded` is currently also broken there is no convenient way to terminate the process.\n\n", "before_files": [{"content": "import signal\nimport sys\n\nfrom prompt_toolkit.enums import DEFAULT_BUFFER, SEARCH_BUFFER\nfrom prompt_toolkit.filters import (HasFocus, HasSelection, Condition,\n ViInsertMode, EmacsInsertMode, HasCompletions)\nfrom prompt_toolkit.filters.cli import ViMode\nfrom prompt_toolkit.keys import Keys\nfrom prompt_toolkit.key_binding.bindings.completion import display_completions_like_readline\n\nfrom IPython.utils.decorators import undoc\n\n@Condition\ndef cursor_in_leading_ws(cli):\n before = cli.application.buffer.document.current_line_before_cursor\n return (not before) or before.isspace()\n\ndef register_ipython_shortcuts(registry, shell):\n \"\"\"Set up the prompt_toolkit keyboard shortcuts for IPython\"\"\"\n insert_mode = ViInsertMode() | EmacsInsertMode()\n\n # Ctrl+J == Enter, seemingly\n registry.add_binding(Keys.ControlJ,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n ))(newline_or_execute_outer(shell))\n\n registry.add_binding(Keys.ControlP,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(previous_history_or_previous_completion)\n\n registry.add_binding(Keys.ControlN,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(next_history_or_next_completion)\n\n registry.add_binding(Keys.ControlG,\n filter=(HasFocus(DEFAULT_BUFFER) & HasCompletions()\n ))(dismiss_completion)\n\n registry.add_binding(Keys.ControlC, filter=HasFocus(DEFAULT_BUFFER)\n )(reset_buffer)\n\n registry.add_binding(Keys.ControlC, filter=HasFocus(SEARCH_BUFFER)\n )(reset_search_buffer)\n\n supports_suspend = Condition(lambda cli: hasattr(signal, 'SIGTSTP'))\n registry.add_binding(Keys.ControlZ, filter=supports_suspend\n )(suspend_to_bg)\n\n # Ctrl+I == Tab\n registry.add_binding(Keys.ControlI,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n & cursor_in_leading_ws\n ))(indent_buffer)\n\n registry.add_binding(Keys.ControlO,\n filter=(HasFocus(DEFAULT_BUFFER)\n & EmacsInsertMode()))(newline_with_copy_margin)\n\n if shell.display_completions == 'readlinelike':\n registry.add_binding(Keys.ControlI,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n & ~cursor_in_leading_ws\n ))(display_completions_like_readline)\n\n if sys.platform == 'win32':\n registry.add_binding(Keys.ControlV,\n filter=(\n HasFocus(\n 
DEFAULT_BUFFER) & ~ViMode()\n ))(win_paste)\n\n\ndef newline_or_execute_outer(shell):\n def newline_or_execute(event):\n \"\"\"When the user presses return, insert a newline or execute the code.\"\"\"\n b = event.current_buffer\n d = b.document\n\n if b.complete_state:\n cc = b.complete_state.current_completion\n if cc:\n b.apply_completion(cc)\n else:\n b.cancel_completion()\n return\n\n if not (d.on_last_line or d.cursor_position_row >= d.line_count\n - d.empty_line_count_at_the_end()):\n b.newline()\n return\n\n status, indent = shell.input_splitter.check_complete(d.text + '\\n')\n\n if (status != 'incomplete') and b.accept_action.is_returnable:\n b.accept_action.validate_and_handle(event.cli, b)\n else:\n b.insert_text('\\n' + (' ' * (indent or 0)))\n return newline_or_execute\n\n\ndef previous_history_or_previous_completion(event):\n \"\"\"\n Control-P in vi edit mode on readline is history next, unlike default prompt toolkit.\n\n If completer is open this still select previous completion.\n \"\"\"\n event.current_buffer.auto_up()\n\n\ndef next_history_or_next_completion(event):\n \"\"\"\n Control-N in vi edit mode on readline is history previous, unlike default prompt toolkit.\n\n If completer is open this still select next completion.\n \"\"\"\n event.current_buffer.auto_down()\n\n\ndef dismiss_completion(event):\n b = event.current_buffer\n if b.complete_state:\n b.cancel_completion()\n\n\ndef reset_buffer(event):\n b = event.current_buffer\n if b.complete_state:\n b.cancel_completion()\n else:\n b.reset()\n\n\ndef reset_search_buffer(event):\n if event.current_buffer.document.text:\n event.current_buffer.reset()\n else:\n event.cli.push_focus(DEFAULT_BUFFER)\n\ndef suspend_to_bg(event):\n event.cli.suspend_to_background()\n\ndef indent_buffer(event):\n event.current_buffer.insert_text(' ' * 4)\n\ndef newline_with_copy_margin(event):\n \"\"\"\n Preserve margin and cursor position when using\n Control-O to insert a newline in EMACS mode\n \"\"\"\n b = event.current_buffer\n cursor_start_pos = b.document.cursor_position_col\n b.newline(copy_margin=True)\n b.cursor_up(count=1)\n cursor_end_pos = b.document.cursor_position_col\n if cursor_start_pos != cursor_end_pos:\n pos_diff = cursor_start_pos - cursor_end_pos\n b.cursor_right(count=pos_diff)\n\n\n\n\nif sys.platform == 'win32':\n from IPython.core.error import TryNext\n from IPython.lib.clipboard import (ClipboardEmpty,\n win32_clipboard_get,\n tkinter_clipboard_get)\n\n @undoc\n def win_paste(event):\n try:\n text = win32_clipboard_get()\n except TryNext:\n try:\n text = tkinter_clipboard_get()\n except (TryNext, ClipboardEmpty):\n return\n except ClipboardEmpty:\n return\n event.current_buffer.insert_text(text.replace('\\t', ' ' * 4))\n", "path": "IPython/terminal/shortcuts.py"}], "after_files": [{"content": "import signal\nimport sys\n\nfrom prompt_toolkit.enums import DEFAULT_BUFFER, SEARCH_BUFFER\nfrom prompt_toolkit.filters import (HasFocus, HasSelection, Condition,\n ViInsertMode, EmacsInsertMode, HasCompletions)\nfrom prompt_toolkit.filters.cli import ViMode\nfrom prompt_toolkit.keys import Keys\nfrom prompt_toolkit.key_binding.bindings.completion import display_completions_like_readline\n\nfrom IPython.utils.decorators import undoc\n\n@Condition\ndef cursor_in_leading_ws(cli):\n before = cli.application.buffer.document.current_line_before_cursor\n return (not before) or before.isspace()\n\ndef register_ipython_shortcuts(registry, shell):\n \"\"\"Set up the prompt_toolkit keyboard shortcuts for IPython\"\"\"\n insert_mode = 
ViInsertMode() | EmacsInsertMode()\n\n # Ctrl+J == Enter, seemingly\n registry.add_binding(Keys.ControlJ,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n ))(newline_or_execute_outer(shell))\n\n registry.add_binding(Keys.ControlBackslash)(force_exit)\n\n registry.add_binding(Keys.ControlP,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(previous_history_or_previous_completion)\n\n registry.add_binding(Keys.ControlN,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(next_history_or_next_completion)\n\n registry.add_binding(Keys.ControlG,\n filter=(HasFocus(DEFAULT_BUFFER) & HasCompletions()\n ))(dismiss_completion)\n\n registry.add_binding(Keys.ControlC, filter=HasFocus(DEFAULT_BUFFER)\n )(reset_buffer)\n\n registry.add_binding(Keys.ControlC, filter=HasFocus(SEARCH_BUFFER)\n )(reset_search_buffer)\n\n supports_suspend = Condition(lambda cli: hasattr(signal, 'SIGTSTP'))\n registry.add_binding(Keys.ControlZ, filter=supports_suspend\n )(suspend_to_bg)\n\n # Ctrl+I == Tab\n registry.add_binding(Keys.ControlI,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n & cursor_in_leading_ws\n ))(indent_buffer)\n\n registry.add_binding(Keys.ControlO,\n filter=(HasFocus(DEFAULT_BUFFER)\n & EmacsInsertMode()))(newline_with_copy_margin)\n\n if shell.display_completions == 'readlinelike':\n registry.add_binding(Keys.ControlI,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n & ~cursor_in_leading_ws\n ))(display_completions_like_readline)\n\n if sys.platform == 'win32':\n registry.add_binding(Keys.ControlV,\n filter=(\n HasFocus(\n DEFAULT_BUFFER) & ~ViMode()\n ))(win_paste)\n\n\ndef newline_or_execute_outer(shell):\n def newline_or_execute(event):\n \"\"\"When the user presses return, insert a newline or execute the code.\"\"\"\n b = event.current_buffer\n d = b.document\n\n if b.complete_state:\n cc = b.complete_state.current_completion\n if cc:\n b.apply_completion(cc)\n else:\n b.cancel_completion()\n return\n\n if not (d.on_last_line or d.cursor_position_row >= d.line_count\n - d.empty_line_count_at_the_end()):\n b.newline()\n return\n\n status, indent = shell.input_splitter.check_complete(d.text + '\\n')\n\n if (status != 'incomplete') and b.accept_action.is_returnable:\n b.accept_action.validate_and_handle(event.cli, b)\n else:\n b.insert_text('\\n' + (' ' * (indent or 0)))\n return newline_or_execute\n\n\ndef previous_history_or_previous_completion(event):\n \"\"\"\n Control-P in vi edit mode on readline is history next, unlike default prompt toolkit.\n\n If completer is open this still select previous completion.\n \"\"\"\n event.current_buffer.auto_up()\n\n\ndef next_history_or_next_completion(event):\n \"\"\"\n Control-N in vi edit mode on readline is history previous, unlike default prompt toolkit.\n\n If completer is open this still select next completion.\n \"\"\"\n event.current_buffer.auto_down()\n\n\ndef dismiss_completion(event):\n b = event.current_buffer\n if b.complete_state:\n b.cancel_completion()\n\n\ndef reset_buffer(event):\n b = event.current_buffer\n if b.complete_state:\n b.cancel_completion()\n else:\n b.reset()\n\n\ndef reset_search_buffer(event):\n if event.current_buffer.document.text:\n event.current_buffer.reset()\n else:\n event.cli.push_focus(DEFAULT_BUFFER)\n\ndef suspend_to_bg(event):\n event.cli.suspend_to_background()\n\ndef force_exit(event):\n \"\"\"\n Force exit (with a non-zero return value)\n \"\"\"\n sys.exit(\"Quit\")\n\ndef indent_buffer(event):\n 
event.current_buffer.insert_text(' ' * 4)\n\ndef newline_with_copy_margin(event):\n \"\"\"\n Preserve margin and cursor position when using\n Control-O to insert a newline in EMACS mode\n \"\"\"\n b = event.current_buffer\n cursor_start_pos = b.document.cursor_position_col\n b.newline(copy_margin=True)\n b.cursor_up(count=1)\n cursor_end_pos = b.document.cursor_position_col\n if cursor_start_pos != cursor_end_pos:\n pos_diff = cursor_start_pos - cursor_end_pos\n b.cursor_right(count=pos_diff)\n\n\n\n\nif sys.platform == 'win32':\n from IPython.core.error import TryNext\n from IPython.lib.clipboard import (ClipboardEmpty,\n win32_clipboard_get,\n tkinter_clipboard_get)\n\n @undoc\n def win_paste(event):\n try:\n text = win32_clipboard_get()\n except TryNext:\n try:\n text = tkinter_clipboard_get()\n except (TryNext, ClipboardEmpty):\n return\n except ClipboardEmpty:\n return\n event.current_buffer.insert_text(text.replace('\\t', ' ' * 4))\n", "path": "IPython/terminal/shortcuts.py"}]}
2,063
194
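Editor's note on the record above: the fix registers `Keys.ControlBackslash` with no filter, so the handler fires in any buffer or editing mode and `sys.exit("Quit")` tears the session down immediately. A stripped-down sketch of the same wiring against the prompt_toolkit 1.x registry API that IPython 5 uses follows; obtaining `registry` is left out because IPython builds it elsewhere.

```python
import sys

from prompt_toolkit.keys import Keys


def force_exit(event):
    """Force exit (with a non-zero return value), as in the patch."""
    sys.exit("Quit")

# Inside register_ipython_shortcuts(registry, shell):
#     registry.add_binding(Keys.ControlBackslash)(force_exit)
#
# No `filter=` argument means the binding is unconditional, which is
# what makes Ctrl+\ usable as an escape hatch even from nested embed()s.
```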
gh_patches_debug_20625
rasdani/github-patches
git_diff
boto__botocore-66
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Fix output in multi result pagination (build_full_result) Because we use izip_longest you can get a response like this: ``` {"CommonPrefixes": [null, null, null, null], "Content": [{...}, {...}, {...}, {...} } ``` When really if the null we shouldn't add it to the list. Then our response _should_ look like: ``` {"CommonPrefixes": [], "Content": [{...}, {...}, {...}, {...} } ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `botocore/paginate.py` Content: ``` 1 # Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved 2 # 3 # Permission is hereby granted, free of charge, to any person obtaining a 4 # copy of this software and associated documentation files (the 5 # "Software"), to deal in the Software without restriction, including 6 # without limitation the rights to use, copy, modify, merge, publish, dis- 7 # tribute, sublicense, and/or sell copies of the Software, and to permit 8 # persons to whom the Software is furnished to do so, subject to the fol- 9 # lowing conditions: 10 # 11 # The above copyright notice and this permission notice shall be included 12 # in all copies or substantial portions of the Software. 13 # 14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS 15 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- 16 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT 17 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS 20 # IN THE SOFTWARE. 21 # 22 from itertools import tee 23 from collections import defaultdict 24 try: 25 from itertools import zip_longest 26 except ImportError: 27 # Python2.x is izip_longest. 28 from itertools import izip_longest as zip_longest 29 30 try: 31 zip 32 except NameError: 33 # Python2.x is izip. 
34 from itertools import izip as zip 35 36 import jmespath 37 from botocore.exceptions import PaginationError 38 39 40 class Paginator(object): 41 def __init__(self, operation): 42 self._operation = operation 43 self._pagination_cfg = operation.pagination 44 self._output_token = self._get_output_tokens(self._pagination_cfg) 45 self._input_token = self._get_input_tokens(self._pagination_cfg) 46 self._more_results = self._get_more_results_token(self._pagination_cfg) 47 self._result_key = self._get_result_key(self._pagination_cfg) 48 49 def _get_output_tokens(self, config): 50 output = [] 51 output_token = config['output_token'] 52 if not isinstance(output_token, list): 53 output_token = [output_token] 54 for config in output_token: 55 output.append(jmespath.compile(config)) 56 return output 57 58 def _get_input_tokens(self, config): 59 input_token = self._pagination_cfg['py_input_token'] 60 if not isinstance(input_token, list): 61 input_token = [input_token] 62 return input_token 63 64 def _get_more_results_token(self, config): 65 more_results = config.get('more_results') 66 if more_results is not None: 67 return jmespath.compile(more_results) 68 69 def _get_result_key(self, config): 70 result_key = config.get('result_key') 71 if result_key is not None: 72 if not isinstance(result_key, list): 73 result_key = [result_key] 74 return result_key 75 76 def paginate(self, endpoint, **kwargs): 77 """Paginate responses to an operation. 78 79 The responses to some operations are too large for a single response. 80 When this happens, the service will indicate that there are more 81 results in its response. This method handles the details of how 82 to detect when this happens and how to retrieve more results. 83 84 This method returns an iterator. Each element in the iterator 85 is the result of an ``Operation.call`` call, so each element is 86 a tuple of (``http_response``, ``parsed_result``). 
87 88 """ 89 return PageIterator(self._operation, self._input_token, 90 self._output_token, self._more_results, 91 self._result_key, endpoint, kwargs) 92 93 94 95 class PageIterator(object): 96 def __init__(self, operation, input_token, output_token, more_results, 97 result_key, endpoint, op_kwargs): 98 self._operation = operation 99 self._input_token = input_token 100 self._output_token = output_token 101 self._more_results = more_results 102 self._result_key = result_key 103 self._endpoint = endpoint 104 self._op_kwargs = op_kwargs 105 self._http_responses = [] 106 107 @property 108 def http_responses(self): 109 return self._http_responses 110 111 def __iter__(self): 112 current_kwargs = self._op_kwargs 113 endpoint = self._endpoint 114 previous_next_token = None 115 while True: 116 http_response, parsed = self._operation.call(endpoint, 117 **current_kwargs) 118 self._http_responses.append(http_response) 119 yield http_response, parsed 120 next_token = self._get_next_token(parsed) 121 if all(t is None for t in next_token): 122 break 123 if previous_next_token is not None and \ 124 previous_next_token == next_token: 125 message = ("The same next token was received " 126 "twice: %s" % next_token) 127 raise PaginationError(message=message) 128 for name, token in zip(self._input_token, next_token): 129 current_kwargs[name] = token 130 previous_next_token = next_token 131 132 def _get_next_token(self, parsed): 133 if self._more_results is not None: 134 if not self._more_results.search(parsed): 135 return [None] 136 next_tokens = [] 137 for token in self._output_token: 138 next_tokens.append(token.search(parsed)) 139 return next_tokens 140 141 def result_key_iters(self): 142 teed_results = tee(self, len(self._result_key)) 143 return [ResultKeyIterator(i, result_key) for i, result_key 144 in zip(teed_results, self._result_key)] 145 146 def build_full_result(self): 147 iterators = self.result_key_iters() 148 if len(iterators) > 1: 149 response = defaultdict(list) 150 key_names = [i.result_key for i in iterators] 151 for vals in zip_longest(*iterators): 152 for k, val in zip(key_names, vals): 153 response[k].append(val) 154 else: 155 response = list(iterators[0]) 156 return response 157 158 159 class ResultKeyIterator(object): 160 """Iterates over the results of paginated responses.""" 161 def __init__(self, pages_iterator, result_key): 162 self._pages_iterator = pages_iterator 163 self.result_key = result_key 164 165 def __iter__(self): 166 for _, page in self._pages_iterator: 167 for result in page.get(self.result_key, []): 168 yield result 169 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/botocore/paginate.py b/botocore/paginate.py --- a/botocore/paginate.py +++ b/botocore/paginate.py @@ -20,7 +20,6 @@ # IN THE SOFTWARE. # from itertools import tee -from collections import defaultdict try: from itertools import zip_longest except ImportError: @@ -146,11 +145,14 @@ def build_full_result(self): iterators = self.result_key_iters() if len(iterators) > 1: - response = defaultdict(list) + response = {} key_names = [i.result_key for i in iterators] + for key in key_names: + response[key] = [] for vals in zip_longest(*iterators): for k, val in zip(key_names, vals): - response[k].append(val) + if val is not None: + response[k].append(val) else: response = list(iterators[0]) return response
{"golden_diff": "diff --git a/botocore/paginate.py b/botocore/paginate.py\n--- a/botocore/paginate.py\n+++ b/botocore/paginate.py\n@@ -20,7 +20,6 @@\n # IN THE SOFTWARE.\n #\n from itertools import tee\n-from collections import defaultdict\n try:\n from itertools import zip_longest\n except ImportError:\n@@ -146,11 +145,14 @@\n def build_full_result(self):\n iterators = self.result_key_iters()\n if len(iterators) > 1:\n- response = defaultdict(list)\n+ response = {}\n key_names = [i.result_key for i in iterators]\n+ for key in key_names:\n+ response[key] = []\n for vals in zip_longest(*iterators):\n for k, val in zip(key_names, vals):\n- response[k].append(val)\n+ if val is not None:\n+ response[k].append(val)\n else:\n response = list(iterators[0])\n return response\n", "issue": "Fix output in multi result pagination (build_full_result)\nBecause we use izip_longest you can get a response like this:\n\n```\n{\"CommonPrefixes\": [null, null, null, null],\n \"Content\": [{...}, {...}, {...}, {...}\n}\n```\n\nWhen really if the null we shouldn't add it to the list. Then our response _should_ look like:\n\n```\n{\"CommonPrefixes\": [],\n \"Content\": [{...}, {...}, {...}, {...}\n}\n```\n\n", "before_files": [{"content": "# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n#\nfrom itertools import tee\nfrom collections import defaultdict\ntry:\n from itertools import zip_longest\nexcept ImportError:\n # Python2.x is izip_longest.\n from itertools import izip_longest as zip_longest\n\ntry:\n zip\nexcept NameError:\n # Python2.x is izip.\n from itertools import izip as zip\n\nimport jmespath\nfrom botocore.exceptions import PaginationError\n\n\nclass Paginator(object):\n def __init__(self, operation):\n self._operation = operation\n self._pagination_cfg = operation.pagination\n self._output_token = self._get_output_tokens(self._pagination_cfg)\n self._input_token = self._get_input_tokens(self._pagination_cfg)\n self._more_results = self._get_more_results_token(self._pagination_cfg)\n self._result_key = self._get_result_key(self._pagination_cfg)\n\n def _get_output_tokens(self, config):\n output = []\n output_token = config['output_token']\n if not isinstance(output_token, list):\n output_token = [output_token]\n for config in output_token:\n output.append(jmespath.compile(config))\n return output\n\n def _get_input_tokens(self, config):\n input_token = self._pagination_cfg['py_input_token']\n if not isinstance(input_token, list):\n input_token = [input_token]\n return input_token\n\n def _get_more_results_token(self, config):\n more_results = config.get('more_results')\n if more_results is not None:\n return jmespath.compile(more_results)\n\n def _get_result_key(self, config):\n result_key = config.get('result_key')\n if result_key is not None:\n if not isinstance(result_key, list):\n result_key = [result_key]\n return result_key\n\n def paginate(self, endpoint, **kwargs):\n \"\"\"Paginate responses to an operation.\n\n The responses to some operations are too large for a single response.\n When this happens, the service will indicate that there are more\n results in its response. This method handles the details of how\n to detect when this happens and how to retrieve more results.\n\n This method returns an iterator. 
Each element in the iterator\n is the result of an ``Operation.call`` call, so each element is\n a tuple of (``http_response``, ``parsed_result``).\n\n \"\"\"\n return PageIterator(self._operation, self._input_token,\n self._output_token, self._more_results,\n self._result_key, endpoint, kwargs)\n\n\n\nclass PageIterator(object):\n def __init__(self, operation, input_token, output_token, more_results,\n result_key, endpoint, op_kwargs):\n self._operation = operation\n self._input_token = input_token\n self._output_token = output_token\n self._more_results = more_results\n self._result_key = result_key\n self._endpoint = endpoint\n self._op_kwargs = op_kwargs\n self._http_responses = []\n\n @property\n def http_responses(self):\n return self._http_responses\n\n def __iter__(self):\n current_kwargs = self._op_kwargs\n endpoint = self._endpoint\n previous_next_token = None\n while True:\n http_response, parsed = self._operation.call(endpoint,\n **current_kwargs)\n self._http_responses.append(http_response)\n yield http_response, parsed\n next_token = self._get_next_token(parsed)\n if all(t is None for t in next_token):\n break\n if previous_next_token is not None and \\\n previous_next_token == next_token:\n message = (\"The same next token was received \"\n \"twice: %s\" % next_token)\n raise PaginationError(message=message)\n for name, token in zip(self._input_token, next_token):\n current_kwargs[name] = token\n previous_next_token = next_token\n\n def _get_next_token(self, parsed):\n if self._more_results is not None:\n if not self._more_results.search(parsed):\n return [None]\n next_tokens = []\n for token in self._output_token:\n next_tokens.append(token.search(parsed))\n return next_tokens\n\n def result_key_iters(self):\n teed_results = tee(self, len(self._result_key))\n return [ResultKeyIterator(i, result_key) for i, result_key\n in zip(teed_results, self._result_key)]\n\n def build_full_result(self):\n iterators = self.result_key_iters()\n if len(iterators) > 1:\n response = defaultdict(list)\n key_names = [i.result_key for i in iterators]\n for vals in zip_longest(*iterators):\n for k, val in zip(key_names, vals):\n response[k].append(val)\n else:\n response = list(iterators[0])\n return response\n\n\nclass ResultKeyIterator(object):\n \"\"\"Iterates over the results of paginated responses.\"\"\"\n def __init__(self, pages_iterator, result_key):\n self._pages_iterator = pages_iterator\n self.result_key = result_key\n\n def __iter__(self):\n for _, page in self._pages_iterator:\n for result in page.get(self.result_key, []):\n yield result\n", "path": "botocore/paginate.py"}], "after_files": [{"content": "# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n#\nfrom itertools import tee\ntry:\n from itertools import zip_longest\nexcept ImportError:\n # Python2.x is izip_longest.\n from itertools import izip_longest as zip_longest\n\ntry:\n zip\nexcept NameError:\n # Python2.x is izip.\n from itertools import izip as zip\n\nimport jmespath\nfrom botocore.exceptions import PaginationError\n\n\nclass Paginator(object):\n def __init__(self, operation):\n self._operation = operation\n self._pagination_cfg = operation.pagination\n self._output_token = self._get_output_tokens(self._pagination_cfg)\n self._input_token = self._get_input_tokens(self._pagination_cfg)\n self._more_results = self._get_more_results_token(self._pagination_cfg)\n self._result_key = self._get_result_key(self._pagination_cfg)\n\n def _get_output_tokens(self, config):\n output = []\n output_token = config['output_token']\n if not isinstance(output_token, list):\n output_token = [output_token]\n for config in output_token:\n output.append(jmespath.compile(config))\n return output\n\n def _get_input_tokens(self, config):\n input_token = self._pagination_cfg['py_input_token']\n if not isinstance(input_token, list):\n input_token = [input_token]\n return input_token\n\n def _get_more_results_token(self, config):\n more_results = config.get('more_results')\n if more_results is not None:\n return jmespath.compile(more_results)\n\n def _get_result_key(self, config):\n result_key = config.get('result_key')\n if result_key is not None:\n if not isinstance(result_key, list):\n result_key = [result_key]\n return result_key\n\n def paginate(self, endpoint, **kwargs):\n \"\"\"Paginate responses to an operation.\n\n The responses to some operations are too large for a single response.\n When this happens, the service will indicate that there are more\n results in its response. This method handles the details of how\n to detect when this happens and how to retrieve more results.\n\n This method returns an iterator. 
Each element in the iterator\n is the result of an ``Operation.call`` call, so each element is\n a tuple of (``http_response``, ``parsed_result``).\n\n \"\"\"\n return PageIterator(self._operation, self._input_token,\n self._output_token, self._more_results,\n self._result_key, endpoint, kwargs)\n\n\n\nclass PageIterator(object):\n def __init__(self, operation, input_token, output_token, more_results,\n result_key, endpoint, op_kwargs):\n self._operation = operation\n self._input_token = input_token\n self._output_token = output_token\n self._more_results = more_results\n self._result_key = result_key\n self._endpoint = endpoint\n self._op_kwargs = op_kwargs\n self._http_responses = []\n\n @property\n def http_responses(self):\n return self._http_responses\n\n def __iter__(self):\n current_kwargs = self._op_kwargs\n endpoint = self._endpoint\n previous_next_token = None\n while True:\n http_response, parsed = self._operation.call(endpoint,\n **current_kwargs)\n self._http_responses.append(http_response)\n yield http_response, parsed\n next_token = self._get_next_token(parsed)\n if all(t is None for t in next_token):\n break\n if previous_next_token is not None and \\\n previous_next_token == next_token:\n message = (\"The same next token was received \"\n \"twice: %s\" % next_token)\n raise PaginationError(message=message)\n for name, token in zip(self._input_token, next_token):\n current_kwargs[name] = token\n previous_next_token = next_token\n\n def _get_next_token(self, parsed):\n if self._more_results is not None:\n if not self._more_results.search(parsed):\n return [None]\n next_tokens = []\n for token in self._output_token:\n next_tokens.append(token.search(parsed))\n return next_tokens\n\n def result_key_iters(self):\n teed_results = tee(self, len(self._result_key))\n return [ResultKeyIterator(i, result_key) for i, result_key\n in zip(teed_results, self._result_key)]\n\n def build_full_result(self):\n iterators = self.result_key_iters()\n if len(iterators) > 1:\n response = {}\n key_names = [i.result_key for i in iterators]\n for key in key_names:\n response[key] = []\n for vals in zip_longest(*iterators):\n for k, val in zip(key_names, vals):\n if val is not None:\n response[k].append(val)\n else:\n response = list(iterators[0])\n return response\n\n\nclass ResultKeyIterator(object):\n \"\"\"Iterates over the results of paginated responses.\"\"\"\n def __init__(self, pages_iterator, result_key):\n self._pages_iterator = pages_iterator\n self.result_key = result_key\n\n def __iter__(self):\n for _, page in self._pages_iterator:\n for result in page.get(self.result_key, []):\n yield result\n", "path": "botocore/paginate.py"}]}
2142
229
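The golden diff in the record above swaps `defaultdict(list)` for a plain pre-initialized dict and skips the `None` values that `itertools.zip_longest` pads shorter iterators with. A minimal, self-contained sketch of that merge pattern (the key names and data here are made up for illustration):

```python
from itertools import zip_longest

def merge_result_keys(key_names, iterators):
    # Pre-initialize one list per result key, as the patched
    # build_full_result does, instead of relying on defaultdict.
    response = {key: [] for key in key_names}
    for vals in zip_longest(*iterators):
        for key, val in zip(key_names, vals):
            if val is not None:  # drop zip_longest's None padding
                response[key].append(val)
    return response

# Uneven streams: the shorter one would otherwise contribute None entries.
print(merge_result_keys(["Users", "Groups"], [iter([1, 2, 3]), iter(["admins"])]))
# -> {'Users': [1, 2, 3], 'Groups': ['admins']}
```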
gh_patches_debug_27745
rasdani/github-patches
git_diff
NVIDIA-Merlin__NVTabular-1357
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [BUG] Test_s3 failing ``` with s3_context(s3_base=s3_base, bucket=engine, files=files) as s3fs: # Create nvt.Dataset from mock s3 paths url = f"s3://{engine}" if engine == "parquet" else f"s3://{engine}/*" dataset = nvt.Dataset(url, engine=engine, storage_options=s3so) # Check that the iteration API works columns = mycols_pq if engine == "parquet" else mycols_csv gdf = nvt.dispatch._concat(list(dataset.to_iter()))[columns] assert_eq(gdf.reset_index(drop=True), df.reset_index(drop=True)) cat_names = ["name-cat", "name-string"] if engine == "parquet" else ["name-string"] cont_names = ["x", "y", "id"] label_name = ["label"] conts = cont_names >> ops.FillMissing() >> ops.Clip(min_value=0) >> ops.LogOp() cats = cat_names >> ops.Categorify(cat_cache="host") processor = nvt.Workflow(conts + cats + label_name) processor.fit(dataset) # make sure we can write out the dataset back to S3 # (https://github.com/NVIDIA-Merlin/NVTabular/issues/1214) > processor.transform(dataset).to_parquet(f"s3://{engine}/output") /nvtabular/tests/unit/test_s3.py:111: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /nvtabular/nvtabular/io/dataset.py:906: in to_parquet self.schema.write(output_path) /nvtabular/nvtabular/graph/schema.py:154: in write return PbTxt_SchemaWriter.write(self, schema_path) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ cls = <class 'nvtabular.graph.schema_io.schema_writer_pbtxt.PbTxt_SchemaWriter'> schema = [{'name': 'x', 'tags': [<Tags.CONTINUOUS: 'continuous'>], 'properties': {}, 'dtype': <class 'float'>, '_is_list': Fals...int'>, '_is_list': False}, {'name': 'label', 'tags': [], 'properties': {}, 'dtype': dtype('int64'), '_is_list': False}] schema_path = PosixPath('s3:/csv/output') @classmethod def write(cls, schema, schema_path): schema_path = Path(schema_path) if not schema_path.is_dir(): > raise ValueError(f"The path provided is not a valid directory: {schema_path}") E ValueError: The path provided is not a valid directory: s3:/csv/output /nvtabular/nvtabular/graph/schema_io/schema_writer_pbtxt.py:45: ValueError ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `nvtabular/graph/schema_io/schema_writer_pbtxt.py` Content: ``` 1 # 2 # Copyright (c) 2021, NVIDIA CORPORATION. 3 # 4 # Licensed under the Apache License, Version 2.0 (the "License"); 5 # you may not use this file except in compliance with the License. 6 # You may obtain a copy of the License at 7 # 8 # http://www.apache.org/licenses/LICENSE-2.0 9 # 10 # Unless required by applicable law or agreed to in writing, software 11 # distributed under the License is distributed on an "AS IS" BASIS, 12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 
15 # 16 import os 17 from pathlib import Path 18 19 import numpy 20 21 os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python" 22 from google.protobuf import json_format, text_format # noqa 23 from google.protobuf.any_pb2 import Any # noqa 24 from google.protobuf.struct_pb2 import Struct # noqa 25 from tensorflow_metadata.proto.v0 import schema_pb2 # noqa 26 27 import nvtabular as nvt # noqa 28 from nvtabular.graph.schema_io.schema_writer_base import SchemaWriter # noqa 29 from nvtabular.graph.tags import Tags # noqa 30 31 32 class PbTxt_SchemaWriter(SchemaWriter): 33 @classmethod 34 def _read(cls, schema_path): 35 with open(schema_path, "r") as f: 36 schema = schema_pb2.Schema() 37 text_format.Parse(f.read(), schema) 38 39 return schema 40 41 @classmethod 42 def write(cls, schema, schema_path): 43 schema_path = Path(schema_path) 44 if not schema_path.is_dir(): 45 raise ValueError(f"The path provided is not a valid directory: {schema_path}") 46 47 # traverse list of column schema 48 schema_file = schema_pb2.Schema() 49 features = [] 50 for col_name, col_schema in schema.column_schemas.items(): 51 features.append(create_protobuf_feature(col_schema)) 52 schema_file.feature.extend(features) 53 54 with open(schema_path / "schema.pbtxt", "w") as f: 55 f.write(text_format.MessageToString(schema_file)) 56 return schema 57 58 @classmethod 59 def load(cls, schema_path): 60 columns = [] 61 if isinstance(schema_path, (str, Path)): 62 if isinstance(schema_path, str): 63 schema_path = Path(schema_path) 64 if schema_path.is_dir(): 65 schema_path = schema_path / "schema.pbtxt" 66 schema = cls._read(schema_path) 67 68 for feat in schema.feature: 69 _is_list = False 70 dtype = None 71 properties = {} 72 tags = list(feat.annotation.tag) or [] 73 # only one item should ever be in extra_metadata 74 if len(feat.annotation.extra_metadata) > 1: 75 raise ValueError( 76 f"{feat.name}: extra_metadata should have 1 item, has \ 77 {len(feat.annotation.extra_metadata)}" 78 ) 79 if feat.annotation.extra_metadata: 80 properties = json_format.MessageToDict(feat.annotation.extra_metadata[0])["value"] 81 # what domain 82 # load the domain values 83 shape_name = feat.WhichOneof("shape_type") 84 if shape_name: 85 _is_list = True 86 field_name = feat.WhichOneof("domain_info") 87 if field_name: 88 domain_values = getattr(feat, field_name) 89 # if zero no values were passed 90 if domain_values.max > 0: 91 properties["domain"] = {"min": domain_values.min, "max": domain_values.max} 92 if feat.type: 93 if feat.type == 2: 94 dtype = numpy.int 95 elif feat.type == 3: 96 dtype = numpy.float 97 columns.append( 98 nvt.ColumnSchema( 99 feat.name, tags=tags, properties=properties, dtype=dtype, _is_list=_is_list 100 ) 101 ) 102 103 return nvt.Schema(columns) 104 105 106 def register_extra_metadata(column_schema, feature): 107 filtered_properties = {k: v for k, v in column_schema.properties.items() if k != "domain"} 108 msg_struct = Struct() 109 # must pack message into "Any" type 110 any_pack = Any() 111 any_pack.Pack(json_format.ParseDict(filtered_properties, msg_struct)) 112 # extra_metadata only takes type "Any" messages 113 feature.annotation.extra_metadata.add().CopyFrom(any_pack) 114 return feature 115 116 117 def register_list(column_schema, feature): 118 if str(column_schema._is_list): 119 min_length, max_length = None, None 120 if "value_count" in column_schema.properties: 121 min_length = column_schema.properties["value_count"]["min"] 122 max_length = column_schema.properties["value_count"]["max"] 123 if min_length and 
max_length and min_length == max_length: 124 shape = schema_pb2.FixedShape() 125 dim = shape.dim.add() 126 dim.size = min_length 127 feature.shape.CopyFrom(shape) 128 elif min_length and max_length and min_length < max_length: 129 feature.value_count.CopyFrom(schema_pb2.ValueCount(min=min_length, max=max_length)) 130 else: 131 # if no min max available set dummy value, to signal this is list 132 feature.value_count.CopyFrom(schema_pb2.ValueCount(min=0, max=0)) 133 return feature 134 135 136 def set_protobuf_float(column_schema, feature): 137 domain = column_schema.properties.get("domain", {}) 138 feature.float_domain.CopyFrom( 139 schema_pb2.FloatDomain( 140 name=column_schema.name, 141 min=domain.get("min", None), 142 max=domain.get("max", None), 143 ) 144 ) 145 feature.type = schema_pb2.FeatureType.FLOAT 146 return feature 147 148 149 def set_protobuf_int(column_schema, feature): 150 domain = column_schema.properties.get("domain", {}) 151 feature.int_domain.CopyFrom( 152 schema_pb2.IntDomain( 153 name=column_schema.name, 154 min=domain.get("min", None), 155 max=domain.get("max", None), 156 is_categorical=( 157 Tags.CATEGORICAL in column_schema.tags 158 or Tags.CATEGORICAL.value in column_schema.tags 159 ), 160 ) 161 ) 162 feature.type = schema_pb2.FeatureType.INT 163 return feature 164 165 166 def register_dtype(column_schema, feature): 167 # column_schema is a dict, changes are held 168 # TODO: this double check can be refactored 169 if column_schema.dtype: 170 if column_schema._is_list: 171 feature = proto_dict["list"](column_schema, feature) 172 if hasattr(column_schema.dtype, "kind"): 173 string_name = numpy.core._dtype._kind_name(column_schema.dtype) 174 elif hasattr(column_schema.dtype, "item"): 175 string_name = type(column_schema.dtype(1).item()).__name__ 176 elif isinstance(column_schema.dtype, str): 177 string_name = column_schema.dtype 178 elif hasattr(column_schema.dtype, "__name__"): 179 string_name = column_schema.dtype.__name__ 180 else: 181 raise TypeError(f"unsupported dtype for column schema: {column_schema.dtype}") 182 183 if string_name in proto_dict: 184 feature = proto_dict[string_name](column_schema, feature) 185 return feature 186 187 188 proto_dict = { 189 "list": register_list, 190 "float": set_protobuf_float, 191 "int": set_protobuf_int, 192 "uint": set_protobuf_int, 193 } 194 195 196 def create_protobuf_feature(column_schema): 197 feature = schema_pb2.Feature() 198 feature.name = column_schema.name 199 feature = register_dtype(column_schema, feature) 200 annotation = feature.annotation 201 annotation.tag.extend( 202 [tag.value if hasattr(tag, "value") else tag for tag in column_schema.tags] 203 ) 204 # can be instantiated with no values 205 # if so, unnecessary to dump 206 # import pdb; pdb.set_trace() 207 if len(column_schema.properties) > 0: 208 feature = register_extra_metadata(column_schema, feature) 209 return feature 210 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/nvtabular/graph/schema_io/schema_writer_pbtxt.py b/nvtabular/graph/schema_io/schema_writer_pbtxt.py --- a/nvtabular/graph/schema_io/schema_writer_pbtxt.py +++ b/nvtabular/graph/schema_io/schema_writer_pbtxt.py @@ -16,6 +16,7 @@ import os from pathlib import Path +import fsspec import numpy os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python" @@ -40,9 +41,7 @@ @classmethod def write(cls, schema, schema_path): - schema_path = Path(schema_path) - if not schema_path.is_dir(): - raise ValueError(f"The path provided is not a valid directory: {schema_path}") + fs = fsspec.get_fs_token_paths(schema_path)[0] # traverse list of column schema schema_file = schema_pb2.Schema() @@ -51,9 +50,16 @@ features.append(create_protobuf_feature(col_schema)) schema_file.feature.extend(features) - with open(schema_path / "schema.pbtxt", "w") as f: - f.write(text_format.MessageToString(schema_file)) - return schema + try: + with fs.open(fs.sep.join([str(schema_path), "schema.pbtxt"]), "w") as f: + f.write(text_format.MessageToString(schema_file)) + return schema + except Exception as e: + if not fs.isdir(schema_path): + raise ValueError( + f"The path provided is not a valid directory: {schema_path}" + ) from e + raise @classmethod def load(cls, schema_path):
{"golden_diff": "diff --git a/nvtabular/graph/schema_io/schema_writer_pbtxt.py b/nvtabular/graph/schema_io/schema_writer_pbtxt.py\n--- a/nvtabular/graph/schema_io/schema_writer_pbtxt.py\n+++ b/nvtabular/graph/schema_io/schema_writer_pbtxt.py\n@@ -16,6 +16,7 @@\n import os\n from pathlib import Path\n \n+import fsspec\n import numpy\n \n os.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n@@ -40,9 +41,7 @@\n \n @classmethod\n def write(cls, schema, schema_path):\n- schema_path = Path(schema_path)\n- if not schema_path.is_dir():\n- raise ValueError(f\"The path provided is not a valid directory: {schema_path}\")\n+ fs = fsspec.get_fs_token_paths(schema_path)[0]\n \n # traverse list of column schema\n schema_file = schema_pb2.Schema()\n@@ -51,9 +50,16 @@\n features.append(create_protobuf_feature(col_schema))\n schema_file.feature.extend(features)\n \n- with open(schema_path / \"schema.pbtxt\", \"w\") as f:\n- f.write(text_format.MessageToString(schema_file))\n- return schema\n+ try:\n+ with fs.open(fs.sep.join([str(schema_path), \"schema.pbtxt\"]), \"w\") as f:\n+ f.write(text_format.MessageToString(schema_file))\n+ return schema\n+ except Exception as e:\n+ if not fs.isdir(schema_path):\n+ raise ValueError(\n+ f\"The path provided is not a valid directory: {schema_path}\"\n+ ) from e\n+ raise\n \n @classmethod\n def load(cls, schema_path):\n", "issue": "[BUG] Test_s3 failing\n```\r\n with s3_context(s3_base=s3_base, bucket=engine, files=files) as s3fs:\r\n # Create nvt.Dataset from mock s3 paths\r\n url = f\"s3://{engine}\" if engine == \"parquet\" else f\"s3://{engine}/*\"\r\n dataset = nvt.Dataset(url, engine=engine, storage_options=s3so)\r\n \r\n # Check that the iteration API works\r\n columns = mycols_pq if engine == \"parquet\" else mycols_csv\r\n gdf = nvt.dispatch._concat(list(dataset.to_iter()))[columns]\r\n assert_eq(gdf.reset_index(drop=True), df.reset_index(drop=True))\r\n \r\n cat_names = [\"name-cat\", \"name-string\"] if engine == \"parquet\" else [\"name-string\"]\r\n cont_names = [\"x\", \"y\", \"id\"]\r\n label_name = [\"label\"]\r\n \r\n conts = cont_names >> ops.FillMissing() >> ops.Clip(min_value=0) >> ops.LogOp()\r\n cats = cat_names >> ops.Categorify(cat_cache=\"host\")\r\n \r\n processor = nvt.Workflow(conts + cats + label_name)\r\n processor.fit(dataset)\r\n \r\n # make sure we can write out the dataset back to S3\r\n # (https://github.com/NVIDIA-Merlin/NVTabular/issues/1214)\r\n> processor.transform(dataset).to_parquet(f\"s3://{engine}/output\")\r\n\r\n/nvtabular/tests/unit/test_s3.py:111: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/nvtabular/nvtabular/io/dataset.py:906: in to_parquet\r\n self.schema.write(output_path)\r\n/nvtabular/nvtabular/graph/schema.py:154: in write\r\n return PbTxt_SchemaWriter.write(self, schema_path)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\ncls = <class 'nvtabular.graph.schema_io.schema_writer_pbtxt.PbTxt_SchemaWriter'>\r\nschema = [{'name': 'x', 'tags': [<Tags.CONTINUOUS: 'continuous'>], 'properties': {}, 'dtype': <class 'float'>, '_is_list': Fals...int'>, '_is_list': False}, {'name': 'label', 'tags': [], 'properties': {}, 'dtype': dtype('int64'), '_is_list': False}]\r\nschema_path = PosixPath('s3:/csv/output')\r\n\r\n @classmethod\r\n def write(cls, schema, schema_path):\r\n schema_path = Path(schema_path)\r\n if not schema_path.is_dir():\r\n> raise ValueError(f\"The path provided is not a valid directory: {schema_path}\")\r\nE 
ValueError: The path provided is not a valid directory: s3:/csv/output\r\n\r\n/nvtabular/nvtabular/graph/schema_io/schema_writer_pbtxt.py:45: ValueError\r\n```\n", "before_files": [{"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport os\nfrom pathlib import Path\n\nimport numpy\n\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\nfrom google.protobuf import json_format, text_format # noqa\nfrom google.protobuf.any_pb2 import Any # noqa\nfrom google.protobuf.struct_pb2 import Struct # noqa\nfrom tensorflow_metadata.proto.v0 import schema_pb2 # noqa\n\nimport nvtabular as nvt # noqa\nfrom nvtabular.graph.schema_io.schema_writer_base import SchemaWriter # noqa\nfrom nvtabular.graph.tags import Tags # noqa\n\n\nclass PbTxt_SchemaWriter(SchemaWriter):\n @classmethod\n def _read(cls, schema_path):\n with open(schema_path, \"r\") as f:\n schema = schema_pb2.Schema()\n text_format.Parse(f.read(), schema)\n\n return schema\n\n @classmethod\n def write(cls, schema, schema_path):\n schema_path = Path(schema_path)\n if not schema_path.is_dir():\n raise ValueError(f\"The path provided is not a valid directory: {schema_path}\")\n\n # traverse list of column schema\n schema_file = schema_pb2.Schema()\n features = []\n for col_name, col_schema in schema.column_schemas.items():\n features.append(create_protobuf_feature(col_schema))\n schema_file.feature.extend(features)\n\n with open(schema_path / \"schema.pbtxt\", \"w\") as f:\n f.write(text_format.MessageToString(schema_file))\n return schema\n\n @classmethod\n def load(cls, schema_path):\n columns = []\n if isinstance(schema_path, (str, Path)):\n if isinstance(schema_path, str):\n schema_path = Path(schema_path)\n if schema_path.is_dir():\n schema_path = schema_path / \"schema.pbtxt\"\n schema = cls._read(schema_path)\n\n for feat in schema.feature:\n _is_list = False\n dtype = None\n properties = {}\n tags = list(feat.annotation.tag) or []\n # only one item should ever be in extra_metadata\n if len(feat.annotation.extra_metadata) > 1:\n raise ValueError(\n f\"{feat.name}: extra_metadata should have 1 item, has \\\n {len(feat.annotation.extra_metadata)}\"\n )\n if feat.annotation.extra_metadata:\n properties = json_format.MessageToDict(feat.annotation.extra_metadata[0])[\"value\"]\n # what domain\n # load the domain values\n shape_name = feat.WhichOneof(\"shape_type\")\n if shape_name:\n _is_list = True\n field_name = feat.WhichOneof(\"domain_info\")\n if field_name:\n domain_values = getattr(feat, field_name)\n # if zero no values were passed\n if domain_values.max > 0:\n properties[\"domain\"] = {\"min\": domain_values.min, \"max\": domain_values.max}\n if feat.type:\n if feat.type == 2:\n dtype = numpy.int\n elif feat.type == 3:\n dtype = numpy.float\n columns.append(\n nvt.ColumnSchema(\n feat.name, tags=tags, properties=properties, dtype=dtype, _is_list=_is_list\n )\n )\n\n return nvt.Schema(columns)\n\n\ndef register_extra_metadata(column_schema, feature):\n 
filtered_properties = {k: v for k, v in column_schema.properties.items() if k != \"domain\"}\n msg_struct = Struct()\n # must pack message into \"Any\" type\n any_pack = Any()\n any_pack.Pack(json_format.ParseDict(filtered_properties, msg_struct))\n # extra_metadata only takes type \"Any\" messages\n feature.annotation.extra_metadata.add().CopyFrom(any_pack)\n return feature\n\n\ndef register_list(column_schema, feature):\n if str(column_schema._is_list):\n min_length, max_length = None, None\n if \"value_count\" in column_schema.properties:\n min_length = column_schema.properties[\"value_count\"][\"min\"]\n max_length = column_schema.properties[\"value_count\"][\"max\"]\n if min_length and max_length and min_length == max_length:\n shape = schema_pb2.FixedShape()\n dim = shape.dim.add()\n dim.size = min_length\n feature.shape.CopyFrom(shape)\n elif min_length and max_length and min_length < max_length:\n feature.value_count.CopyFrom(schema_pb2.ValueCount(min=min_length, max=max_length))\n else:\n # if no min max available set dummy value, to signal this is list\n feature.value_count.CopyFrom(schema_pb2.ValueCount(min=0, max=0))\n return feature\n\n\ndef set_protobuf_float(column_schema, feature):\n domain = column_schema.properties.get(\"domain\", {})\n feature.float_domain.CopyFrom(\n schema_pb2.FloatDomain(\n name=column_schema.name,\n min=domain.get(\"min\", None),\n max=domain.get(\"max\", None),\n )\n )\n feature.type = schema_pb2.FeatureType.FLOAT\n return feature\n\n\ndef set_protobuf_int(column_schema, feature):\n domain = column_schema.properties.get(\"domain\", {})\n feature.int_domain.CopyFrom(\n schema_pb2.IntDomain(\n name=column_schema.name,\n min=domain.get(\"min\", None),\n max=domain.get(\"max\", None),\n is_categorical=(\n Tags.CATEGORICAL in column_schema.tags\n or Tags.CATEGORICAL.value in column_schema.tags\n ),\n )\n )\n feature.type = schema_pb2.FeatureType.INT\n return feature\n\n\ndef register_dtype(column_schema, feature):\n # column_schema is a dict, changes are held\n # TODO: this double check can be refactored\n if column_schema.dtype:\n if column_schema._is_list:\n feature = proto_dict[\"list\"](column_schema, feature)\n if hasattr(column_schema.dtype, \"kind\"):\n string_name = numpy.core._dtype._kind_name(column_schema.dtype)\n elif hasattr(column_schema.dtype, \"item\"):\n string_name = type(column_schema.dtype(1).item()).__name__\n elif isinstance(column_schema.dtype, str):\n string_name = column_schema.dtype\n elif hasattr(column_schema.dtype, \"__name__\"):\n string_name = column_schema.dtype.__name__\n else:\n raise TypeError(f\"unsupported dtype for column schema: {column_schema.dtype}\")\n\n if string_name in proto_dict:\n feature = proto_dict[string_name](column_schema, feature)\n return feature\n\n\nproto_dict = {\n \"list\": register_list,\n \"float\": set_protobuf_float,\n \"int\": set_protobuf_int,\n \"uint\": set_protobuf_int,\n}\n\n\ndef create_protobuf_feature(column_schema):\n feature = schema_pb2.Feature()\n feature.name = column_schema.name\n feature = register_dtype(column_schema, feature)\n annotation = feature.annotation\n annotation.tag.extend(\n [tag.value if hasattr(tag, \"value\") else tag for tag in column_schema.tags]\n )\n # can be instantiated with no values\n # if so, unnecessary to dump\n # import pdb; pdb.set_trace()\n if len(column_schema.properties) > 0:\n feature = register_extra_metadata(column_schema, feature)\n return feature\n", "path": "nvtabular/graph/schema_io/schema_writer_pbtxt.py"}], "after_files": [{"content": 
"#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport os\nfrom pathlib import Path\n\nimport fsspec\nimport numpy\n\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\nfrom google.protobuf import json_format, text_format # noqa\nfrom google.protobuf.any_pb2 import Any # noqa\nfrom google.protobuf.struct_pb2 import Struct # noqa\nfrom tensorflow_metadata.proto.v0 import schema_pb2 # noqa\n\nimport nvtabular as nvt # noqa\nfrom nvtabular.graph.schema_io.schema_writer_base import SchemaWriter # noqa\nfrom nvtabular.graph.tags import Tags # noqa\n\n\nclass PbTxt_SchemaWriter(SchemaWriter):\n @classmethod\n def _read(cls, schema_path):\n with open(schema_path, \"r\") as f:\n schema = schema_pb2.Schema()\n text_format.Parse(f.read(), schema)\n\n return schema\n\n @classmethod\n def write(cls, schema, schema_path):\n fs = fsspec.get_fs_token_paths(schema_path)[0]\n\n # traverse list of column schema\n schema_file = schema_pb2.Schema()\n features = []\n for col_name, col_schema in schema.column_schemas.items():\n features.append(create_protobuf_feature(col_schema))\n schema_file.feature.extend(features)\n\n try:\n with fs.open(fs.sep.join([str(schema_path), \"schema.pbtxt\"]), \"w\") as f:\n f.write(text_format.MessageToString(schema_file))\n return schema\n except Exception as e:\n if not fs.isdir(schema_path):\n raise ValueError(\n f\"The path provided is not a valid directory: {schema_path}\"\n ) from e\n raise\n\n @classmethod\n def load(cls, schema_path):\n columns = []\n if isinstance(schema_path, (str, Path)):\n if isinstance(schema_path, str):\n schema_path = Path(schema_path)\n if schema_path.is_dir():\n schema_path = schema_path / \"schema.pbtxt\"\n schema = cls._read(schema_path)\n\n for feat in schema.feature:\n _is_list = False\n dtype = None\n properties = {}\n tags = list(feat.annotation.tag) or []\n # only one item should ever be in extra_metadata\n if len(feat.annotation.extra_metadata) > 1:\n raise ValueError(\n f\"{feat.name}: extra_metadata should have 1 item, has \\\n {len(feat.annotation.extra_metadata)}\"\n )\n if feat.annotation.extra_metadata:\n properties = json_format.MessageToDict(feat.annotation.extra_metadata[0])[\"value\"]\n # what domain\n # load the domain values\n shape_name = feat.WhichOneof(\"shape_type\")\n if shape_name:\n _is_list = True\n field_name = feat.WhichOneof(\"domain_info\")\n if field_name:\n domain_values = getattr(feat, field_name)\n # if zero no values were passed\n if domain_values.max > 0:\n properties[\"domain\"] = {\"min\": domain_values.min, \"max\": domain_values.max}\n if feat.type:\n if feat.type == 2:\n dtype = numpy.int\n elif feat.type == 3:\n dtype = numpy.float\n columns.append(\n nvt.ColumnSchema(\n feat.name, tags=tags, properties=properties, dtype=dtype, _is_list=_is_list\n )\n )\n\n return nvt.Schema(columns)\n\n\ndef register_extra_metadata(column_schema, feature):\n filtered_properties = {k: v for k, v in column_schema.properties.items() if k != \"domain\"}\n 
msg_struct = Struct()\n # must pack message into \"Any\" type\n any_pack = Any()\n any_pack.Pack(json_format.ParseDict(filtered_properties, msg_struct))\n # extra_metadata only takes type \"Any\" messages\n feature.annotation.extra_metadata.add().CopyFrom(any_pack)\n return feature\n\n\ndef register_list(column_schema, feature):\n if str(column_schema._is_list):\n min_length, max_length = None, None\n if \"value_count\" in column_schema.properties:\n min_length = column_schema.properties[\"value_count\"][\"min\"]\n max_length = column_schema.properties[\"value_count\"][\"max\"]\n if min_length and max_length and min_length == max_length:\n shape = schema_pb2.FixedShape()\n dim = shape.dim.add()\n dim.size = min_length\n feature.shape.CopyFrom(shape)\n elif min_length and max_length and min_length < max_length:\n feature.value_count.CopyFrom(schema_pb2.ValueCount(min=min_length, max=max_length))\n else:\n # if no min max available set dummy value, to signal this is list\n feature.value_count.CopyFrom(schema_pb2.ValueCount(min=0, max=0))\n return feature\n\n\ndef set_protobuf_float(column_schema, feature):\n domain = column_schema.properties.get(\"domain\", {})\n feature.float_domain.CopyFrom(\n schema_pb2.FloatDomain(\n name=column_schema.name,\n min=domain.get(\"min\", None),\n max=domain.get(\"max\", None),\n )\n )\n feature.type = schema_pb2.FeatureType.FLOAT\n return feature\n\n\ndef set_protobuf_int(column_schema, feature):\n domain = column_schema.properties.get(\"domain\", {})\n feature.int_domain.CopyFrom(\n schema_pb2.IntDomain(\n name=column_schema.name,\n min=domain.get(\"min\", None),\n max=domain.get(\"max\", None),\n is_categorical=(\n Tags.CATEGORICAL in column_schema.tags\n or Tags.CATEGORICAL.value in column_schema.tags\n ),\n )\n )\n feature.type = schema_pb2.FeatureType.INT\n return feature\n\n\ndef register_dtype(column_schema, feature):\n # column_schema is a dict, changes are held\n # TODO: this double check can be refactored\n if column_schema.dtype:\n if column_schema._is_list:\n feature = proto_dict[\"list\"](column_schema, feature)\n if hasattr(column_schema.dtype, \"kind\"):\n string_name = numpy.core._dtype._kind_name(column_schema.dtype)\n elif hasattr(column_schema.dtype, \"item\"):\n string_name = type(column_schema.dtype(1).item()).__name__\n elif isinstance(column_schema.dtype, str):\n string_name = column_schema.dtype\n elif hasattr(column_schema.dtype, \"__name__\"):\n string_name = column_schema.dtype.__name__\n else:\n raise TypeError(f\"unsupported dtype for column schema: {column_schema.dtype}\")\n\n if string_name in proto_dict:\n feature = proto_dict[string_name](column_schema, feature)\n return feature\n\n\nproto_dict = {\n \"list\": register_list,\n \"float\": set_protobuf_float,\n \"int\": set_protobuf_int,\n \"uint\": set_protobuf_int,\n}\n\n\ndef create_protobuf_feature(column_schema):\n feature = schema_pb2.Feature()\n feature.name = column_schema.name\n feature = register_dtype(column_schema, feature)\n annotation = feature.annotation\n annotation.tag.extend(\n [tag.value if hasattr(tag, \"value\") else tag for tag in column_schema.tags]\n )\n # can be instantiated with no values\n # if so, unnecessary to dump\n # import pdb; pdb.set_trace()\n if len(column_schema.properties) > 0:\n feature = register_extra_metadata(column_schema, feature)\n return feature\n", "path": "nvtabular/graph/schema_io/schema_writer_pbtxt.py"}]}
3133
370
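The fix in this record replaces the `pathlib`-based directory check with `fsspec`, so the same write path works for local directories and remote URLs such as `s3://bucket/output`. A rough standalone sketch mirroring the patched `write` method, assuming `fsspec` is installed (the directory and payload are placeholders):

```python
import fsspec

def write_schema_text(schema_dir, payload):
    # Resolve a filesystem object from the path/URL itself, as the
    # patched PbTxt_SchemaWriter.write does, instead of using pathlib.
    fs = fsspec.get_fs_token_paths(schema_dir)[0]
    with fs.open(fs.sep.join([str(schema_dir), "schema.pbtxt"]), "w") as f:
        f.write(payload)

# Works for a local directory; with s3fs installed the same call
# accepts an s3:// URL.
write_schema_text("/tmp/example_schema", 'feature { name: "x" }\n')
```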
gh_patches_debug_33708
rasdani/github-patches
git_diff
cookiecutter__cookiecutter-1304
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- PEP257 docstrings for file "./docs/conf.py" Cover `./docs/conf.py` file with docstrings and follow [PEP257](https://www.python.org/dev/peps/pep-0257/). We use [pydocstyle](https://pypi.org/project/pydocstyle/) for validation. Current validation log: ``` ./docs/conf.py:1 at module level: D100: Missing docstring in public module ./docs/conf.py:28 in public class `Mock`: D101: Missing docstring in public class ./docs/conf.py:29 in public method `__init__`: D107: Missing docstring in __init__ ./docs/conf.py:32 in public method `__call__`: D102: Missing docstring in public method ./docs/conf.py:36 in public method `__getattr__`: D105: Missing docstring in magic method ``` Subtask for #742 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `docs/ccext.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 """Custom Sphinx extension to build a list of all of cookiecutter's cli.""" 4 5 import click 6 from docutils import nodes 7 from docutils.parsers import rst 8 from docutils.statemachine import ViewList 9 10 from cookiecutter import cli 11 12 13 class CcCommandLineOptions(rst.Directive): 14 def _format_option(self, option): 15 return [ 16 ".. _`%s`:" % option.name, 17 "", 18 ".. option:: " + ", ".join(option.opts), 19 "", 20 option.help, 21 "" 22 ] 23 24 def process_actions(self): 25 for option in cli.main.params: 26 if isinstance(option, click.core.Option): 27 for line in self._format_option(option): 28 self.view_list.append(line, "") 29 30 def run(self): 31 node = nodes.paragraph() 32 node.document = self.state.document 33 self.view_list = ViewList() 34 self.process_actions() 35 self.state.nested_parse(self.view_list, 0, node) 36 return [node] 37 38 39 def setup(app): 40 app.add_directive('cc-command-line-options', CcCommandLineOptions) 41 ``` Path: `cookiecutter/extensions.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 """Jinja2 extensions.""" 4 5 import json 6 import string 7 try: 8 # Python 3.6 and above 9 from secrets import choice 10 except ImportError: 11 from random import choice 12 13 from jinja2.ext import Extension 14 15 16 class JsonifyExtension(Extension): 17 """Jinja2 extension to convert a Python object to JSON.""" 18 19 def __init__(self, environment): 20 """Initialize the extension with the given environment.""" 21 super(JsonifyExtension, self).__init__(environment) 22 23 def jsonify(obj): 24 return json.dumps(obj, sort_keys=True, indent=4) 25 26 environment.filters['jsonify'] = jsonify 27 28 29 class RandomStringExtension(Extension): 30 """Jinja2 extension to create a random string.""" 31 32 def __init__(self, environment): 33 """Jinja2 Extension Constructor""" 34 super(RandomStringExtension, self).__init__(environment) 35 36 def random_ascii_string(length, punctuation=False): 37 if punctuation: 38 corpus = "".join((string.ascii_letters, string.punctuation)) 39 else: 40 corpus = string.ascii_letters 41 return "".join(choice(corpus) for _ in range(length)) 42 environment.globals.update(random_ascii_string=random_ascii_string) 43 ``` Path: `setup.py` Content: ``` 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 4 """cookiecutter distutils configuration""" 5 6 import os 7 import io 8 import sys 9 10 from setuptools import setup 11 12 version = "1.7.0" 13 14 if sys.argv[-1] == 'publish': 15 os.system('python setup.py sdist upload') 16 os.system('python setup.py bdist_wheel upload') 
17 sys.exit() 18 19 if sys.argv[-1] == 'tag': 20 os.system("git tag -a %s -m 'version %s'" % (version, version)) 21 os.system("git push --tags") 22 sys.exit() 23 24 with io.open('README.md', 'r', encoding='utf-8') as readme_file: 25 readme = readme_file.read() 26 27 requirements = [ 28 'binaryornot>=0.2.0', 29 'jinja2>=2.7', 30 'click>=7.0', 31 'poyo>=0.1.0', 32 'jinja2-time>=0.1.0', 33 'requests>=2.18.0', 34 'six>=1.10', 35 ] 36 37 if sys.argv[-1] == 'readme': 38 print(readme) 39 sys.exit() 40 41 42 setup( 43 name='cookiecutter', 44 version=version, 45 description=('A command-line utility that creates projects from project ' 46 'templates, e.g. creating a Python package project from a ' 47 'Python package project template.'), 48 long_description=readme, 49 long_description_content_type='text/markdown', 50 author='Audrey Roy', 51 author_email='[email protected]', 52 url='https://github.com/cookiecutter/cookiecutter', 53 packages=[ 54 'cookiecutter', 55 ], 56 package_dir={'cookiecutter': 'cookiecutter'}, 57 entry_points={ 58 'console_scripts': [ 59 'cookiecutter = cookiecutter.__main__:main', 60 ] 61 }, 62 include_package_data=True, 63 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*', 64 install_requires=requirements, 65 extras_require={ 66 ':python_version<"3.3"': ['whichcraft>=0.4.0'], 67 }, 68 license='BSD', 69 zip_safe=False, 70 classifiers=[ 71 "Development Status :: 5 - Production/Stable", 72 "Environment :: Console", 73 "Intended Audience :: Developers", 74 "Natural Language :: English", 75 "License :: OSI Approved :: BSD License", 76 "Programming Language :: Python", 77 "Programming Language :: Python :: 2", 78 "Programming Language :: Python :: 2.7", 79 "Programming Language :: Python :: 3", 80 "Programming Language :: Python :: 3.5", 81 "Programming Language :: Python :: 3.6", 82 "Programming Language :: Python :: 3.7", 83 "Programming Language :: Python :: 3.8", 84 "Programming Language :: Python :: Implementation :: CPython", 85 "Programming Language :: Python :: Implementation :: PyPy", 86 "Topic :: Software Development", 87 ], 88 keywords=( 89 'cookiecutter, Python, projects, project templates, Jinja2, ' 90 'skeleton, scaffolding, project directory, setup.py, package, ' 91 'packaging' 92 ), 93 ) 94 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/cookiecutter/extensions.py b/cookiecutter/extensions.py --- a/cookiecutter/extensions.py +++ b/cookiecutter/extensions.py @@ -30,7 +30,7 @@ """Jinja2 extension to create a random string.""" def __init__(self, environment): - """Jinja2 Extension Constructor""" + """Jinja2 Extension Constructor.""" super(RandomStringExtension, self).__init__(environment) def random_ascii_string(length, punctuation=False): diff --git a/docs/ccext.py b/docs/ccext.py --- a/docs/ccext.py +++ b/docs/ccext.py @@ -11,7 +11,10 @@ class CcCommandLineOptions(rst.Directive): + """Custom docutils extension class to parse cli commands from code.""" + def _format_option(self, option): + """Do cli options formatting.""" return [ ".. _`%s`:" % option.name, "", @@ -22,12 +25,14 @@ ] def process_actions(self): + """Get options from cookiecutter, send to formatter, prepare result.""" for option in cli.main.params: if isinstance(option, click.core.Option): for line in self._format_option(option): self.view_list.append(line, "") def run(self): + """Override `run` in `rst.Directive` class.""" node = nodes.paragraph() node.document = self.state.document self.view_list = ViewList() @@ -37,4 +42,5 @@ def setup(app): + """Register a Docutils extension directive.""" app.add_directive('cc-command-line-options', CcCommandLineOptions) diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1,7 +1,7 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -"""cookiecutter distutils configuration""" +"""cookiecutter distutils configuration.""" import os import io
{"golden_diff": "diff --git a/cookiecutter/extensions.py b/cookiecutter/extensions.py\n--- a/cookiecutter/extensions.py\n+++ b/cookiecutter/extensions.py\n@@ -30,7 +30,7 @@\n \"\"\"Jinja2 extension to create a random string.\"\"\"\n \n def __init__(self, environment):\n- \"\"\"Jinja2 Extension Constructor\"\"\"\n+ \"\"\"Jinja2 Extension Constructor.\"\"\"\n super(RandomStringExtension, self).__init__(environment)\n \n def random_ascii_string(length, punctuation=False):\ndiff --git a/docs/ccext.py b/docs/ccext.py\n--- a/docs/ccext.py\n+++ b/docs/ccext.py\n@@ -11,7 +11,10 @@\n \n \n class CcCommandLineOptions(rst.Directive):\n+ \"\"\"Custom docutils extension class to parse cli commands from code.\"\"\"\n+\n def _format_option(self, option):\n+ \"\"\"Do cli options formatting.\"\"\"\n return [\n \".. _`%s`:\" % option.name,\n \"\",\n@@ -22,12 +25,14 @@\n ]\n \n def process_actions(self):\n+ \"\"\"Get options from cookiecutter, send to formatter, prepare result.\"\"\"\n for option in cli.main.params:\n if isinstance(option, click.core.Option):\n for line in self._format_option(option):\n self.view_list.append(line, \"\")\n \n def run(self):\n+ \"\"\"Override `run` in `rst.Directive` class.\"\"\"\n node = nodes.paragraph()\n node.document = self.state.document\n self.view_list = ViewList()\n@@ -37,4 +42,5 @@\n \n \n def setup(app):\n+ \"\"\"Register a Docutils extension directive.\"\"\"\n app.add_directive('cc-command-line-options', CcCommandLineOptions)\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,7 +1,7 @@\n #!/usr/bin/env python\n # -*- coding: utf-8 -*-\n \n-\"\"\"cookiecutter distutils configuration\"\"\"\n+\"\"\"cookiecutter distutils configuration.\"\"\"\n \n import os\n import io\n", "issue": "PEP257 docstrings for file \"./docs/conf.py\"\nCover `./docs/conf.py` file with docstrings and follow [PEP257](https://www.python.org/dev/peps/pep-0257/). We use [pydocstyle](https://pypi.org/project/pydocstyle/) for validation.\r\n\r\nCurrent validation log:\r\n\r\n```\r\n./docs/conf.py:1 at module level:\r\n D100: Missing docstring in public module\r\n./docs/conf.py:28 in public class `Mock`:\r\n D101: Missing docstring in public class\r\n./docs/conf.py:29 in public method `__init__`:\r\n D107: Missing docstring in __init__\r\n./docs/conf.py:32 in public method `__call__`:\r\n D102: Missing docstring in public method\r\n./docs/conf.py:36 in public method `__getattr__`:\r\n D105: Missing docstring in magic method\r\n```\r\n\r\nSubtask for #742 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Custom Sphinx extension to build a list of all of cookiecutter's cli.\"\"\"\n\nimport click\nfrom docutils import nodes\nfrom docutils.parsers import rst\nfrom docutils.statemachine import ViewList\n\nfrom cookiecutter import cli\n\n\nclass CcCommandLineOptions(rst.Directive):\n def _format_option(self, option):\n return [\n \".. _`%s`:\" % option.name,\n \"\",\n \".. 
option:: \" + \", \".join(option.opts),\n \"\",\n option.help,\n \"\"\n ]\n\n def process_actions(self):\n for option in cli.main.params:\n if isinstance(option, click.core.Option):\n for line in self._format_option(option):\n self.view_list.append(line, \"\")\n\n def run(self):\n node = nodes.paragraph()\n node.document = self.state.document\n self.view_list = ViewList()\n self.process_actions()\n self.state.nested_parse(self.view_list, 0, node)\n return [node]\n\n\ndef setup(app):\n app.add_directive('cc-command-line-options', CcCommandLineOptions)\n", "path": "docs/ccext.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Jinja2 extensions.\"\"\"\n\nimport json\nimport string\ntry:\n # Python 3.6 and above\n from secrets import choice\nexcept ImportError:\n from random import choice\n\nfrom jinja2.ext import Extension\n\n\nclass JsonifyExtension(Extension):\n \"\"\"Jinja2 extension to convert a Python object to JSON.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Initialize the extension with the given environment.\"\"\"\n super(JsonifyExtension, self).__init__(environment)\n\n def jsonify(obj):\n return json.dumps(obj, sort_keys=True, indent=4)\n\n environment.filters['jsonify'] = jsonify\n\n\nclass RandomStringExtension(Extension):\n \"\"\"Jinja2 extension to create a random string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension Constructor\"\"\"\n super(RandomStringExtension, self).__init__(environment)\n\n def random_ascii_string(length, punctuation=False):\n if punctuation:\n corpus = \"\".join((string.ascii_letters, string.punctuation))\n else:\n corpus = string.ascii_letters\n return \"\".join(choice(corpus) for _ in range(length))\n environment.globals.update(random_ascii_string=random_ascii_string)\n", "path": "cookiecutter/extensions.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"cookiecutter distutils configuration\"\"\"\n\nimport os\nimport io\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.7.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith io.open('README.md', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=7.0',\n 'poyo>=0.1.0',\n 'jinja2-time>=0.1.0',\n 'requests>=2.18.0',\n 'six>=1.10',\n]\n\nif sys.argv[-1] == 'readme':\n print(readme)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.__main__:main',\n ]\n },\n include_package_data=True,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=requirements,\n extras_require={\n ':python_version<\"3.3\"': ['whichcraft>=0.4.0'],\n },\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development\",\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Custom Sphinx extension to build a list of all of cookiecutter's cli.\"\"\"\n\nimport click\nfrom docutils import nodes\nfrom docutils.parsers import rst\nfrom docutils.statemachine import ViewList\n\nfrom cookiecutter import cli\n\n\nclass CcCommandLineOptions(rst.Directive):\n \"\"\"Custom docutils extension class to parse cli commands from code.\"\"\"\n\n def _format_option(self, option):\n \"\"\"Do cli options formatting.\"\"\"\n return [\n \".. _`%s`:\" % option.name,\n \"\",\n \".. 
option:: \" + \", \".join(option.opts),\n \"\",\n option.help,\n \"\"\n ]\n\n def process_actions(self):\n \"\"\"Get options from cookiecutter, send to formatter, prepare result.\"\"\"\n for option in cli.main.params:\n if isinstance(option, click.core.Option):\n for line in self._format_option(option):\n self.view_list.append(line, \"\")\n\n def run(self):\n \"\"\"Override `run` in `rst.Directive` class.\"\"\"\n node = nodes.paragraph()\n node.document = self.state.document\n self.view_list = ViewList()\n self.process_actions()\n self.state.nested_parse(self.view_list, 0, node)\n return [node]\n\n\ndef setup(app):\n \"\"\"Register a Docutils extension directive.\"\"\"\n app.add_directive('cc-command-line-options', CcCommandLineOptions)\n", "path": "docs/ccext.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Jinja2 extensions.\"\"\"\n\nimport json\nimport string\ntry:\n # Python 3.6 and above\n from secrets import choice\nexcept ImportError:\n from random import choice\n\nfrom jinja2.ext import Extension\n\n\nclass JsonifyExtension(Extension):\n \"\"\"Jinja2 extension to convert a Python object to JSON.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Initialize the extension with the given environment.\"\"\"\n super(JsonifyExtension, self).__init__(environment)\n\n def jsonify(obj):\n return json.dumps(obj, sort_keys=True, indent=4)\n\n environment.filters['jsonify'] = jsonify\n\n\nclass RandomStringExtension(Extension):\n \"\"\"Jinja2 extension to create a random string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension Constructor.\"\"\"\n super(RandomStringExtension, self).__init__(environment)\n\n def random_ascii_string(length, punctuation=False):\n if punctuation:\n corpus = \"\".join((string.ascii_letters, string.punctuation))\n else:\n corpus = string.ascii_letters\n return \"\".join(choice(corpus) for _ in range(length))\n environment.globals.update(random_ascii_string=random_ascii_string)\n", "path": "cookiecutter/extensions.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"cookiecutter distutils configuration.\"\"\"\n\nimport os\nimport io\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.7.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith io.open('README.md', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=7.0',\n 'poyo>=0.1.0',\n 'jinja2-time>=0.1.0',\n 'requests>=2.18.0',\n 'six>=1.10',\n]\n\nif sys.argv[-1] == 'readme':\n print(readme)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.__main__:main',\n ]\n },\n include_package_data=True,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=requirements,\n extras_require={\n ':python_version<\"3.3\"': ['whichcraft>=0.4.0'],\n },\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development\",\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}]}
2050
446
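The issue in this record is validated with `pydocstyle`, and the D-codes it lists map one-to-one onto missing docstring locations in `docs/conf.py`. A small hypothetical reconstruction (the real `conf.py` body is not included in the record) that would clear those checks — D100 for the module, D101 for the class, D107 for `__init__`, D102 for `__call__`, D105 for `__getattr__`:

```python
"""Example Sphinx config module docstring, clearing pydocstyle's D100."""


class Mock(object):
    """Mock out imports that are unavailable at docs build time (D101)."""

    def __init__(self, *args, **kwargs):
        """Accept and discard any constructor arguments (D107)."""

    def __call__(self, *args, **kwargs):
        """Return a fresh Mock so call chains keep working (D102)."""
        return Mock()

    def __getattr__(self, name):
        """Return a Mock for any attribute lookup (D105)."""
        return Mock()
```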
gh_patches_debug_4070
rasdani/github-patches
git_diff
scrapy__scrapy-4033
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- may be 'accessible'? in the function [request_fingerprint](https://github.com/scrapy/scrapy/blob/master/scrapy/utils/request.py) ,‘accesible’ may be ‘accessible’ in comments. OCD XD.. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `scrapy/utils/request.py` Content: ``` 1 """ 2 This module provides some useful functions for working with 3 scrapy.http.Request objects 4 """ 5 6 from __future__ import print_function 7 import hashlib 8 import weakref 9 from six.moves.urllib.parse import urlunparse 10 11 from w3lib.http import basic_auth_header 12 from scrapy.utils.python import to_bytes, to_native_str 13 14 from w3lib.url import canonicalize_url 15 from scrapy.utils.httpobj import urlparse_cached 16 17 18 _fingerprint_cache = weakref.WeakKeyDictionary() 19 def request_fingerprint(request, include_headers=None): 20 """ 21 Return the request fingerprint. 22 23 The request fingerprint is a hash that uniquely identifies the resource the 24 request points to. For example, take the following two urls: 25 26 http://www.example.com/query?id=111&cat=222 27 http://www.example.com/query?cat=222&id=111 28 29 Even though those are two different URLs both point to the same resource 30 and are equivalent (ie. they should return the same response). 31 32 Another example are cookies used to store session ids. Suppose the 33 following page is only accesible to authenticated users: 34 35 http://www.example.com/members/offers.html 36 37 Lot of sites use a cookie to store the session id, which adds a random 38 component to the HTTP Request and thus should be ignored when calculating 39 the fingerprint. 40 41 For this reason, request headers are ignored by default when calculating 42 the fingeprint. If you want to include specific headers use the 43 include_headers argument, which is a list of Request headers to include. 44 45 """ 46 if include_headers: 47 include_headers = tuple(to_bytes(h.lower()) 48 for h in sorted(include_headers)) 49 cache = _fingerprint_cache.setdefault(request, {}) 50 if include_headers not in cache: 51 fp = hashlib.sha1() 52 fp.update(to_bytes(request.method)) 53 fp.update(to_bytes(canonicalize_url(request.url))) 54 fp.update(request.body or b'') 55 if include_headers: 56 for hdr in include_headers: 57 if hdr in request.headers: 58 fp.update(hdr) 59 for v in request.headers.getlist(hdr): 60 fp.update(v) 61 cache[include_headers] = fp.hexdigest() 62 return cache[include_headers] 63 64 65 def request_authenticate(request, username, password): 66 """Autenticate the given request (in place) using the HTTP basic access 67 authentication mechanism (RFC 2617) and the given username and password 68 """ 69 request.headers['Authorization'] = basic_auth_header(username, password) 70 71 72 def request_httprepr(request): 73 """Return the raw HTTP representation (as bytes) of the given request. 74 This is provided only for reference since it's not the actual stream of 75 bytes that will be send when performing the request (that's controlled 76 by Twisted). 77 """ 78 parsed = urlparse_cached(request) 79 path = urlunparse(('', '', parsed.path or '/', parsed.params, parsed.query, '')) 80 s = to_bytes(request.method) + b" " + to_bytes(path) + b" HTTP/1.1\r\n" 81 s += b"Host: " + to_bytes(parsed.hostname or b'') + b"\r\n" 82 if request.headers: 83 s += request.headers.to_string() + b"\r\n" 84 s += b"\r\n" 85 s += request.body 86 return s 87 88 89 def referer_str(request): 90 """ Return Referer HTTP header suitable for logging. """ 91 referrer = request.headers.get('Referer') 92 if referrer is None: 93 return referrer 94 return to_native_str(referrer, errors='replace') 95 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/scrapy/utils/request.py b/scrapy/utils/request.py --- a/scrapy/utils/request.py +++ b/scrapy/utils/request.py @@ -30,7 +30,7 @@ and are equivalent (ie. they should return the same response). Another example are cookies used to store session ids. Suppose the - following page is only accesible to authenticated users: + following page is only accessible to authenticated users: http://www.example.com/members/offers.html
{"golden_diff": "diff --git a/scrapy/utils/request.py b/scrapy/utils/request.py\n--- a/scrapy/utils/request.py\n+++ b/scrapy/utils/request.py\n@@ -30,7 +30,7 @@\n and are equivalent (ie. they should return the same response).\n \n Another example are cookies used to store session ids. Suppose the\n- following page is only accesible to authenticated users:\n+ following page is only accessible to authenticated users:\n \n http://www.example.com/members/offers.html\n", "issue": "may be 'accessible'?\nin the function [request_fingerprint](https://github.com/scrapy/scrapy/blob/master/scrapy/utils/request.py) \uff0c\u2018accesible\u2019 may be \u2018accessible\u2019 in comments. OCD XD..\n", "before_files": [{"content": "\"\"\"\nThis module provides some useful functions for working with\nscrapy.http.Request objects\n\"\"\"\n\nfrom __future__ import print_function\nimport hashlib\nimport weakref\nfrom six.moves.urllib.parse import urlunparse\n\nfrom w3lib.http import basic_auth_header\nfrom scrapy.utils.python import to_bytes, to_native_str\n\nfrom w3lib.url import canonicalize_url\nfrom scrapy.utils.httpobj import urlparse_cached\n\n\n_fingerprint_cache = weakref.WeakKeyDictionary()\ndef request_fingerprint(request, include_headers=None):\n \"\"\"\n Return the request fingerprint.\n\n The request fingerprint is a hash that uniquely identifies the resource the\n request points to. For example, take the following two urls:\n\n http://www.example.com/query?id=111&cat=222\n http://www.example.com/query?cat=222&id=111\n\n Even though those are two different URLs both point to the same resource\n and are equivalent (ie. they should return the same response).\n\n Another example are cookies used to store session ids. Suppose the\n following page is only accesible to authenticated users:\n\n http://www.example.com/members/offers.html\n\n Lot of sites use a cookie to store the session id, which adds a random\n component to the HTTP Request and thus should be ignored when calculating\n the fingerprint.\n\n For this reason, request headers are ignored by default when calculating\n the fingeprint. If you want to include specific headers use the\n include_headers argument, which is a list of Request headers to include.\n\n \"\"\"\n if include_headers:\n include_headers = tuple(to_bytes(h.lower())\n for h in sorted(include_headers))\n cache = _fingerprint_cache.setdefault(request, {})\n if include_headers not in cache:\n fp = hashlib.sha1()\n fp.update(to_bytes(request.method))\n fp.update(to_bytes(canonicalize_url(request.url)))\n fp.update(request.body or b'')\n if include_headers:\n for hdr in include_headers:\n if hdr in request.headers:\n fp.update(hdr)\n for v in request.headers.getlist(hdr):\n fp.update(v)\n cache[include_headers] = fp.hexdigest()\n return cache[include_headers]\n\n\ndef request_authenticate(request, username, password):\n \"\"\"Autenticate the given request (in place) using the HTTP basic access\n authentication mechanism (RFC 2617) and the given username and password\n \"\"\"\n request.headers['Authorization'] = basic_auth_header(username, password)\n\n\ndef request_httprepr(request):\n \"\"\"Return the raw HTTP representation (as bytes) of the given request.\n This is provided only for reference since it's not the actual stream of\n bytes that will be send when performing the request (that's controlled\n by Twisted).\n \"\"\"\n parsed = urlparse_cached(request)\n path = urlunparse(('', '', parsed.path or '/', parsed.params, parsed.query, ''))\n s = to_bytes(request.method) + b\" \" + to_bytes(path) + b\" HTTP/1.1\\r\\n\"\n s += b\"Host: \" + to_bytes(parsed.hostname or b'') + b\"\\r\\n\"\n if request.headers:\n s += request.headers.to_string() + b\"\\r\\n\"\n s += b\"\\r\\n\"\n s += request.body\n return s\n\n\ndef referer_str(request):\n \"\"\" Return Referer HTTP header suitable for logging. \"\"\"\n referrer = request.headers.get('Referer')\n if referrer is None:\n return referrer\n return to_native_str(referrer, errors='replace')\n", "path": "scrapy/utils/request.py"}], "after_files": [{"content": "\"\"\"\nThis module provides some useful functions for working with\nscrapy.http.Request objects\n\"\"\"\n\nfrom __future__ import print_function\nimport hashlib\nimport weakref\nfrom six.moves.urllib.parse import urlunparse\n\nfrom w3lib.http import basic_auth_header\nfrom scrapy.utils.python import to_bytes, to_native_str\n\nfrom w3lib.url import canonicalize_url\nfrom scrapy.utils.httpobj import urlparse_cached\n\n\n_fingerprint_cache = weakref.WeakKeyDictionary()\ndef request_fingerprint(request, include_headers=None):\n \"\"\"\n Return the request fingerprint.\n\n The request fingerprint is a hash that uniquely identifies the resource the\n request points to. For example, take the following two urls:\n\n http://www.example.com/query?id=111&cat=222\n http://www.example.com/query?cat=222&id=111\n\n Even though those are two different URLs both point to the same resource\n and are equivalent (ie. they should return the same response).\n\n Another example are cookies used to store session ids. Suppose the\n following page is only accessible to authenticated users:\n\n http://www.example.com/members/offers.html\n\n Lot of sites use a cookie to store the session id, which adds a random\n component to the HTTP Request and thus should be ignored when calculating\n the fingerprint.\n\n For this reason, request headers are ignored by default when calculating\n the fingeprint. If you want to include specific headers use the\n include_headers argument, which is a list of Request headers to include.\n\n \"\"\"\n if include_headers:\n include_headers = tuple(to_bytes(h.lower())\n for h in sorted(include_headers))\n cache = _fingerprint_cache.setdefault(request, {})\n if include_headers not in cache:\n fp = hashlib.sha1()\n fp.update(to_bytes(request.method))\n fp.update(to_bytes(canonicalize_url(request.url)))\n fp.update(request.body or b'')\n if include_headers:\n for hdr in include_headers:\n if hdr in request.headers:\n fp.update(hdr)\n for v in request.headers.getlist(hdr):\n fp.update(v)\n cache[include_headers] = fp.hexdigest()\n return cache[include_headers]\n\n\ndef request_authenticate(request, username, password):\n \"\"\"Autenticate the given request (in place) using the HTTP basic access\n authentication mechanism (RFC 2617) and the given username and password\n \"\"\"\n request.headers['Authorization'] = basic_auth_header(username, password)\n\n\ndef request_httprepr(request):\n \"\"\"Return the raw HTTP representation (as bytes) of the given request.\n This is provided only for reference since it's not the actual stream of\n bytes that will be send when performing the request (that's controlled\n by Twisted).\n \"\"\"\n parsed = urlparse_cached(request)\n path = urlunparse(('', '', parsed.path or '/', parsed.params, parsed.query, ''))\n s = to_bytes(request.method) + b\" \" + to_bytes(path) + b\" HTTP/1.1\\r\\n\"\n s += b\"Host: \" + to_bytes(parsed.hostname or b'') + b\"\\r\\n\"\n if request.headers:\n s += request.headers.to_string() + b\"\\r\\n\"\n s += b\"\\r\\n\"\n s += request.body\n return s\n\n\ndef referer_str(request):\n \"\"\" Return Referer HTTP header suitable for logging. \"\"\"\n referrer = request.headers.get('Referer')\n if referrer is None:\n return referrer\n return to_native_str(referrer, errors='replace')\n", "path": "scrapy/utils/request.py"}]}
1,263
109
gh_patches_debug_6415
rasdani/github-patches
git_diff
open-mmlab__mmpose-1338
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- 框架有很多没有通过测试的Bug 在我运行res50_freihand2d_224x224.py的过程中就碰到了两个问题 1. tools/dist_train.sh里面没有定义$MASTER_ADDR 2. res50_freihand2d_224x224.py 没有定义runner = dict(type='EpochBasedRunner', max_epochs=total_epochs) 其他的还没有跑,但感觉会有很多小问题 发布的版本有点粗糙呀,期待把这些小Bug尽快解决,后续很期待在这个框架上做更多实验 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `tools/train.py` Content: ``` 1 # Copyright (c) OpenMMLab. All rights reserved. 2 import argparse 3 import copy 4 import os 5 import os.path as osp 6 import time 7 import warnings 8 9 import mmcv 10 import torch 11 import torch.distributed as dist 12 from mmcv import Config, DictAction 13 from mmcv.runner import get_dist_info, init_dist, set_random_seed 14 from mmcv.utils import get_git_hash 15 16 from mmpose import __version__ 17 from mmpose.apis import init_random_seed, train_model 18 from mmpose.datasets import build_dataset 19 from mmpose.models import build_posenet 20 from mmpose.utils import collect_env, get_root_logger, setup_multi_processes 21 22 23 def parse_args(): 24 parser = argparse.ArgumentParser(description='Train a pose model') 25 parser.add_argument('config', help='train config file path') 26 parser.add_argument('--work-dir', help='the dir to save logs and models') 27 parser.add_argument( 28 '--resume-from', help='the checkpoint file to resume from') 29 parser.add_argument( 30 '--no-validate', 31 action='store_true', 32 help='whether not to evaluate the checkpoint during training') 33 group_gpus = parser.add_mutually_exclusive_group() 34 group_gpus.add_argument( 35 '--gpus', 36 type=int, 37 help='(Deprecated, please use --gpu-id) number of gpus to use ' 38 '(only applicable to non-distributed training)') 39 group_gpus.add_argument( 40 '--gpu-ids', 41 type=int, 42 nargs='+', 43 help='(Deprecated, please use --gpu-id) ids of gpus to use ' 44 '(only applicable to non-distributed training)') 45 group_gpus.add_argument( 46 '--gpu-id', 47 type=int, 48 default=0, 49 help='id of gpu to use ' 50 '(only applicable to non-distributed training)') 51 parser.add_argument('--seed', type=int, default=None, help='random seed') 52 parser.add_argument( 53 '--diff_seed', 54 action='store_true', 55 help='Whether or not set different seeds for different ranks') 56 parser.add_argument( 57 '--deterministic', 58 action='store_true', 59 help='whether to set deterministic options for CUDNN backend.') 60 parser.add_argument( 61 '--cfg-options', 62 nargs='+', 63 action=DictAction, 64 default={}, 65 help='override some settings in the used config, the key-value pair ' 66 'in xxx=yyy format will be merged into config file. For example, ' 67 "'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'") 68 parser.add_argument( 69 '--launcher', 70 choices=['none', 'pytorch', 'slurm', 'mpi'], 71 default='none', 72 help='job launcher') 73 parser.add_argument('--local-rank', type=int, default=0) 74 parser.add_argument( 75 '--local_rank', type=int, default=0, help='An alias to --local-rank') 76 parser.add_argument( 77 '--autoscale-lr', 78 action='store_true', 79 help='automatically scale lr with the number of gpus') 80 args = parser.parse_args() 81 if 'LOCAL_RANK' not in os.environ: 82 os.environ['LOCAL_RANK'] = str(args.local_rank) 83 84 return args 85 86 87 def main(): 88 args = parse_args() 89 90 cfg = Config.fromfile(args.config) 91 92 if args.cfg_options is not None: 93 cfg.merge_from_dict(args.cfg_options) 94 95 # set multi-process settings 96 setup_multi_processes(cfg) 97 98 # set cudnn_benchmark 99 if cfg.get('cudnn_benchmark', False): 100 torch.backends.cudnn.benchmark = True 101 102 # work_dir is determined in this priority: CLI > segment in file > filename 103 if args.work_dir is not None: 104 # update configs according to CLI args if args.work_dir is not None 105 cfg.work_dir = args.work_dir 106 elif cfg.get('work_dir', None) is None: 107 # use config filename as default work_dir if cfg.work_dir is None 108 cfg.work_dir = osp.join('./work_dirs', 109 osp.splitext(osp.basename(args.config))[0]) 110 if args.resume_from is not None: 111 cfg.resume_from = args.resume_from 112 if args.gpus is not None: 113 cfg.gpu_ids = range(1) 114 warnings.warn('`--gpus` is deprecated because we only support ' 115 'single GPU mode in non-distributed training. ' 116 'Use `gpus=1` now.') 117 if args.gpu_ids is not None: 118 cfg.gpu_ids = args.gpu_ids[0:1] 119 warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. ' 120 'Because we only support single GPU mode in ' 121 'non-distributed training. Use the first GPU ' 122 'in `gpu_ids` now.') 123 if args.gpus is None and args.gpu_ids is None: 124 cfg.gpu_ids = [args.gpu_id] 125 126 if args.autoscale_lr: 127 # apply the linear scaling rule (https://arxiv.org/abs/1706.02677) 128 cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8 129 130 # init distributed env first, since logger depends on the dist info. 131 if args.launcher == 'none': 132 distributed = False 133 if len(cfg.gpu_ids) > 1: 134 warnings.warn( 135 f'We treat {cfg.gpu_ids} as gpu-ids, and reset to ' 136 f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in ' 137 'non-distribute training time.') 138 cfg.gpu_ids = cfg.gpu_ids[0:1] 139 else: 140 distributed = True 141 init_dist(args.launcher, **cfg.dist_params) 142 # re-set gpu_ids with distributed training mode 143 _, world_size = get_dist_info() 144 cfg.gpu_ids = range(world_size) 145 146 # create work_dir 147 mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir)) 148 # init the logger before other steps 149 timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) 150 log_file = osp.join(cfg.work_dir, f'{timestamp}.log') 151 logger = get_root_logger(log_file=log_file, log_level=cfg.log_level) 152 153 # init the meta dict to record some important information such as 154 # environment info and seed, which will be logged 155 meta = dict() 156 # log env info 157 env_info_dict = collect_env() 158 env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()]) 159 dash_line = '-' * 60 + '\n' 160 logger.info('Environment info:\n' + dash_line + env_info + '\n' + 161 dash_line) 162 meta['env_info'] = env_info 163 164 # log some basic info 165 logger.info(f'Distributed training: {distributed}') 166 logger.info(f'Config:\n{cfg.pretty_text}') 167 168 # set random seeds 169 seed = init_random_seed(args.seed) 170 seed = seed + dist.get_rank() if args.diff_seed else seed 171 logger.info(f'Set random seed to {seed}, ' 172 f'deterministic: {args.deterministic}') 173 set_random_seed(seed, deterministic=args.deterministic) 174 cfg.seed = seed 175 meta['seed'] = seed 176 177 model = build_posenet(cfg.model) 178 datasets = [build_dataset(cfg.data.train)] 179 180 if len(cfg.workflow) == 2: 181 val_dataset = copy.deepcopy(cfg.data.val) 182 val_dataset.pipeline = cfg.data.train.pipeline 183 datasets.append(build_dataset(val_dataset)) 184 185 if cfg.checkpoint_config is not None: 186 # save mmpose version, config file content 187 # checkpoints as meta data 188 cfg.checkpoint_config.meta = dict( 189 mmpose_version=__version__ + get_git_hash(digits=7), 190 config=cfg.pretty_text, 191 ) 192 train_model( 193 model, 194 datasets, 195 cfg, 196 distributed=distributed, 197 validate=(not args.no_validate), 198 timestamp=timestamp, 199 meta=meta) 200 201 202 if __name__ == '__main__': 203 main() 204 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/tools/train.py b/tools/train.py --- a/tools/train.py +++ b/tools/train.py @@ -70,9 +70,7 @@ choices=['none', 'pytorch', 'slurm', 'mpi'], default='none', help='job launcher') - parser.add_argument('--local-rank', type=int, default=0) - parser.add_argument( - '--local_rank', type=int, default=0, help='An alias to --local-rank') + parser.add_argument('--local_rank', type=int, default=0) parser.add_argument( '--autoscale-lr', action='store_true',
{"golden_diff": "diff --git a/tools/train.py b/tools/train.py\n--- a/tools/train.py\n+++ b/tools/train.py\n@@ -70,9 +70,7 @@\n choices=['none', 'pytorch', 'slurm', 'mpi'],\n default='none',\n help='job launcher')\n-    parser.add_argument('--local-rank', type=int, default=0)\n-    parser.add_argument(\n-        '--local_rank', type=int, default=0, help='An alias to --local-rank')\n+    parser.add_argument('--local_rank', type=int, default=0)\n     parser.add_argument(\n         '--autoscale-lr',\n         action='store_true',\n", "issue": "\u6846\u67b6\u6709\u5f88\u591a\u6ca1\u6709\u901a\u8fc7\u6d4b\u8bd5\u7684Bug\n\u5728\u6211\u8fd0\u884cres50_freihand2d_224x224.py\u7684\u8fc7\u7a0b\u4e2d\u5c31\u78b0\u5230\u4e86\u4e24\u4e2a\u95ee\u9898\r\n\r\n1. tools/dist_train.sh\u91cc\u9762\u6ca1\u6709\u5b9a\u4e49$MASTER_ADDR\r\n\r\n2. res50_freihand2d_224x224.py \u6ca1\u6709\u5b9a\u4e49runner = dict(type='EpochBasedRunner', max_epochs=total_epochs)\r\n\r\n\u5176\u4ed6\u7684\u8fd8\u6ca1\u6709\u8dd1\uff0c\u4f46\u611f\u89c9\u4f1a\u6709\u5f88\u591a\u5c0f\u95ee\u9898\r\n\r\n\u53d1\u5e03\u7684\u7248\u672c\u6709\u70b9\u7c97\u7cd9\u5440\uff0c\u671f\u5f85\u628a\u8fd9\u4e9b\u5c0fBug\u5c3d\u5feb\u89e3\u51b3\uff0c\u540e\u7eed\u5f88\u671f\u5f85\u5728\u8fd9\u4e2a\u6846\u67b6\u4e0a\u505a\u66f4\u591a\u5b9e\u9a8c\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport argparse\nimport copy\nimport os\nimport os.path as osp\nimport time\nimport warnings\n\nimport mmcv\nimport torch\nimport torch.distributed as dist\nfrom mmcv import Config, DictAction\nfrom mmcv.runner import get_dist_info, init_dist, set_random_seed\nfrom mmcv.utils import get_git_hash\n\nfrom mmpose import __version__\nfrom mmpose.apis import init_random_seed, train_model\nfrom mmpose.datasets import build_dataset\nfrom mmpose.models import build_posenet\nfrom mmpose.utils import collect_env, get_root_logger, setup_multi_processes\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='Train a pose model')\n    parser.add_argument('config', help='train config file path')\n    parser.add_argument('--work-dir', help='the dir to save logs and models')\n    parser.add_argument(\n        '--resume-from', help='the checkpoint file to resume from')\n    parser.add_argument(\n        '--no-validate',\n        action='store_true',\n        help='whether not to evaluate the checkpoint during training')\n    group_gpus = parser.add_mutually_exclusive_group()\n    group_gpus.add_argument(\n        '--gpus',\n        type=int,\n        help='(Deprecated, please use --gpu-id) number of gpus to use '\n        '(only applicable to non-distributed training)')\n    group_gpus.add_argument(\n        '--gpu-ids',\n        type=int,\n        nargs='+',\n        help='(Deprecated, please use --gpu-id) ids of gpus to use '\n        '(only applicable to non-distributed training)')\n    group_gpus.add_argument(\n        '--gpu-id',\n        type=int,\n        default=0,\n        help='id of gpu to use '\n        '(only applicable to non-distributed training)')\n    parser.add_argument('--seed', type=int, default=None, help='random seed')\n    parser.add_argument(\n        '--diff_seed',\n        action='store_true',\n        help='Whether or not set different seeds for different ranks')\n    parser.add_argument(\n        '--deterministic',\n        action='store_true',\n        help='whether to set deterministic options for CUDNN backend.')\n    parser.add_argument(\n        '--cfg-options',\n        nargs='+',\n        action=DictAction,\n        default={},\n        help='override some settings in the used config, the key-value pair '\n        'in xxx=yyy format will be merged into config file. For example, '\n        \"'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'\")\n    parser.add_argument(\n        '--launcher',\n        choices=['none', 'pytorch', 'slurm', 'mpi'],\n        default='none',\n        help='job launcher')\n    parser.add_argument('--local-rank', type=int, default=0)\n    parser.add_argument(\n        '--local_rank', type=int, default=0, help='An alias to --local-rank')\n    parser.add_argument(\n        '--autoscale-lr',\n        action='store_true',\n        help='automatically scale lr with the number of gpus')\n    args = parser.parse_args()\n    if 'LOCAL_RANK' not in os.environ:\n        os.environ['LOCAL_RANK'] = str(args.local_rank)\n\n    return args\n\n\ndef main():\n    args = parse_args()\n\n    cfg = Config.fromfile(args.config)\n\n    if args.cfg_options is not None:\n        cfg.merge_from_dict(args.cfg_options)\n\n    # set multi-process settings\n    setup_multi_processes(cfg)\n\n    # set cudnn_benchmark\n    if cfg.get('cudnn_benchmark', False):\n        torch.backends.cudnn.benchmark = True\n\n    # work_dir is determined in this priority: CLI > segment in file > filename\n    if args.work_dir is not None:\n        # update configs according to CLI args if args.work_dir is not None\n        cfg.work_dir = args.work_dir\n    elif cfg.get('work_dir', None) is None:\n        # use config filename as default work_dir if cfg.work_dir is None\n        cfg.work_dir = osp.join('./work_dirs',\n                                osp.splitext(osp.basename(args.config))[0])\n    if args.resume_from is not None:\n        cfg.resume_from = args.resume_from\n    if args.gpus is not None:\n        cfg.gpu_ids = range(1)\n        warnings.warn('`--gpus` is deprecated because we only support '\n                      'single GPU mode in non-distributed training. '\n                      'Use `gpus=1` now.')\n    if args.gpu_ids is not None:\n        cfg.gpu_ids = args.gpu_ids[0:1]\n        warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. '\n                      'Because we only support single GPU mode in '\n                      'non-distributed training. Use the first GPU '\n                      'in `gpu_ids` now.')\n    if args.gpus is None and args.gpu_ids is None:\n        cfg.gpu_ids = [args.gpu_id]\n\n    if args.autoscale_lr:\n        # apply the linear scaling rule (https://arxiv.org/abs/1706.02677)\n        cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8\n\n    # init distributed env first, since logger depends on the dist info.\n    if args.launcher == 'none':\n        distributed = False\n        if len(cfg.gpu_ids) > 1:\n            warnings.warn(\n                f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '\n                f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '\n                'non-distribute training time.')\n            cfg.gpu_ids = cfg.gpu_ids[0:1]\n    else:\n        distributed = True\n        init_dist(args.launcher, **cfg.dist_params)\n        # re-set gpu_ids with distributed training mode\n        _, world_size = get_dist_info()\n        cfg.gpu_ids = range(world_size)\n\n    # create work_dir\n    mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))\n    # init the logger before other steps\n    timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())\n    log_file = osp.join(cfg.work_dir, f'{timestamp}.log')\n    logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)\n\n    # init the meta dict to record some important information such as\n    # environment info and seed, which will be logged\n    meta = dict()\n    # log env info\n    env_info_dict = collect_env()\n    env_info = '\\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])\n    dash_line = '-' * 60 + '\\n'\n    logger.info('Environment info:\\n' + dash_line + env_info + '\\n' +\n                dash_line)\n    meta['env_info'] = env_info\n\n    # log some basic info\n    logger.info(f'Distributed training: {distributed}')\n    logger.info(f'Config:\\n{cfg.pretty_text}')\n\n    # set random seeds\n    seed = init_random_seed(args.seed)\n    seed = seed + dist.get_rank() if args.diff_seed else seed\n    logger.info(f'Set random seed to {seed}, '\n                f'deterministic: {args.deterministic}')\n    set_random_seed(seed, deterministic=args.deterministic)\n    cfg.seed = seed\n    meta['seed'] = seed\n\n    model = build_posenet(cfg.model)\n    datasets = [build_dataset(cfg.data.train)]\n\n    if len(cfg.workflow) == 2:\n        val_dataset = copy.deepcopy(cfg.data.val)\n        val_dataset.pipeline = cfg.data.train.pipeline\n        datasets.append(build_dataset(val_dataset))\n\n    if cfg.checkpoint_config is not None:\n        # save mmpose version, config file content\n        # checkpoints as meta data\n        cfg.checkpoint_config.meta = dict(\n            mmpose_version=__version__ + get_git_hash(digits=7),\n            config=cfg.pretty_text,\n        )\n    train_model(\n        model,\n        datasets,\n        cfg,\n        distributed=distributed,\n        validate=(not args.no_validate),\n        timestamp=timestamp,\n        meta=meta)\n\n\nif __name__ == '__main__':\n    main()\n", "path": "tools/train.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport argparse\nimport copy\nimport os\nimport os.path as osp\nimport time\nimport warnings\n\nimport mmcv\nimport torch\nimport torch.distributed as dist\nfrom mmcv import Config, DictAction\nfrom mmcv.runner import get_dist_info, init_dist, set_random_seed\nfrom mmcv.utils import get_git_hash\n\nfrom mmpose import __version__\nfrom mmpose.apis import init_random_seed, train_model\nfrom mmpose.datasets import build_dataset\nfrom mmpose.models import build_posenet\nfrom mmpose.utils import collect_env, get_root_logger, setup_multi_processes\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='Train a pose model')\n    parser.add_argument('config', help='train config file path')\n    parser.add_argument('--work-dir', help='the dir to save logs and models')\n    parser.add_argument(\n        '--resume-from', help='the checkpoint file to resume from')\n    parser.add_argument(\n        '--no-validate',\n        action='store_true',\n        help='whether not to evaluate the checkpoint during training')\n    group_gpus = parser.add_mutually_exclusive_group()\n    group_gpus.add_argument(\n        '--gpus',\n        type=int,\n        help='(Deprecated, please use --gpu-id) number of gpus to use '\n        '(only applicable to non-distributed training)')\n    group_gpus.add_argument(\n        '--gpu-ids',\n        type=int,\n        nargs='+',\n        help='(Deprecated, please use --gpu-id) ids of gpus to use '\n        '(only applicable to non-distributed training)')\n    group_gpus.add_argument(\n        '--gpu-id',\n        type=int,\n        default=0,\n        help='id of gpu to use '\n        '(only applicable to non-distributed training)')\n    parser.add_argument('--seed', type=int, default=None, help='random seed')\n    parser.add_argument(\n        '--diff_seed',\n        action='store_true',\n        help='Whether or not set different seeds for different ranks')\n    parser.add_argument(\n        '--deterministic',\n        action='store_true',\n        help='whether to set deterministic options for CUDNN backend.')\n    parser.add_argument(\n        '--cfg-options',\n        nargs='+',\n        action=DictAction,\n        default={},\n        help='override some settings in the used config, the key-value pair '\n        'in xxx=yyy format will be merged into config file. For example, '\n        \"'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'\")\n    parser.add_argument(\n        '--launcher',\n        choices=['none', 'pytorch', 'slurm', 'mpi'],\n        default='none',\n        help='job launcher')\n    parser.add_argument('--local_rank', type=int, default=0)\n    parser.add_argument(\n        '--autoscale-lr',\n        action='store_true',\n        help='automatically scale lr with the number of gpus')\n    args = parser.parse_args()\n    if 'LOCAL_RANK' not in os.environ:\n        os.environ['LOCAL_RANK'] = str(args.local_rank)\n\n    return args\n\n\ndef main():\n    args = parse_args()\n\n    cfg = Config.fromfile(args.config)\n\n    if args.cfg_options is not None:\n        cfg.merge_from_dict(args.cfg_options)\n\n    # set multi-process settings\n    setup_multi_processes(cfg)\n\n    # set cudnn_benchmark\n    if cfg.get('cudnn_benchmark', False):\n        torch.backends.cudnn.benchmark = True\n\n    # work_dir is determined in this priority: CLI > segment in file > filename\n    if args.work_dir is not None:\n        # update configs according to CLI args if args.work_dir is not None\n        cfg.work_dir = args.work_dir\n    elif cfg.get('work_dir', None) is None:\n        # use config filename as default work_dir if cfg.work_dir is None\n        cfg.work_dir = osp.join('./work_dirs',\n                                osp.splitext(osp.basename(args.config))[0])\n    if args.resume_from is not None:\n        cfg.resume_from = args.resume_from\n    if args.gpus is not None:\n        cfg.gpu_ids = range(1)\n        warnings.warn('`--gpus` is deprecated because we only support '\n                      'single GPU mode in non-distributed training. '\n                      'Use `gpus=1` now.')\n    if args.gpu_ids is not None:\n        cfg.gpu_ids = args.gpu_ids[0:1]\n        warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. '\n                      'Because we only support single GPU mode in '\n                      'non-distributed training. Use the first GPU '\n                      'in `gpu_ids` now.')\n    if args.gpus is None and args.gpu_ids is None:\n        cfg.gpu_ids = [args.gpu_id]\n\n    if args.autoscale_lr:\n        # apply the linear scaling rule (https://arxiv.org/abs/1706.02677)\n        cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8\n\n    # init distributed env first, since logger depends on the dist info.\n    if args.launcher == 'none':\n        distributed = False\n        if len(cfg.gpu_ids) > 1:\n            warnings.warn(\n                f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '\n                f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '\n                'non-distribute training time.')\n            cfg.gpu_ids = cfg.gpu_ids[0:1]\n    else:\n        distributed = True\n        init_dist(args.launcher, **cfg.dist_params)\n        # re-set gpu_ids with distributed training mode\n        _, world_size = get_dist_info()\n        cfg.gpu_ids = range(world_size)\n\n    # create work_dir\n    mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))\n    # init the logger before other steps\n    timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())\n    log_file = osp.join(cfg.work_dir, f'{timestamp}.log')\n    logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)\n\n    # init the meta dict to record some important information such as\n    # environment info and seed, which will be logged\n    meta = dict()\n    # log env info\n    env_info_dict = collect_env()\n    env_info = '\\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])\n    dash_line = '-' * 60 + '\\n'\n    logger.info('Environment info:\\n' + dash_line + env_info + '\\n' +\n                dash_line)\n    meta['env_info'] = env_info\n\n    # log some basic info\n    logger.info(f'Distributed training: {distributed}')\n    logger.info(f'Config:\\n{cfg.pretty_text}')\n\n    # set random seeds\n    seed = init_random_seed(args.seed)\n    seed = seed + dist.get_rank() if args.diff_seed else seed\n    logger.info(f'Set random seed to {seed}, '\n                f'deterministic: {args.deterministic}')\n    set_random_seed(seed, deterministic=args.deterministic)\n    cfg.seed = seed\n    meta['seed'] = seed\n\n    model = build_posenet(cfg.model)\n    datasets = [build_dataset(cfg.data.train)]\n\n    if len(cfg.workflow) == 2:\n        val_dataset = copy.deepcopy(cfg.data.val)\n        val_dataset.pipeline = cfg.data.train.pipeline\n        datasets.append(build_dataset(val_dataset))\n\n    if cfg.checkpoint_config is not None:\n        # save mmpose version, config file content\n        # checkpoints as meta data\n        cfg.checkpoint_config.meta = dict(\n            mmpose_version=__version__ + get_git_hash(digits=7),\n            config=cfg.pretty_text,\n        )\n    train_model(\n        model,\n        datasets,\n        cfg,\n        distributed=distributed,\n        validate=(not args.no_validate),\n        timestamp=timestamp,\n        meta=meta)\n\n\nif __name__ == '__main__':\n    main()\n", "path": "tools/train.py"}]}
2,635
143
gh_patches_debug_13920
rasdani/github-patches
git_diff
searx__searx-1135
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Yacy results crash Getting: Engines cannot retrieve results: yacy (unexpected crash) > ERROR:searx.search:engine yacy : exception : 'url' > Traceback (most recent call last): > File "/home/leo/searx/searx/search.py", line 118, in search_one_request_safe > search_results = search_one_request(engine, query, request_params, start_time, timeout_limit) > File "/home/leo/searx/searx/search.py", line 110, in search_one_request > return engine.response(response) > File "/home/leo/searx/searx/engines/yacy.py", line 80, in response > results.append({'url': result['url'], > KeyError: 'url' --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `searx/engines/yacy.py` Content: ``` 1 # Yacy (Web, Images, Videos, Music, Files) 2 # 3 # @website http://yacy.net 4 # @provide-api yes 5 # (http://www.yacy-websuche.de/wiki/index.php/Dev:APIyacysearch) 6 # 7 # @using-api yes 8 # @results JSON 9 # @stable yes 10 # @parse (general) url, title, content, publishedDate 11 # @parse (images) url, title, img_src 12 # 13 # @todo parse video, audio and file results 14 15 from json import loads 16 from dateutil import parser 17 from searx.url_utils import urlencode 18 19 from searx.utils import html_to_text 20 21 # engine dependent config 22 categories = ['general', 'images'] # TODO , 'music', 'videos', 'files' 23 paging = True 24 language_support = True 25 number_of_results = 5 26 27 # search-url 28 base_url = 'http://localhost:8090' 29 search_url = '/yacysearch.json?{query}'\ 30 '&startRecord={offset}'\ 31 '&maximumRecords={limit}'\ 32 '&contentdom={search_type}'\ 33 '&resource=global' 34 35 # yacy specific type-definitions 36 search_types = {'general': 'text', 37 'images': 'image', 38 'files': 'app', 39 'music': 'audio', 40 'videos': 'video'} 41 42 43 # do search-request 44 def request(query, params): 45 offset = (params['pageno'] - 1) * number_of_results 46 search_type = search_types.get(params.get('category'), '0') 47 48 params['url'] = base_url +\ 49 search_url.format(query=urlencode({'query': query}), 50 offset=offset, 51 limit=number_of_results, 52 search_type=search_type) 53 54 params['url'] += '&lr=lang_' + params['language'].split('-')[0] 55 56 return params 57 58 59 # get response from search-request 60 def response(resp): 61 results = [] 62 63 raw_search_results = loads(resp.text) 64 65 # return empty array if there are no results 66 if not raw_search_results: 67 return [] 68 69 search_results = raw_search_results.get('channels', []) 70 71 if len(search_results) == 0: 72 return [] 73 74 for result in search_results[0].get('items', []): 75 # parse image results 76 if result.get('image'): 77 # append result 78 results.append({'url': result['url'], 79 'title': result['title'], 80 'content': '', 81 'img_src': result['image'], 82 'template': 'images.html'}) 83 84 # parse general results 85 else: 86 publishedDate = parser.parse(result['pubDate']) 87 88 # append result 89 results.append({'url': result['link'], 90 'title': result['title'], 91 'content': html_to_text(result['description']), 92 'publishedDate': publishedDate}) 93 94 # TODO parse video, audio and file results 95 96 # return results 97 return results 98 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/searx/engines/yacy.py b/searx/engines/yacy.py --- a/searx/engines/yacy.py +++ b/searx/engines/yacy.py @@ -74,8 +74,17 @@ for result in search_results[0].get('items', []): # parse image results if result.get('image'): + + result_url = '' + if 'url' in result: + result_url = result['url'] + elif 'link' in result: + result_url = result['link'] + else: + continue + # append result - results.append({'url': result['url'], + results.append({'url': result_url, 'title': result['title'], 'content': '', 'img_src': result['image'],
{"golden_diff": "diff --git a/searx/engines/yacy.py b/searx/engines/yacy.py\n--- a/searx/engines/yacy.py\n+++ b/searx/engines/yacy.py\n@@ -74,8 +74,17 @@\n     for result in search_results[0].get('items', []):\n         # parse image results\n         if result.get('image'):\n+\n+            result_url = ''\n+            if 'url' in result:\n+                result_url = result['url']\n+            elif 'link' in result:\n+                result_url = result['link']\n+            else:\n+                continue\n+\n             # append result\n-            results.append({'url': result['url'],\n+            results.append({'url': result_url,\n                             'title': result['title'],\n                             'content': '',\n                             'img_src': result['image'],\n", "issue": "Yacy results crash\nGetting:\nEngines cannot retrieve results:\nyacy (unexpected crash)\n\n> ERROR:searx.search:engine yacy : exception : 'url'\n> Traceback (most recent call last):\n>   File \"/home/leo/searx/searx/search.py\", line 118, in search_one_request_safe\n>     search_results = search_one_request(engine, query, request_params, start_time, timeout_limit)\n>   File \"/home/leo/searx/searx/search.py\", line 110, in search_one_request\n>     return engine.response(response)\n>   File \"/home/leo/searx/searx/engines/yacy.py\", line 80, in response\n>     results.append({'url': result['url'],\n> KeyError: 'url'\n", "before_files": [{"content": "# Yacy (Web, Images, Videos, Music, Files)\n#\n# @website      http://yacy.net\n# @provide-api  yes\n#               (http://www.yacy-websuche.de/wiki/index.php/Dev:APIyacysearch)\n#\n# @using-api    yes\n# @results      JSON\n# @stable       yes\n# @parse        (general)    url, title, content, publishedDate\n# @parse        (images)     url, title, img_src\n#\n# @todo         parse video, audio and file results\n\nfrom json import loads\nfrom dateutil import parser\nfrom searx.url_utils import urlencode\n\nfrom searx.utils import html_to_text\n\n# engine dependent config\ncategories = ['general', 'images']  # TODO , 'music', 'videos', 'files'\npaging = True\nlanguage_support = True\nnumber_of_results = 5\n\n# search-url\nbase_url = 'http://localhost:8090'\nsearch_url = '/yacysearch.json?{query}'\\\n             '&startRecord={offset}'\\\n             '&maximumRecords={limit}'\\\n             '&contentdom={search_type}'\\\n             '&resource=global'\n\n# yacy specific type-definitions\nsearch_types = {'general': 'text',\n                'images': 'image',\n                'files': 'app',\n                'music': 'audio',\n                'videos': 'video'}\n\n\n# do search-request\ndef request(query, params):\n    offset = (params['pageno'] - 1) * number_of_results\n    search_type = search_types.get(params.get('category'), '0')\n\n    params['url'] = base_url +\\\n        search_url.format(query=urlencode({'query': query}),\n                          offset=offset,\n                          limit=number_of_results,\n                          search_type=search_type)\n\n    params['url'] += '&lr=lang_' + params['language'].split('-')[0]\n\n    return params\n\n\n# get response from search-request\ndef response(resp):\n    results = []\n\n    raw_search_results = loads(resp.text)\n\n    # return empty array if there are no results\n    if not raw_search_results:\n        return []\n\n    search_results = raw_search_results.get('channels', [])\n\n    if len(search_results) == 0:\n        return []\n\n    for result in search_results[0].get('items', []):\n        # parse image results\n        if result.get('image'):\n            # append result\n            results.append({'url': result['url'],\n                            'title': result['title'],\n                            'content': '',\n                            'img_src': result['image'],\n                            'template': 'images.html'})\n\n        # parse general results\n        else:\n            publishedDate = parser.parse(result['pubDate'])\n\n            # append result\n            results.append({'url': result['link'],\n                            'title': result['title'],\n                            'content': html_to_text(result['description']),\n                            'publishedDate': publishedDate})\n\n    # TODO parse video, audio and file results\n\n    # return results\n    return results\n", "path": "searx/engines/yacy.py"}], "after_files": [{"content": "# Yacy (Web, Images, Videos, Music, Files)\n#\n# @website      http://yacy.net\n# @provide-api  yes\n#               (http://www.yacy-websuche.de/wiki/index.php/Dev:APIyacysearch)\n#\n# @using-api    yes\n# @results      JSON\n# @stable       yes\n# @parse        (general)    url, title, content, publishedDate\n# @parse        (images)     url, title, img_src\n#\n# @todo         parse video, audio and file results\n\nfrom json import loads\nfrom dateutil import parser\nfrom searx.url_utils import urlencode\n\nfrom searx.utils import html_to_text\n\n# engine dependent config\ncategories = ['general', 'images']  # TODO , 'music', 'videos', 'files'\npaging = True\nlanguage_support = True\nnumber_of_results = 5\n\n# search-url\nbase_url = 'http://localhost:8090'\nsearch_url = '/yacysearch.json?{query}'\\\n             '&startRecord={offset}'\\\n             '&maximumRecords={limit}'\\\n             '&contentdom={search_type}'\\\n             '&resource=global'\n\n# yacy specific type-definitions\nsearch_types = {'general': 'text',\n                'images': 'image',\n                'files': 'app',\n                'music': 'audio',\n                'videos': 'video'}\n\n\n# do search-request\ndef request(query, params):\n    offset = (params['pageno'] - 1) * number_of_results\n    search_type = search_types.get(params.get('category'), '0')\n\n    params['url'] = base_url +\\\n        search_url.format(query=urlencode({'query': query}),\n                          offset=offset,\n                          limit=number_of_results,\n                          search_type=search_type)\n\n    params['url'] += '&lr=lang_' + params['language'].split('-')[0]\n\n    return params\n\n\n# get response from search-request\ndef response(resp):\n    results = []\n\n    raw_search_results = loads(resp.text)\n\n    # return empty array if there are no results\n    if not raw_search_results:\n        return []\n\n    search_results = raw_search_results.get('channels', [])\n\n    if len(search_results) == 0:\n        return []\n\n    for result in search_results[0].get('items', []):\n        # parse image results\n        if result.get('image'):\n\n            result_url = ''\n            if 'url' in result:\n                result_url = result['url']\n            elif 'link' in result:\n                result_url = result['link']\n            else:\n                continue\n\n            # append result\n            results.append({'url': result_url,\n                            'title': result['title'],\n                            'content': '',\n                            'img_src': result['image'],\n                            'template': 'images.html'})\n\n        # parse general results\n        else:\n            publishedDate = parser.parse(result['pubDate'])\n\n            # append result\n            results.append({'url': result['link'],\n                            'title': result['title'],\n                            'content': html_to_text(result['description']),\n                            'publishedDate': publishedDate})\n\n    # TODO parse video, audio and file results\n\n    # return results\n    return results\n", "path": "searx/engines/yacy.py"}]}
1,275
189
gh_patches_debug_64039
rasdani/github-patches
git_diff
Rapptz__discord.py-1057
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Cannot connect to voice channels Running `await voice_channel.connect()` raises `AttributeError: 'LP_EncoderStruct' object has no attribute 'value'` Relevant portions of the traceback ([and the line itself](https://github.com/Rapptz/discord.py/blob/rewrite/discord/opus.py#L52)): ``` File ".../lib/python3.6/site-packages/discord/abc.py", line 985, in connect voice = VoiceClient(state=state, timeout=timeout, channel=self) File ".../lib/python3.6/site-packages/discord/voice_client.py", line 109, in __init__ self.encoder = opus.Encoder() File ".../lib/python3.6/site-packages/discord/opus.py", line 225, in __init__ self._state = self._create_state() File ".../lib/python3.6/site-packages/discord/opus.py", line 239, in _create_state return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret)) File ".../lib/python3.6/site-packages/discord/opus.py", line 52, in _err_ne if result.value != 0: AttributeError: 'LP_EncoderStruct' object has no attribute 'value' ``` I have opus 1.2.1 installed on 64-bit Linux, and it is loaded according to `discord.opus.is_loaded()`. Any clue as to what might be the issue? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `discord/opus.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 """ 4 The MIT License (MIT) 5 6 Copyright (c) 2015-2017 Rapptz 7 8 Permission is hereby granted, free of charge, to any person obtaining a 9 copy of this software and associated documentation files (the "Software"), 10 to deal in the Software without restriction, including without limitation 11 the rights to use, copy, modify, merge, publish, distribute, sublicense, 12 and/or sell copies of the Software, and to permit persons to whom the 13 Software is furnished to do so, subject to the following conditions: 14 15 The above copyright notice and this permission notice shall be included in 16 all copies or substantial portions of the Software. 17 18 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS 19 OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 20 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 21 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 22 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 23 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 24 DEALINGS IN THE SOFTWARE. 25 """ 26 27 import ctypes 28 import ctypes.util 29 import array 30 from .errors import DiscordException 31 import logging 32 import sys 33 import os.path 34 35 log = logging.getLogger(__name__) 36 c_int_ptr = ctypes.POINTER(ctypes.c_int) 37 c_int16_ptr = ctypes.POINTER(ctypes.c_int16) 38 c_float_ptr = ctypes.POINTER(ctypes.c_float) 39 40 class EncoderStruct(ctypes.Structure): 41 pass 42 43 EncoderStructPtr = ctypes.POINTER(EncoderStruct) 44 45 def _err_lt(result, func, args): 46 if result < 0: 47 log.info('error has happened in {0.__name__}'.format(func)) 48 raise OpusError(result) 49 return result 50 51 def _err_ne(result, func, args): 52 if result.value != 0: 53 log.info('error has happened in {0.__name__}'.format(func)) 54 raise OpusError(result.value) 55 return result 56 57 # A list of exported functions. 58 # The first argument is obviously the name. 59 # The second one are the types of arguments it takes. 60 # The third is the result type. 61 # The fourth is the error handler. 62 exported_functions = [ 63 ('opus_strerror', 64 [ctypes.c_int], ctypes.c_char_p, None), 65 ('opus_encoder_get_size', 66 [ctypes.c_int], ctypes.c_int, None), 67 ('opus_encoder_create', 68 [ctypes.c_int, ctypes.c_int, ctypes.c_int, c_int_ptr], EncoderStructPtr, _err_ne), 69 ('opus_encode', 70 [EncoderStructPtr, c_int16_ptr, ctypes.c_int, ctypes.c_char_p, ctypes.c_int32], ctypes.c_int32, _err_lt), 71 ('opus_encoder_ctl', 72 None, ctypes.c_int32, _err_lt), 73 ('opus_encoder_destroy', 74 [EncoderStructPtr], None, None), 75 ] 76 77 def libopus_loader(name): 78 # create the library... 79 lib = ctypes.cdll.LoadLibrary(name) 80 81 # register the functions... 82 for item in exported_functions: 83 try: 84 func = getattr(lib, item[0]) 85 except Exception as e: 86 raise e 87 88 try: 89 if item[1]: 90 func.argtypes = item[1] 91 92 func.restype = item[2] 93 except KeyError: 94 pass 95 96 try: 97 if item[3]: 98 func.errcheck = item[3] 99 except KeyError: 100 log.exception("Error assigning check function to %s", func) 101 102 return lib 103 104 try: 105 if sys.platform == 'win32': 106 _basedir = os.path.dirname(os.path.abspath(__file__)) 107 _bitness = 'x64' if sys.maxsize > 2**32 else 'x86' 108 _filename = os.path.join(_basedir, 'bin', 'libopus-0.{}.dll'.format(_bitness)) 109 _lib = libopus_loader(_filename) 110 else: 111 _lib = libopus_loader(ctypes.util.find_library('opus')) 112 except Exception as e: 113 _lib = None 114 115 def load_opus(name): 116 """Loads the libopus shared library for use with voice. 117 118 If this function is not called then the library uses the function 119 `ctypes.util.find_library`__ and then loads that one 120 if available. 121 122 .. _find library: https://docs.python.org/3.5/library/ctypes.html#finding-shared-libraries 123 __ `find library`_ 124 125 Not loading a library leads to voice not working. 126 127 This function propagates the exceptions thrown. 128 129 Warning 130 -------- 131 The bitness of the library must match the bitness of your python 132 interpreter. If the library is 64-bit then your python interpreter 133 must be 64-bit as well. Usually if there's a mismatch in bitness then 134 the load will throw an exception. 135 136 Note 137 ---- 138 On Windows, the .dll extension is not necessary. However, on Linux 139 the full extension is required to load the library, e.g. ``libopus.so.1``. 140 On Linux however, `find library`_ will usually find the library automatically 141 without you having to call this. 142 143 Parameters 144 ---------- 145 name: str 146 The filename of the shared library. 147 """ 148 global _lib 149 _lib = libopus_loader(name) 150 151 def is_loaded(): 152 """Function to check if opus lib is successfully loaded either 153 via the ``ctypes.util.find_library`` call of :func:`load_opus`. 154 155 This must return ``True`` for voice to work. 156 157 Returns 158 ------- 159 bool 160 Indicates if the opus library has been loaded. 161 """ 162 global _lib 163 return _lib is not None 164 165 class OpusError(DiscordException): 166 """An exception that is thrown for libopus related errors. 167 168 Attributes 169 ---------- 170 code : :class:`int` 171 The error code returned. 172 """ 173 174 def __init__(self, code): 175 self.code = code 176 msg = _lib.opus_strerror(self.code).decode('utf-8') 177 log.info('"%s" has happened', msg) 178 super().__init__(msg) 179 180 class OpusNotLoaded(DiscordException): 181 """An exception that is thrown for when libopus is not loaded.""" 182 pass 183 184 185 # Some constants... 186 OK = 0 187 APPLICATION_AUDIO = 2049 188 APPLICATION_VOIP = 2048 189 APPLICATION_LOWDELAY = 2051 190 CTL_SET_BITRATE = 4002 191 CTL_SET_BANDWIDTH = 4008 192 CTL_SET_FEC = 4012 193 CTL_SET_PLP = 4014 194 CTL_SET_SIGNAL = 4024 195 196 band_ctl = { 197 'narrow': 1101, 198 'medium': 1102, 199 'wide': 1103, 200 'superwide': 1104, 201 'full': 1105, 202 } 203 204 signal_ctl = { 205 'auto': -1000, 206 'voice': 3001, 207 'music': 3002, 208 } 209 210 class Encoder: 211 SAMPLING_RATE = 48000 212 CHANNELS = 2 213 FRAME_LENGTH = 20 214 SAMPLE_SIZE = 4 # (bit_rate / 8) * CHANNELS (bit_rate == 16) 215 SAMPLES_PER_FRAME = int(SAMPLING_RATE / 1000 * FRAME_LENGTH) 216 217 FRAME_SIZE = SAMPLES_PER_FRAME * SAMPLE_SIZE 218 219 def __init__(self, application=APPLICATION_AUDIO): 220 self.application = application 221 222 if not is_loaded(): 223 raise OpusNotLoaded() 224 225 self._state = self._create_state() 226 self.set_bitrate(128) 227 self.set_fec(True) 228 self.set_expected_packet_loss_percent(0.15) 229 self.set_bandwidth('full') 230 self.set_signal_type('auto') 231 232 def __del__(self): 233 if hasattr(self, '_state'): 234 _lib.opus_encoder_destroy(self._state) 235 self._state = None 236 237 def _create_state(self): 238 ret = ctypes.c_int() 239 return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret)) 240 241 def set_bitrate(self, kbps): 242 kbps = min(128, max(16, int(kbps))) 243 244 _lib.opus_encoder_ctl(self._state, CTL_SET_BITRATE, kbps * 1024) 245 return kbps 246 247 def set_bandwidth(self, req): 248 if req not in band_ctl: 249 raise KeyError('%r is not a valid bandwidth setting. Try one of: %s' % (req, ','.join(band_ctl))) 250 251 k = band_ctl[req] 252 _lib.opus_encoder_ctl(self._state, CTL_SET_BANDWIDTH, k) 253 254 def set_signal_type(self, req): 255 if req not in signal_ctl: 256 raise KeyError('%r is not a valid signal setting. Try one of: %s' % (req, ','.join(signal_ctl))) 257 258 k = signal_ctl[req] 259 _lib.opus_encoder_ctl(self._state, CTL_SET_SIGNAL, k) 260 261 def set_fec(self, enabled=True): 262 _lib.opus_encoder_ctl(self._state, CTL_SET_FEC, 1 if enabled else 0) 263 264 def set_expected_packet_loss_percent(self, percentage): 265 _lib.opus_encoder_ctl(self._state, CTL_SET_PLP, min(100, max(0, int(percentage * 100)))) 266 267 def encode(self, pcm, frame_size): 268 max_data_bytes = len(pcm) 269 pcm = ctypes.cast(pcm, c_int16_ptr) 270 data = (ctypes.c_char * max_data_bytes)() 271 272 ret = _lib.opus_encode(self._state, pcm, frame_size, data, max_data_bytes) 273 274 return array.array('b', data[:ret]).tobytes() 275 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/discord/opus.py b/discord/opus.py --- a/discord/opus.py +++ b/discord/opus.py @@ -49,9 +49,10 @@ return result def _err_ne(result, func, args): - if result.value != 0: + ret = args[-1]._obj + if ret.value != 0: log.info('error has happened in {0.__name__}'.format(func)) - raise OpusError(result.value) + raise OpusError(ret.value) return result # A list of exported functions.
{"golden_diff": "diff --git a/discord/opus.py b/discord/opus.py\n--- a/discord/opus.py\n+++ b/discord/opus.py\n@@ -49,9 +49,10 @@\n     return result\n \n def _err_ne(result, func, args):\n-    if result.value != 0:\n+    ret = args[-1]._obj\n+    if ret.value != 0:\n         log.info('error has happened in {0.__name__}'.format(func))\n-        raise OpusError(result.value)\n+        raise OpusError(ret.value)\n     return result\n \n # A list of exported functions.\n", "issue": "Cannot connect to voice channels\nRunning `await voice_channel.connect()` raises\r\n`AttributeError: 'LP_EncoderStruct' object has no attribute 'value'`\r\n\r\nRelevant portions of the traceback ([and the line itself](https://github.com/Rapptz/discord.py/blob/rewrite/discord/opus.py#L52)):\r\n```\r\n  File \".../lib/python3.6/site-packages/discord/abc.py\", line 985, in connect\r\n    voice = VoiceClient(state=state, timeout=timeout, channel=self)\r\n  File \".../lib/python3.6/site-packages/discord/voice_client.py\", line 109, in __init__\r\n    self.encoder = opus.Encoder()\r\n  File \".../lib/python3.6/site-packages/discord/opus.py\", line 225, in __init__\r\n    self._state = self._create_state()\r\n  File \".../lib/python3.6/site-packages/discord/opus.py\", line 239, in _create_state\r\n    return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))\r\n  File \".../lib/python3.6/site-packages/discord/opus.py\", line 52, in _err_ne\r\n    if result.value != 0:\r\nAttributeError: 'LP_EncoderStruct' object has no attribute 'value'\r\n```\r\nI have opus 1.2.1 installed on 64-bit Linux, and it is loaded according to `discord.opus.is_loaded()`.\r\n\r\nAny clue as to what might be the issue?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2017 Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nimport ctypes\nimport ctypes.util\nimport array\nfrom .errors import DiscordException\nimport logging\nimport sys\nimport os.path\n\nlog = logging.getLogger(__name__)\nc_int_ptr = ctypes.POINTER(ctypes.c_int)\nc_int16_ptr = ctypes.POINTER(ctypes.c_int16)\nc_float_ptr = ctypes.POINTER(ctypes.c_float)\n\nclass EncoderStruct(ctypes.Structure):\n    pass\n\nEncoderStructPtr = ctypes.POINTER(EncoderStruct)\n\ndef _err_lt(result, func, args):\n    if result < 0:\n        log.info('error has happened in {0.__name__}'.format(func))\n        raise OpusError(result)\n    return result\n\ndef _err_ne(result, func, args):\n    if result.value != 0:\n        log.info('error has happened in {0.__name__}'.format(func))\n        raise OpusError(result.value)\n    return result\n\n# A list of exported functions.\n# The first argument is obviously the name.\n# The second one are the types of arguments it takes.\n# The third is the result type.\n# The fourth is the error handler.\nexported_functions = [\n    ('opus_strerror',\n        [ctypes.c_int], ctypes.c_char_p, None),\n    ('opus_encoder_get_size',\n        [ctypes.c_int], ctypes.c_int, None),\n    ('opus_encoder_create',\n        [ctypes.c_int, ctypes.c_int, ctypes.c_int, c_int_ptr], EncoderStructPtr, _err_ne),\n    ('opus_encode',\n        [EncoderStructPtr, c_int16_ptr, ctypes.c_int, ctypes.c_char_p, ctypes.c_int32], ctypes.c_int32, _err_lt),\n    ('opus_encoder_ctl',\n        None, ctypes.c_int32, _err_lt),\n    ('opus_encoder_destroy',\n        [EncoderStructPtr], None, None),\n]\n\ndef libopus_loader(name):\n    # create the library...\n    lib = ctypes.cdll.LoadLibrary(name)\n\n    # register the functions...\n    for item in exported_functions:\n        try:\n            func = getattr(lib, item[0])\n        except Exception as e:\n            raise e\n\n        try:\n            if item[1]:\n                func.argtypes = item[1]\n\n            func.restype = item[2]\n        except KeyError:\n            pass\n\n        try:\n            if item[3]:\n                func.errcheck = item[3]\n        except KeyError:\n            log.exception(\"Error assigning check function to %s\", func)\n\n    return lib\n\ntry:\n    if sys.platform == 'win32':\n        _basedir = os.path.dirname(os.path.abspath(__file__))\n        _bitness = 'x64' if sys.maxsize > 2**32 else 'x86'\n        _filename = os.path.join(_basedir, 'bin', 'libopus-0.{}.dll'.format(_bitness))\n        _lib = libopus_loader(_filename)\n    else:\n        _lib = libopus_loader(ctypes.util.find_library('opus'))\nexcept Exception as e:\n    _lib = None\n\ndef load_opus(name):\n    \"\"\"Loads the libopus shared library for use with voice.\n\n    If this function is not called then the library uses the function\n    `ctypes.util.find_library`__ and then loads that one\n    if available.\n\n    .. _find library: https://docs.python.org/3.5/library/ctypes.html#finding-shared-libraries\n    __ `find library`_\n\n    Not loading a library leads to voice not working.\n\n    This function propagates the exceptions thrown.\n\n    Warning\n    --------\n    The bitness of the library must match the bitness of your python\n    interpreter. If the library is 64-bit then your python interpreter\n    must be 64-bit as well. Usually if there's a mismatch in bitness then\n    the load will throw an exception.\n\n    Note\n    ----\n    On Windows, the .dll extension is not necessary. However, on Linux\n    the full extension is required to load the library, e.g. ``libopus.so.1``.\n    On Linux however, `find library`_ will usually find the library automatically\n    without you having to call this.\n\n    Parameters\n    ----------\n    name: str\n        The filename of the shared library.\n    \"\"\"\n    global _lib\n    _lib = libopus_loader(name)\n\ndef is_loaded():\n    \"\"\"Function to check if opus lib is successfully loaded either\n    via the ``ctypes.util.find_library`` call of :func:`load_opus`.\n\n    This must return ``True`` for voice to work.\n\n    Returns\n    -------\n    bool\n        Indicates if the opus library has been loaded.\n    \"\"\"\n    global _lib\n    return _lib is not None\n\nclass OpusError(DiscordException):\n    \"\"\"An exception that is thrown for libopus related errors.\n\n    Attributes\n    ----------\n    code : :class:`int`\n        The error code returned.\n    \"\"\"\n\n    def __init__(self, code):\n        self.code = code\n        msg = _lib.opus_strerror(self.code).decode('utf-8')\n        log.info('\"%s\" has happened', msg)\n        super().__init__(msg)\n\nclass OpusNotLoaded(DiscordException):\n    \"\"\"An exception that is thrown for when libopus is not loaded.\"\"\"\n    pass\n\n\n# Some constants...\nOK = 0\nAPPLICATION_AUDIO = 2049\nAPPLICATION_VOIP = 2048\nAPPLICATION_LOWDELAY = 2051\nCTL_SET_BITRATE = 4002\nCTL_SET_BANDWIDTH = 4008\nCTL_SET_FEC = 4012\nCTL_SET_PLP = 4014\nCTL_SET_SIGNAL = 4024\n\nband_ctl = {\n    'narrow': 1101,\n    'medium': 1102,\n    'wide': 1103,\n    'superwide': 1104,\n    'full': 1105,\n}\n\nsignal_ctl = {\n    'auto': -1000,\n    'voice': 3001,\n    'music': 3002,\n}\n\nclass Encoder:\n    SAMPLING_RATE = 48000\n    CHANNELS = 2\n    FRAME_LENGTH = 20\n    SAMPLE_SIZE = 4 # (bit_rate / 8) * CHANNELS (bit_rate == 16)\n    SAMPLES_PER_FRAME = int(SAMPLING_RATE / 1000 * FRAME_LENGTH)\n\n    FRAME_SIZE = SAMPLES_PER_FRAME * SAMPLE_SIZE\n\n    def __init__(self, application=APPLICATION_AUDIO):\n        self.application = application\n\n        if not is_loaded():\n            raise OpusNotLoaded()\n\n        self._state = self._create_state()\n        self.set_bitrate(128)\n        self.set_fec(True)\n        self.set_expected_packet_loss_percent(0.15)\n        self.set_bandwidth('full')\n        self.set_signal_type('auto')\n\n    def __del__(self):\n        if hasattr(self, '_state'):\n            _lib.opus_encoder_destroy(self._state)\n            self._state = None\n\n    def _create_state(self):\n        ret = ctypes.c_int()\n        return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))\n\n    def set_bitrate(self, kbps):\n        kbps = min(128, max(16, int(kbps)))\n\n        _lib.opus_encoder_ctl(self._state, CTL_SET_BITRATE, kbps * 1024)\n        return kbps\n\n    def set_bandwidth(self, req):\n        if req not in band_ctl:\n            raise KeyError('%r is not a valid bandwidth setting. Try one of: %s' % (req, ','.join(band_ctl)))\n\n        k = band_ctl[req]\n        _lib.opus_encoder_ctl(self._state, CTL_SET_BANDWIDTH, k)\n\n    def set_signal_type(self, req):\n        if req not in signal_ctl:\n            raise KeyError('%r is not a valid signal setting. Try one of: 
Try one of: %s' % (req, ','.join(signal_ctl)))\n\n k = signal_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_SIGNAL, k)\n\n def set_fec(self, enabled=True):\n _lib.opus_encoder_ctl(self._state, CTL_SET_FEC, 1 if enabled else 0)\n\n def set_expected_packet_loss_percent(self, percentage):\n _lib.opus_encoder_ctl(self._state, CTL_SET_PLP, min(100, max(0, int(percentage * 100))))\n\n def encode(self, pcm, frame_size):\n max_data_bytes = len(pcm)\n pcm = ctypes.cast(pcm, c_int16_ptr)\n data = (ctypes.c_char * max_data_bytes)()\n\n ret = _lib.opus_encode(self._state, pcm, frame_size, data, max_data_bytes)\n\n return array.array('b', data[:ret]).tobytes()\n", "path": "discord/opus.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2017 Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nimport ctypes\nimport ctypes.util\nimport array\nfrom .errors import DiscordException\nimport logging\nimport sys\nimport os.path\n\nlog = logging.getLogger(__name__)\nc_int_ptr = ctypes.POINTER(ctypes.c_int)\nc_int16_ptr = ctypes.POINTER(ctypes.c_int16)\nc_float_ptr = ctypes.POINTER(ctypes.c_float)\n\nclass EncoderStruct(ctypes.Structure):\n pass\n\nEncoderStructPtr = ctypes.POINTER(EncoderStruct)\n\ndef _err_lt(result, func, args):\n if result < 0:\n log.info('error has happened in {0.__name__}'.format(func))\n raise OpusError(result)\n return result\n\ndef _err_ne(result, func, args):\n ret = args[-1]._obj\n if ret.value != 0:\n log.info('error has happened in {0.__name__}'.format(func))\n raise OpusError(ret.value)\n return result\n\n# A list of exported functions.\n# The first argument is obviously the name.\n# The second one are the types of arguments it takes.\n# The third is the result type.\n# The fourth is the error handler.\nexported_functions = [\n ('opus_strerror',\n [ctypes.c_int], ctypes.c_char_p, None),\n ('opus_encoder_get_size',\n [ctypes.c_int], ctypes.c_int, None),\n ('opus_encoder_create',\n [ctypes.c_int, ctypes.c_int, ctypes.c_int, c_int_ptr], EncoderStructPtr, _err_ne),\n ('opus_encode',\n [EncoderStructPtr, c_int16_ptr, ctypes.c_int, ctypes.c_char_p, ctypes.c_int32], ctypes.c_int32, _err_lt),\n ('opus_encoder_ctl',\n None, ctypes.c_int32, _err_lt),\n ('opus_encoder_destroy',\n [EncoderStructPtr], None, None),\n]\n\ndef libopus_loader(name):\n # create the library...\n lib = ctypes.cdll.LoadLibrary(name)\n\n # register the functions...\n for item in exported_functions:\n try:\n func = getattr(lib, item[0])\n except Exception as 
e:\n raise e\n\n try:\n if item[1]:\n func.argtypes = item[1]\n\n func.restype = item[2]\n except KeyError:\n pass\n\n try:\n if item[3]:\n func.errcheck = item[3]\n except KeyError:\n log.exception(\"Error assigning check function to %s\", func)\n\n return lib\n\ntry:\n if sys.platform == 'win32':\n _basedir = os.path.dirname(os.path.abspath(__file__))\n _bitness = 'x64' if sys.maxsize > 2**32 else 'x86'\n _filename = os.path.join(_basedir, 'bin', 'libopus-0.{}.dll'.format(_bitness))\n _lib = libopus_loader(_filename)\n else:\n _lib = libopus_loader(ctypes.util.find_library('opus'))\nexcept Exception as e:\n _lib = None\n\ndef load_opus(name):\n \"\"\"Loads the libopus shared library for use with voice.\n\n If this function is not called then the library uses the function\n `ctypes.util.find_library`__ and then loads that one\n if available.\n\n .. _find library: https://docs.python.org/3.5/library/ctypes.html#finding-shared-libraries\n __ `find library`_\n\n Not loading a library leads to voice not working.\n\n This function propagates the exceptions thrown.\n\n Warning\n --------\n The bitness of the library must match the bitness of your python\n interpreter. If the library is 64-bit then your python interpreter\n must be 64-bit as well. Usually if there's a mismatch in bitness then\n the load will throw an exception.\n\n Note\n ----\n On Windows, the .dll extension is not necessary. However, on Linux\n the full extension is required to load the library, e.g. ``libopus.so.1``.\n On Linux however, `find library`_ will usually find the library automatically\n without you having to call this.\n\n Parameters\n ----------\n name: str\n The filename of the shared library.\n \"\"\"\n global _lib\n _lib = libopus_loader(name)\n\ndef is_loaded():\n \"\"\"Function to check if opus lib is successfully loaded either\n via the ``ctypes.util.find_library`` call of :func:`load_opus`.\n\n This must return ``True`` for voice to work.\n\n Returns\n -------\n bool\n Indicates if the opus library has been loaded.\n \"\"\"\n global _lib\n return _lib is not None\n\nclass OpusError(DiscordException):\n \"\"\"An exception that is thrown for libopus related errors.\n\n Attributes\n ----------\n code : :class:`int`\n The error code returned.\n \"\"\"\n\n def __init__(self, code):\n self.code = code\n msg = _lib.opus_strerror(self.code).decode('utf-8')\n log.info('\"%s\" has happened', msg)\n super().__init__(msg)\n\nclass OpusNotLoaded(DiscordException):\n \"\"\"An exception that is thrown for when libopus is not loaded.\"\"\"\n pass\n\n\n# Some constants...\nOK = 0\nAPPLICATION_AUDIO = 2049\nAPPLICATION_VOIP = 2048\nAPPLICATION_LOWDELAY = 2051\nCTL_SET_BITRATE = 4002\nCTL_SET_BANDWIDTH = 4008\nCTL_SET_FEC = 4012\nCTL_SET_PLP = 4014\nCTL_SET_SIGNAL = 4024\n\nband_ctl = {\n 'narrow': 1101,\n 'medium': 1102,\n 'wide': 1103,\n 'superwide': 1104,\n 'full': 1105,\n}\n\nsignal_ctl = {\n 'auto': -1000,\n 'voice': 3001,\n 'music': 3002,\n}\n\nclass Encoder:\n SAMPLING_RATE = 48000\n CHANNELS = 2\n FRAME_LENGTH = 20\n SAMPLE_SIZE = 4 # (bit_rate / 8) * CHANNELS (bit_rate == 16)\n SAMPLES_PER_FRAME = int(SAMPLING_RATE / 1000 * FRAME_LENGTH)\n\n FRAME_SIZE = SAMPLES_PER_FRAME * SAMPLE_SIZE\n\n def __init__(self, application=APPLICATION_AUDIO):\n self.application = application\n\n if not is_loaded():\n raise OpusNotLoaded()\n\n self._state = self._create_state()\n self.set_bitrate(128)\n self.set_fec(True)\n self.set_expected_packet_loss_percent(0.15)\n self.set_bandwidth('full')\n self.set_signal_type('auto')\n\n def 
__del__(self):\n if hasattr(self, '_state'):\n _lib.opus_encoder_destroy(self._state)\n self._state = None\n\n def _create_state(self):\n ret = ctypes.c_int()\n return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))\n\n def set_bitrate(self, kbps):\n kbps = min(128, max(16, int(kbps)))\n\n _lib.opus_encoder_ctl(self._state, CTL_SET_BITRATE, kbps * 1024)\n return kbps\n\n def set_bandwidth(self, req):\n if req not in band_ctl:\n raise KeyError('%r is not a valid bandwidth setting. Try one of: %s' % (req, ','.join(band_ctl)))\n\n k = band_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_BANDWIDTH, k)\n\n def set_signal_type(self, req):\n if req not in signal_ctl:\n raise KeyError('%r is not a valid signal setting. Try one of: %s' % (req, ','.join(signal_ctl)))\n\n k = signal_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_SIGNAL, k)\n\n def set_fec(self, enabled=True):\n _lib.opus_encoder_ctl(self._state, CTL_SET_FEC, 1 if enabled else 0)\n\n def set_expected_packet_loss_percent(self, percentage):\n _lib.opus_encoder_ctl(self._state, CTL_SET_PLP, min(100, max(0, int(percentage * 100))))\n\n def encode(self, pcm, frame_size):\n max_data_bytes = len(pcm)\n pcm = ctypes.cast(pcm, c_int16_ptr)\n data = (ctypes.c_char * max_data_bytes)()\n\n ret = _lib.opus_encode(self._state, pcm, frame_size, data, max_data_bytes)\n\n return array.array('b', data[:ret]).tobytes()\n", "path": "discord/opus.py"}]}
3,528
133
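The golden diff in the row above turns on a ctypes subtlety: `opus_encoder_create` returns an `EncoderStructPtr`, so inside the `errcheck` callback `result` is a pointer object with no `.value` attribute; the actual error code lives in the `c_int` the caller passed by reference as the last argument. A minimal standalone sketch of that pattern — no libopus required, and note that `_obj` on the object returned by `ctypes.byref` is a CPython implementation detail rather than documented API:

```python
import ctypes

def _err_ne(result, func, args):
    # `result` is the foreign function's return value (here an opaque
    # struct pointer), so it has no .value. The out-parameter the caller
    # passed as ctypes.byref(ret) arrives in `args`; ._obj unwraps it
    # back to the underlying c_int whose .value holds the error code.
    ret = args[-1]._obj
    if ret.value != 0:
        raise RuntimeError("error %d in %s" % (ret.value, func.__name__))
    return result

# Demonstrating the byref/_obj relationship without loading any library:
code = ctypes.c_int(-1)
assert ctypes.byref(code)._obj is code
assert ctypes.byref(code)._obj.value == -1
```

This is exactly why the original `result.value` check raised `AttributeError: 'LP_EncoderStruct' object has no attribute 'value'` the moment a voice connection was attempted.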
gh_patches_debug_12988
rasdani/github-patches
git_diff
elastic__ecs-1488
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- `doc_values` parameter not set in Beats artifact Certain fields have `index: false` and `doc_values: false` in their ECS definition, like `event.original`: https://github.com/elastic/ecs/blob/master/schemas/event.yml#L577-L599 When `doc_values: false` is defined in the field definition, it's not being added to the maintained Beats fields YAML artifact: https://github.com/elastic/ecs/blob/master/generated/beats/fields.ecs.yml#L1737-L1750 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `scripts/generators/beats.py` Content: ``` 1 from os.path import join 2 from collections import OrderedDict 3 from generators import ecs_helpers 4 5 6 def generate(ecs_nested, ecs_version, out_dir): 7 # Load temporary allowlist for default_fields workaround. 8 df_allowlist = ecs_helpers.yaml_load('scripts/generators/beats_default_fields_allowlist.yml') 9 10 # base first 11 beats_fields = fieldset_field_array(ecs_nested['base']['fields'], df_allowlist, ecs_nested['base']['prefix']) 12 13 allowed_fieldset_keys = ['name', 'title', 'group', 'description', 'footnote', 'type'] 14 # other fieldsets 15 for fieldset_name in sorted(ecs_nested): 16 if 'base' == fieldset_name: 17 continue 18 fieldset = ecs_nested[fieldset_name] 19 20 # Handle when `root:true` 21 if fieldset.get('root', False): 22 beats_fields.extend(fieldset_field_array(fieldset['fields'], df_allowlist, fieldset['prefix'])) 23 continue 24 25 beats_field = ecs_helpers.dict_copy_keys_ordered(fieldset, allowed_fieldset_keys) 26 beats_field['fields'] = fieldset_field_array(fieldset['fields'], df_allowlist, fieldset['prefix']) 27 beats_fields.append(beats_field) 28 29 beats_file = OrderedDict() 30 beats_file['key'] = 'ecs' 31 beats_file['title'] = 'ECS' 32 beats_file['description'] = 'ECS Fields.' 33 beats_file['fields'] = beats_fields 34 35 write_beats_yaml(beats_file, ecs_version, out_dir) 36 37 38 def fieldset_field_array(source_fields, df_allowlist, fieldset_prefix): 39 allowed_keys = ['name', 'level', 'required', 'type', 'object_type', 40 'ignore_above', 'multi_fields', 'format', 'input_format', 41 'output_format', 'output_precision', 'description', 42 'example', 'enabled', 'index', 'path', 'scaling_factor'] 43 multi_fields_allowed_keys = ['name', 'type', 'norms', 'default_field', 'normalizer', 'ignore_above'] 44 45 fields = [] 46 for nested_field_name in source_fields: 47 ecs_field = source_fields[nested_field_name] 48 beats_field = ecs_helpers.dict_copy_keys_ordered(ecs_field, allowed_keys) 49 if '' == fieldset_prefix: 50 contextual_name = nested_field_name 51 else: 52 contextual_name = '.'.join(nested_field_name.split('.')[1:]) 53 54 cleaned_multi_fields = [] 55 if 'multi_fields' in ecs_field: 56 for mf in ecs_field['multi_fields']: 57 # Set default_field if necessary. Avoid adding the key if the parent 58 # field already is marked with default_field: false. 
59 if not mf['flat_name'] in df_allowlist and ecs_field['flat_name'] in df_allowlist: 60 mf['default_field'] = False 61 cleaned_multi_fields.append( 62 ecs_helpers.dict_copy_keys_ordered(mf, multi_fields_allowed_keys)) 63 beats_field['multi_fields'] = cleaned_multi_fields 64 65 beats_field['name'] = contextual_name 66 67 if not ecs_field['flat_name'] in df_allowlist: 68 beats_field['default_field'] = False 69 70 fields.append(beats_field) 71 return sorted(fields, key=lambda x: x['name']) 72 73 # Helpers 74 75 76 def write_beats_yaml(beats_file, ecs_version, out_dir): 77 ecs_helpers.make_dirs(join(out_dir, 'beats')) 78 warning = file_header().format(version=ecs_version) 79 ecs_helpers.yaml_dump(join(out_dir, 'beats/fields.ecs.yml'), [beats_file], preamble=warning) 80 81 82 # Templates 83 84 85 def file_header(): 86 return """ 87 # WARNING! Do not edit this file directly, it was generated by the ECS project, 88 # based on ECS version {version}. 89 # Please visit https://github.com/elastic/ecs to suggest changes to ECS fields. 90 91 """.lstrip() 92 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/scripts/generators/beats.py b/scripts/generators/beats.py --- a/scripts/generators/beats.py +++ b/scripts/generators/beats.py @@ -39,7 +39,8 @@ allowed_keys = ['name', 'level', 'required', 'type', 'object_type', 'ignore_above', 'multi_fields', 'format', 'input_format', 'output_format', 'output_precision', 'description', - 'example', 'enabled', 'index', 'path', 'scaling_factor'] + 'example', 'enabled', 'index', 'doc_values', 'path', + 'scaling_factor'] multi_fields_allowed_keys = ['name', 'type', 'norms', 'default_field', 'normalizer', 'ignore_above'] fields = []
{"golden_diff": "diff --git a/scripts/generators/beats.py b/scripts/generators/beats.py\n--- a/scripts/generators/beats.py\n+++ b/scripts/generators/beats.py\n@@ -39,7 +39,8 @@\n allowed_keys = ['name', 'level', 'required', 'type', 'object_type',\n 'ignore_above', 'multi_fields', 'format', 'input_format',\n 'output_format', 'output_precision', 'description',\n- 'example', 'enabled', 'index', 'path', 'scaling_factor']\n+ 'example', 'enabled', 'index', 'doc_values', 'path',\n+ 'scaling_factor']\n multi_fields_allowed_keys = ['name', 'type', 'norms', 'default_field', 'normalizer', 'ignore_above']\n \n fields = []\n", "issue": "`doc_values` parameter not set in Beats artifact \nCertain fields have `index: false` and `doc_values: false` in their ECS definition, like `event.original`:\r\n\r\nhttps://github.com/elastic/ecs/blob/master/schemas/event.yml#L577-L599\r\n\r\nWhen `doc_values: false` is defined in the field definition, it's not being added to the maintained Beats fields YAML artifact:\r\n\r\nhttps://github.com/elastic/ecs/blob/master/generated/beats/fields.ecs.yml#L1737-L1750\n", "before_files": [{"content": "from os.path import join\nfrom collections import OrderedDict\nfrom generators import ecs_helpers\n\n\ndef generate(ecs_nested, ecs_version, out_dir):\n # Load temporary allowlist for default_fields workaround.\n df_allowlist = ecs_helpers.yaml_load('scripts/generators/beats_default_fields_allowlist.yml')\n\n # base first\n beats_fields = fieldset_field_array(ecs_nested['base']['fields'], df_allowlist, ecs_nested['base']['prefix'])\n\n allowed_fieldset_keys = ['name', 'title', 'group', 'description', 'footnote', 'type']\n # other fieldsets\n for fieldset_name in sorted(ecs_nested):\n if 'base' == fieldset_name:\n continue\n fieldset = ecs_nested[fieldset_name]\n\n # Handle when `root:true`\n if fieldset.get('root', False):\n beats_fields.extend(fieldset_field_array(fieldset['fields'], df_allowlist, fieldset['prefix']))\n continue\n\n beats_field = ecs_helpers.dict_copy_keys_ordered(fieldset, allowed_fieldset_keys)\n beats_field['fields'] = fieldset_field_array(fieldset['fields'], df_allowlist, fieldset['prefix'])\n beats_fields.append(beats_field)\n\n beats_file = OrderedDict()\n beats_file['key'] = 'ecs'\n beats_file['title'] = 'ECS'\n beats_file['description'] = 'ECS Fields.'\n beats_file['fields'] = beats_fields\n\n write_beats_yaml(beats_file, ecs_version, out_dir)\n\n\ndef fieldset_field_array(source_fields, df_allowlist, fieldset_prefix):\n allowed_keys = ['name', 'level', 'required', 'type', 'object_type',\n 'ignore_above', 'multi_fields', 'format', 'input_format',\n 'output_format', 'output_precision', 'description',\n 'example', 'enabled', 'index', 'path', 'scaling_factor']\n multi_fields_allowed_keys = ['name', 'type', 'norms', 'default_field', 'normalizer', 'ignore_above']\n\n fields = []\n for nested_field_name in source_fields:\n ecs_field = source_fields[nested_field_name]\n beats_field = ecs_helpers.dict_copy_keys_ordered(ecs_field, allowed_keys)\n if '' == fieldset_prefix:\n contextual_name = nested_field_name\n else:\n contextual_name = '.'.join(nested_field_name.split('.')[1:])\n\n cleaned_multi_fields = []\n if 'multi_fields' in ecs_field:\n for mf in ecs_field['multi_fields']:\n # Set default_field if necessary. 
Avoid adding the key if the parent\n # field already is marked with default_field: false.\n if not mf['flat_name'] in df_allowlist and ecs_field['flat_name'] in df_allowlist:\n mf['default_field'] = False\n cleaned_multi_fields.append(\n ecs_helpers.dict_copy_keys_ordered(mf, multi_fields_allowed_keys))\n beats_field['multi_fields'] = cleaned_multi_fields\n\n beats_field['name'] = contextual_name\n\n if not ecs_field['flat_name'] in df_allowlist:\n beats_field['default_field'] = False\n\n fields.append(beats_field)\n return sorted(fields, key=lambda x: x['name'])\n\n# Helpers\n\n\ndef write_beats_yaml(beats_file, ecs_version, out_dir):\n ecs_helpers.make_dirs(join(out_dir, 'beats'))\n warning = file_header().format(version=ecs_version)\n ecs_helpers.yaml_dump(join(out_dir, 'beats/fields.ecs.yml'), [beats_file], preamble=warning)\n\n\n# Templates\n\n\ndef file_header():\n return \"\"\"\n# WARNING! Do not edit this file directly, it was generated by the ECS project,\n# based on ECS version {version}.\n# Please visit https://github.com/elastic/ecs to suggest changes to ECS fields.\n\n\"\"\".lstrip()\n", "path": "scripts/generators/beats.py"}], "after_files": [{"content": "from os.path import join\nfrom collections import OrderedDict\nfrom generators import ecs_helpers\n\n\ndef generate(ecs_nested, ecs_version, out_dir):\n # Load temporary allowlist for default_fields workaround.\n df_allowlist = ecs_helpers.yaml_load('scripts/generators/beats_default_fields_allowlist.yml')\n\n # base first\n beats_fields = fieldset_field_array(ecs_nested['base']['fields'], df_allowlist, ecs_nested['base']['prefix'])\n\n allowed_fieldset_keys = ['name', 'title', 'group', 'description', 'footnote', 'type']\n # other fieldsets\n for fieldset_name in sorted(ecs_nested):\n if 'base' == fieldset_name:\n continue\n fieldset = ecs_nested[fieldset_name]\n\n # Handle when `root:true`\n if fieldset.get('root', False):\n beats_fields.extend(fieldset_field_array(fieldset['fields'], df_allowlist, fieldset['prefix']))\n continue\n\n beats_field = ecs_helpers.dict_copy_keys_ordered(fieldset, allowed_fieldset_keys)\n beats_field['fields'] = fieldset_field_array(fieldset['fields'], df_allowlist, fieldset['prefix'])\n beats_fields.append(beats_field)\n\n beats_file = OrderedDict()\n beats_file['key'] = 'ecs'\n beats_file['title'] = 'ECS'\n beats_file['description'] = 'ECS Fields.'\n beats_file['fields'] = beats_fields\n\n write_beats_yaml(beats_file, ecs_version, out_dir)\n\n\ndef fieldset_field_array(source_fields, df_allowlist, fieldset_prefix):\n allowed_keys = ['name', 'level', 'required', 'type', 'object_type',\n 'ignore_above', 'multi_fields', 'format', 'input_format',\n 'output_format', 'output_precision', 'description',\n 'example', 'enabled', 'index', 'doc_values', 'path',\n 'scaling_factor']\n multi_fields_allowed_keys = ['name', 'type', 'norms', 'default_field', 'normalizer', 'ignore_above']\n\n fields = []\n for nested_field_name in source_fields:\n ecs_field = source_fields[nested_field_name]\n beats_field = ecs_helpers.dict_copy_keys_ordered(ecs_field, allowed_keys)\n if '' == fieldset_prefix:\n contextual_name = nested_field_name\n else:\n contextual_name = '.'.join(nested_field_name.split('.')[1:])\n\n cleaned_multi_fields = []\n if 'multi_fields' in ecs_field:\n for mf in ecs_field['multi_fields']:\n # Set default_field if necessary. 
Avoid adding the key if the parent\n # field already is marked with default_field: false.\n if not mf['flat_name'] in df_allowlist and ecs_field['flat_name'] in df_allowlist:\n mf['default_field'] = False\n cleaned_multi_fields.append(\n ecs_helpers.dict_copy_keys_ordered(mf, multi_fields_allowed_keys))\n beats_field['multi_fields'] = cleaned_multi_fields\n\n beats_field['name'] = contextual_name\n\n if not ecs_field['flat_name'] in df_allowlist:\n beats_field['default_field'] = False\n\n fields.append(beats_field)\n return sorted(fields, key=lambda x: x['name'])\n\n# Helpers\n\n\ndef write_beats_yaml(beats_file, ecs_version, out_dir):\n ecs_helpers.make_dirs(join(out_dir, 'beats'))\n warning = file_header().format(version=ecs_version)\n ecs_helpers.yaml_dump(join(out_dir, 'beats/fields.ecs.yml'), [beats_file], preamble=warning)\n\n\n# Templates\n\n\ndef file_header():\n return \"\"\"\n# WARNING! Do not edit this file directly, it was generated by the ECS project,\n# based on ECS version {version}.\n# Please visit https://github.com/elastic/ecs to suggest changes to ECS fields.\n\n\"\"\".lstrip()\n", "path": "scripts/generators/beats.py"}]}
1,380
175
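The one-line fix in the ECS row above works because the Beats generator copies each field through a key allow-list, so any attribute absent from `allowed_keys` is silently dropped from the artifact — which is how `doc_values: false` vanished. `dict_copy_keys_ordered` lives in `ecs_helpers`; the version below is an assumption inferred from how the generator calls it, included only to make the failure mode reproducible in isolation:

```python
from collections import OrderedDict

def dict_copy_keys_ordered(src, allowed_keys):
    # Keep only allow-listed keys, in the allow-list's order.
    return OrderedDict((k, src[k]) for k in allowed_keys if k in src)

field = {"name": "event.original", "type": "keyword",
         "index": False, "doc_values": False}

before = dict_copy_keys_ordered(field, ["name", "type", "index"])
after = dict_copy_keys_ordered(field, ["name", "type", "index", "doc_values"])

assert "doc_values" not in before    # silently dropped before the fix
assert after["doc_values"] is False  # emitted once allow-listed
```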
gh_patches_debug_8516
rasdani/github-patches
git_diff
iterative__dvc-10005
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- exp save: Short option for --message is -M, but for dvc exp run it is -m It would be nice if the short options of `dvc exp run` and `dvc exp save` for specifying a commit message would be identical. Also, best to use the same options as one would use for `git commit`, i.e., `-m` instead of `-M`. ``` usage: dvc experiments save [-h] [-q | -v] [-f] [--json] [-n <name>] [-I <path>] [-M MESSAGE] Save current workspace as an experiment. Documentation: <https://man.dvc.org/exp/save> options: -h, --help show this help message and exit -q, --quiet Be quiet. -v, --verbose Be verbose. -f, --force Replace experiment if it already exists. --json Show output in JSON format. -n <name>, --name <name> Human-readable experiment name. If not specified, a name will be auto-generated. -I <path>, --include-untracked <path> List of untracked paths to include in the experiment. -M MESSAGE, --message MESSAGE Custom commit message to use when committing the experiment. ``` DVC CLI v3.22.1 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `dvc/commands/experiments/save.py` Content: ``` 1 import argparse 2 import logging 3 4 from dvc.cli.command import CmdBase 5 from dvc.cli.utils import append_doc_link 6 from dvc.exceptions import DvcException 7 from dvc.ui import ui 8 9 logger = logging.getLogger(__name__) 10 11 12 class CmdExperimentsSave(CmdBase): 13 def run(self): 14 try: 15 ref = self.repo.experiments.save( 16 name=self.args.name, 17 force=self.args.force, 18 include_untracked=self.args.include_untracked, 19 message=self.args.message, 20 ) 21 except DvcException: 22 logger.exception("failed to save experiment") 23 return 1 24 25 if self.args.json: 26 ui.write_json({"ref": ref}) 27 else: 28 name = self.repo.experiments.get_exact_name([ref])[ref] 29 ui.write(f"Experiment has been saved as: {name}") 30 31 return 0 32 33 34 def add_parser(experiments_subparsers, parent_parser): 35 EXPERIMENTS_SAVE_HELP = "Save current workspace as an experiment." 36 save_parser = experiments_subparsers.add_parser( 37 "save", 38 parents=[parent_parser], 39 description=append_doc_link(EXPERIMENTS_SAVE_HELP, "exp/save"), 40 help=EXPERIMENTS_SAVE_HELP, 41 formatter_class=argparse.RawDescriptionHelpFormatter, 42 ) 43 save_parser.add_argument( 44 "-f", 45 "--force", 46 action="store_true", 47 default=False, 48 help="Replace experiment if it already exists.", 49 ) 50 save_parser.add_argument( 51 "--json", 52 action="store_true", 53 default=False, 54 help="Show output in JSON format.", 55 ) 56 save_parser.add_argument( 57 "-n", 58 "--name", 59 default=None, 60 help=( 61 "Human-readable experiment name. If not specified, a name will " 62 "be auto-generated." 
63 ), 64 metavar="<name>", 65 ) 66 save_parser.add_argument( 67 "-I", 68 "--include-untracked", 69 action="append", 70 default=[], 71 help="List of untracked paths to include in the experiment.", 72 metavar="<path>", 73 ) 74 save_parser.add_argument( 75 "-M", 76 "--message", 77 type=str, 78 default=None, 79 help="Custom commit message to use when committing the experiment.", 80 ) 81 save_parser.set_defaults(func=CmdExperimentsSave) 82 ``` Path: `dvc/commands/experiments/exec_run.py` Content: ``` 1 import logging 2 3 from dvc.cli.command import CmdBaseNoRepo 4 5 logger = logging.getLogger(__name__) 6 7 8 class CmdExecutorRun(CmdBaseNoRepo): 9 """Run an experiment executor.""" 10 11 def run(self): 12 from dvc.repo.experiments.executor.base import BaseExecutor, ExecutorInfo 13 from dvc.utils.serialize import load_json 14 15 info = ExecutorInfo.from_dict(load_json(self.args.infofile)) 16 BaseExecutor.reproduce( 17 info=info, 18 rev="", 19 queue=None, 20 log_level=logger.getEffectiveLevel(), 21 infofile=self.args.infofile, 22 copy_paths=self.args.copy_paths, 23 message=self.args.message, 24 ) 25 return 0 26 27 28 def add_parser(experiments_subparsers, parent_parser): 29 EXEC_RUN_HELP = "Run an experiment executor." 30 exec_run_parser = experiments_subparsers.add_parser( 31 "exec-run", 32 parents=[parent_parser], 33 description=EXEC_RUN_HELP, 34 add_help=False, 35 ) 36 exec_run_parser.add_argument( 37 "--infofile", 38 help="Path to executor info file", 39 default=None, 40 ) 41 exec_run_parser.add_argument( 42 "-C", 43 "--copy-paths", 44 action="append", 45 default=[], 46 help=( 47 "List of ignored or untracked paths to copy into the temp directory." 48 " Only used if `--temp` or `--queue` is specified." 49 ), 50 ) 51 exec_run_parser.add_argument( 52 "-M", 53 "--message", 54 type=str, 55 default=None, 56 help="Custom commit message to use when committing the experiment.", 57 ) 58 exec_run_parser.set_defaults(func=CmdExecutorRun) 59 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/dvc/commands/experiments/exec_run.py b/dvc/commands/experiments/exec_run.py --- a/dvc/commands/experiments/exec_run.py +++ b/dvc/commands/experiments/exec_run.py @@ -49,7 +49,7 @@ ), ) exec_run_parser.add_argument( - "-M", + "-m", "--message", type=str, default=None, diff --git a/dvc/commands/experiments/save.py b/dvc/commands/experiments/save.py --- a/dvc/commands/experiments/save.py +++ b/dvc/commands/experiments/save.py @@ -72,7 +72,7 @@ metavar="<path>", ) save_parser.add_argument( - "-M", + "-m", "--message", type=str, default=None,
{"golden_diff": "diff --git a/dvc/commands/experiments/exec_run.py b/dvc/commands/experiments/exec_run.py\n--- a/dvc/commands/experiments/exec_run.py\n+++ b/dvc/commands/experiments/exec_run.py\n@@ -49,7 +49,7 @@\n ),\n )\n exec_run_parser.add_argument(\n- \"-M\",\n+ \"-m\",\n \"--message\",\n type=str,\n default=None,\ndiff --git a/dvc/commands/experiments/save.py b/dvc/commands/experiments/save.py\n--- a/dvc/commands/experiments/save.py\n+++ b/dvc/commands/experiments/save.py\n@@ -72,7 +72,7 @@\n metavar=\"<path>\",\n )\n save_parser.add_argument(\n- \"-M\",\n+ \"-m\",\n \"--message\",\n type=str,\n default=None,\n", "issue": "exp save: Short option for --message is -M, but for dvc exp run it is -m\nIt would be nice if the short options of `dvc exp run` and `dvc exp save` for specifying a commit message would be identical. Also, best to use the same options as one would use for `git commit`, i.e., `-m` instead of `-M`.\r\n\r\n```\r\nusage: dvc experiments save [-h] [-q | -v] [-f] [--json] [-n <name>] [-I <path>] [-M MESSAGE]\r\n\r\nSave current workspace as an experiment.\r\nDocumentation: <https://man.dvc.org/exp/save>\r\n\r\noptions:\r\n -h, --help show this help message and exit\r\n -q, --quiet Be quiet.\r\n -v, --verbose Be verbose.\r\n -f, --force Replace experiment if it already exists.\r\n --json Show output in JSON format.\r\n -n <name>, --name <name>\r\n Human-readable experiment name. If not specified, a name will be auto-generated.\r\n -I <path>, --include-untracked <path>\r\n List of untracked paths to include in the experiment.\r\n -M MESSAGE, --message MESSAGE\r\n Custom commit message to use when committing the experiment.\r\n```\r\n\r\nDVC CLI v3.22.1\n", "before_files": [{"content": "import argparse\nimport logging\n\nfrom dvc.cli.command import CmdBase\nfrom dvc.cli.utils import append_doc_link\nfrom dvc.exceptions import DvcException\nfrom dvc.ui import ui\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdExperimentsSave(CmdBase):\n def run(self):\n try:\n ref = self.repo.experiments.save(\n name=self.args.name,\n force=self.args.force,\n include_untracked=self.args.include_untracked,\n message=self.args.message,\n )\n except DvcException:\n logger.exception(\"failed to save experiment\")\n return 1\n\n if self.args.json:\n ui.write_json({\"ref\": ref})\n else:\n name = self.repo.experiments.get_exact_name([ref])[ref]\n ui.write(f\"Experiment has been saved as: {name}\")\n\n return 0\n\n\ndef add_parser(experiments_subparsers, parent_parser):\n EXPERIMENTS_SAVE_HELP = \"Save current workspace as an experiment.\"\n save_parser = experiments_subparsers.add_parser(\n \"save\",\n parents=[parent_parser],\n description=append_doc_link(EXPERIMENTS_SAVE_HELP, \"exp/save\"),\n help=EXPERIMENTS_SAVE_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n save_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Replace experiment if it already exists.\",\n )\n save_parser.add_argument(\n \"--json\",\n action=\"store_true\",\n default=False,\n help=\"Show output in JSON format.\",\n )\n save_parser.add_argument(\n \"-n\",\n \"--name\",\n default=None,\n help=(\n \"Human-readable experiment name. 
If not specified, a name will \"\n \"be auto-generated.\"\n ),\n metavar=\"<name>\",\n )\n save_parser.add_argument(\n \"-I\",\n \"--include-untracked\",\n action=\"append\",\n default=[],\n help=\"List of untracked paths to include in the experiment.\",\n metavar=\"<path>\",\n )\n save_parser.add_argument(\n \"-M\",\n \"--message\",\n type=str,\n default=None,\n help=\"Custom commit message to use when committing the experiment.\",\n )\n save_parser.set_defaults(func=CmdExperimentsSave)\n", "path": "dvc/commands/experiments/save.py"}, {"content": "import logging\n\nfrom dvc.cli.command import CmdBaseNoRepo\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdExecutorRun(CmdBaseNoRepo):\n \"\"\"Run an experiment executor.\"\"\"\n\n def run(self):\n from dvc.repo.experiments.executor.base import BaseExecutor, ExecutorInfo\n from dvc.utils.serialize import load_json\n\n info = ExecutorInfo.from_dict(load_json(self.args.infofile))\n BaseExecutor.reproduce(\n info=info,\n rev=\"\",\n queue=None,\n log_level=logger.getEffectiveLevel(),\n infofile=self.args.infofile,\n copy_paths=self.args.copy_paths,\n message=self.args.message,\n )\n return 0\n\n\ndef add_parser(experiments_subparsers, parent_parser):\n EXEC_RUN_HELP = \"Run an experiment executor.\"\n exec_run_parser = experiments_subparsers.add_parser(\n \"exec-run\",\n parents=[parent_parser],\n description=EXEC_RUN_HELP,\n add_help=False,\n )\n exec_run_parser.add_argument(\n \"--infofile\",\n help=\"Path to executor info file\",\n default=None,\n )\n exec_run_parser.add_argument(\n \"-C\",\n \"--copy-paths\",\n action=\"append\",\n default=[],\n help=(\n \"List of ignored or untracked paths to copy into the temp directory.\"\n \" Only used if `--temp` or `--queue` is specified.\"\n ),\n )\n exec_run_parser.add_argument(\n \"-M\",\n \"--message\",\n type=str,\n default=None,\n help=\"Custom commit message to use when committing the experiment.\",\n )\n exec_run_parser.set_defaults(func=CmdExecutorRun)\n", "path": "dvc/commands/experiments/exec_run.py"}], "after_files": [{"content": "import argparse\nimport logging\n\nfrom dvc.cli.command import CmdBase\nfrom dvc.cli.utils import append_doc_link\nfrom dvc.exceptions import DvcException\nfrom dvc.ui import ui\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdExperimentsSave(CmdBase):\n def run(self):\n try:\n ref = self.repo.experiments.save(\n name=self.args.name,\n force=self.args.force,\n include_untracked=self.args.include_untracked,\n message=self.args.message,\n )\n except DvcException:\n logger.exception(\"failed to save experiment\")\n return 1\n\n if self.args.json:\n ui.write_json({\"ref\": ref})\n else:\n name = self.repo.experiments.get_exact_name([ref])[ref]\n ui.write(f\"Experiment has been saved as: {name}\")\n\n return 0\n\n\ndef add_parser(experiments_subparsers, parent_parser):\n EXPERIMENTS_SAVE_HELP = \"Save current workspace as an experiment.\"\n save_parser = experiments_subparsers.add_parser(\n \"save\",\n parents=[parent_parser],\n description=append_doc_link(EXPERIMENTS_SAVE_HELP, \"exp/save\"),\n help=EXPERIMENTS_SAVE_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n save_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Replace experiment if it already exists.\",\n )\n save_parser.add_argument(\n \"--json\",\n action=\"store_true\",\n default=False,\n help=\"Show output in JSON format.\",\n )\n save_parser.add_argument(\n \"-n\",\n \"--name\",\n default=None,\n help=(\n \"Human-readable 
experiment name. If not specified, a name will \"\n \"be auto-generated.\"\n ),\n metavar=\"<name>\",\n )\n save_parser.add_argument(\n \"-I\",\n \"--include-untracked\",\n action=\"append\",\n default=[],\n help=\"List of untracked paths to include in the experiment.\",\n metavar=\"<path>\",\n )\n save_parser.add_argument(\n \"-m\",\n \"--message\",\n type=str,\n default=None,\n help=\"Custom commit message to use when committing the experiment.\",\n )\n save_parser.set_defaults(func=CmdExperimentsSave)\n", "path": "dvc/commands/experiments/save.py"}, {"content": "import logging\n\nfrom dvc.cli.command import CmdBaseNoRepo\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdExecutorRun(CmdBaseNoRepo):\n \"\"\"Run an experiment executor.\"\"\"\n\n def run(self):\n from dvc.repo.experiments.executor.base import BaseExecutor, ExecutorInfo\n from dvc.utils.serialize import load_json\n\n info = ExecutorInfo.from_dict(load_json(self.args.infofile))\n BaseExecutor.reproduce(\n info=info,\n rev=\"\",\n queue=None,\n log_level=logger.getEffectiveLevel(),\n infofile=self.args.infofile,\n copy_paths=self.args.copy_paths,\n message=self.args.message,\n )\n return 0\n\n\ndef add_parser(experiments_subparsers, parent_parser):\n EXEC_RUN_HELP = \"Run an experiment executor.\"\n exec_run_parser = experiments_subparsers.add_parser(\n \"exec-run\",\n parents=[parent_parser],\n description=EXEC_RUN_HELP,\n add_help=False,\n )\n exec_run_parser.add_argument(\n \"--infofile\",\n help=\"Path to executor info file\",\n default=None,\n )\n exec_run_parser.add_argument(\n \"-C\",\n \"--copy-paths\",\n action=\"append\",\n default=[],\n help=(\n \"List of ignored or untracked paths to copy into the temp directory.\"\n \" Only used if `--temp` or `--queue` is specified.\"\n ),\n )\n exec_run_parser.add_argument(\n \"-m\",\n \"--message\",\n type=str,\n default=None,\n help=\"Custom commit message to use when committing the experiment.\",\n )\n exec_run_parser.set_defaults(func=CmdExecutorRun)\n", "path": "dvc/commands/experiments/exec_run.py"}]}
1,663
184
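The DVC diff above is a pure interface change: both subcommands keep `--message` but swap the short flag from `-M` to `-m`, matching `git commit` and `dvc exp run`. A minimal argparse sketch of the resulting option — the parser name and sample arguments are illustrative only:

```python
import argparse

parser = argparse.ArgumentParser(prog="dvc experiments save")
parser.add_argument(
    "-m", "--message",  # git-style short option, as adopted by the fix
    type=str,
    default=None,
    help="Custom commit message to use when committing the experiment.",
)

args = parser.parse_args(["-m", "tune dropout"])
assert args.message == "tune dropout"
```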
gh_patches_debug_22467
rasdani/github-patches
git_diff
pre-commit__pre-commit-400
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Stashed changes lost if hook fails with non-UTF-8 diff containing trailing whitespace Hi, A colleague almost lost all the changes she was working on after launching a `git commit` (with zero file added) and `pre-commit` crashing without restoring its [patch](https://github.com/pre-commit/pre-commit/blob/master/pre_commit/staged_files_only.py#L15). Here is the terminal message she got: ``` [WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes... An unexpected error has occurred: CalledProcessError: Command: ['git', 'apply', 'C:\\Users\\toto\\.pre-commit\\patch1471341002'] ``` This seems very similar to a past solved issue: https://github.com/pre-commit/pre-commit/issues/176 I think it had to do with CRLF conversion. I'm going to try to reproduce this. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pre_commit/staged_files_only.py` Content: ``` 1 from __future__ import unicode_literals 2 3 import contextlib 4 import io 5 import logging 6 import time 7 8 from pre_commit.util import CalledProcessError 9 10 11 logger = logging.getLogger('pre_commit') 12 13 14 @contextlib.contextmanager 15 def staged_files_only(cmd_runner): 16 """Clear any unstaged changes from the git working directory inside this 17 context. 18 19 Args: 20 cmd_runner - PrefixedCommandRunner 21 """ 22 # Determine if there are unstaged files 23 retcode, diff_stdout_binary, _ = cmd_runner.run( 24 [ 25 'git', 'diff', '--ignore-submodules', '--binary', '--exit-code', 26 '--no-color', 27 ], 28 retcode=None, 29 encoding=None, 30 ) 31 if retcode and diff_stdout_binary.strip(): 32 patch_filename = cmd_runner.path('patch{0}'.format(int(time.time()))) 33 logger.warning('Unstaged files detected.') 34 logger.info( 35 'Stashing unstaged files to {0}.'.format(patch_filename), 36 ) 37 # Save the current unstaged changes as a patch 38 with io.open(patch_filename, 'wb') as patch_file: 39 patch_file.write(diff_stdout_binary) 40 41 # Clear the working directory of unstaged changes 42 cmd_runner.run(['git', 'checkout', '--', '.']) 43 try: 44 yield 45 finally: 46 # Try to apply the patch we saved 47 try: 48 cmd_runner.run(['git', 'apply', patch_filename]) 49 except CalledProcessError: 50 logger.warning( 51 'Stashed changes conflicted with hook auto-fixes... ' 52 'Rolling back fixes...' 53 ) 54 # We failed to apply the patch, presumably due to fixes made 55 # by hooks. 56 # Roll back the changes made by hooks. 57 cmd_runner.run(['git', 'checkout', '--', '.']) 58 cmd_runner.run(['git', 'apply', patch_filename]) 59 logger.info('Restored changes from {0}.'.format(patch_filename)) 60 else: 61 # There weren't any staged files so we don't need to do anything 62 # special 63 yield 64 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py --- a/pre_commit/staged_files_only.py +++ b/pre_commit/staged_files_only.py @@ -45,7 +45,7 @@ finally: # Try to apply the patch we saved try: - cmd_runner.run(['git', 'apply', patch_filename]) + cmd_runner.run(('git', 'apply', patch_filename), encoding=None) except CalledProcessError: logger.warning( 'Stashed changes conflicted with hook auto-fixes... ' @@ -55,7 +55,7 @@ # by hooks. # Roll back the changes made by hooks. cmd_runner.run(['git', 'checkout', '--', '.']) - cmd_runner.run(['git', 'apply', patch_filename]) + cmd_runner.run(('git', 'apply', patch_filename), encoding=None) logger.info('Restored changes from {0}.'.format(patch_filename)) else: # There weren't any staged files so we don't need to do anything
{"golden_diff": "diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py\n--- a/pre_commit/staged_files_only.py\n+++ b/pre_commit/staged_files_only.py\n@@ -45,7 +45,7 @@\n finally:\n # Try to apply the patch we saved\n try:\n- cmd_runner.run(['git', 'apply', patch_filename])\n+ cmd_runner.run(('git', 'apply', patch_filename), encoding=None)\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... '\n@@ -55,7 +55,7 @@\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(['git', 'checkout', '--', '.'])\n- cmd_runner.run(['git', 'apply', patch_filename])\n+ cmd_runner.run(('git', 'apply', patch_filename), encoding=None)\n logger.info('Restored changes from {0}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n", "issue": "Stashed changes lost if hook fails with non-UTF-8 diff containing trailing whitespace\nHi,\n\nA colleague almost lost all the changes she was working on after launching a `git commit` (with zero file added) and `pre-commit` crashing without restoring its [patch](https://github.com/pre-commit/pre-commit/blob/master/pre_commit/staged_files_only.py#L15).\n\nHere is the terminal message she got:\n\n```\n[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...\nAn unexpected error has occurred: CalledProcessError: Command: ['git', 'apply', 'C:\\\\Users\\\\toto\\\\.pre-commit\\\\patch1471341002']\n```\n\nThis seems very similar to a past solved issue:\nhttps://github.com/pre-commit/pre-commit/issues/176\n\nI think it had to do with CRLF conversion.\nI'm going to try to reproduce this.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport time\n\nfrom pre_commit.util import CalledProcessError\n\n\nlogger = logging.getLogger('pre_commit')\n\n\[email protected]\ndef staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n\n Args:\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n retcode, diff_stdout_binary, _ = cmd_runner.run(\n [\n 'git', 'diff', '--ignore-submodules', '--binary', '--exit-code',\n '--no-color',\n ],\n retcode=None,\n encoding=None,\n )\n if retcode and diff_stdout_binary.strip():\n patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {0}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n with io.open(patch_filename, 'wb') as patch_file:\n patch_file.write(diff_stdout_binary)\n\n # Clear the working directory of unstaged changes\n cmd_runner.run(['git', 'checkout', '--', '.'])\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n cmd_runner.run(['git', 'apply', patch_filename])\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... 
'\n 'Rolling back fixes...'\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(['git', 'checkout', '--', '.'])\n cmd_runner.run(['git', 'apply', patch_filename])\n logger.info('Restored changes from {0}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport time\n\nfrom pre_commit.util import CalledProcessError\n\n\nlogger = logging.getLogger('pre_commit')\n\n\[email protected]\ndef staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n\n Args:\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n retcode, diff_stdout_binary, _ = cmd_runner.run(\n [\n 'git', 'diff', '--ignore-submodules', '--binary', '--exit-code',\n '--no-color',\n ],\n retcode=None,\n encoding=None,\n )\n if retcode and diff_stdout_binary.strip():\n patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {0}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n with io.open(patch_filename, 'wb') as patch_file:\n patch_file.write(diff_stdout_binary)\n\n # Clear the working directory of unstaged changes\n cmd_runner.run(['git', 'checkout', '--', '.'])\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n cmd_runner.run(('git', 'apply', patch_filename), encoding=None)\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... '\n 'Rolling back fixes...'\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(['git', 'checkout', '--', '.'])\n cmd_runner.run(('git', 'apply', patch_filename), encoding=None)\n logger.info('Restored changes from {0}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}]}
1,013
231
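The pre-commit fix above passes `encoding=None` so the `git apply` subprocess is driven in binary mode: the stashed patch was written as raw bytes, and forcing a non-UTF-8 diff through a text decode (or letting CRLF and trailing-whitespace bytes be normalized) corrupts it before Git ever sees it. A standalone sketch of the failure mode — the byte string is an invented example, not taken from the original report:

```python
# A patch fragment that is valid as bytes but not as UTF-8 text; the
# trailing space and the CRLF ending are both significant to `git apply`.
patch = b"+fixed line \xe9\r\n"

raised = False
try:
    patch.decode("utf-8")  # what an implicit text-mode round trip attempts
except UnicodeDecodeError:
    raised = True          # hence encoding=None: hand Git the bytes intact

assert raised
```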
gh_patches_debug_28280
rasdani/github-patches
git_diff
sanic-org__sanic-2154
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Allow later websockets releases **Describe the bug** `websockets` is [pinned](https://github.com/sanic-org/sanic/blob/main/setup.py#L91 ). The latest `websockets` is 9.1 and this release is fixing a [authentication vulnerability](https://websockets.readthedocs.io/en/stable/changelog.html) which was introduced with 8.0. **Expected behavior** Allow to use `websockets>9` **Environment (please complete the following information):** - OS: probably all - Version: current **Additional context** n/a --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `sanic/websocket.py` Content: ``` 1 from typing import ( 2 Any, 3 Awaitable, 4 Callable, 5 Dict, 6 List, 7 MutableMapping, 8 Optional, 9 Union, 10 ) 11 12 from httptools import HttpParserUpgrade # type: ignore 13 from websockets import ( # type: ignore 14 ConnectionClosed, 15 InvalidHandshake, 16 WebSocketCommonProtocol, 17 handshake, 18 ) 19 20 from sanic.exceptions import InvalidUsage 21 from sanic.server import HttpProtocol 22 23 24 __all__ = ["ConnectionClosed", "WebSocketProtocol", "WebSocketConnection"] 25 26 ASIMessage = MutableMapping[str, Any] 27 28 29 class WebSocketProtocol(HttpProtocol): 30 def __init__( 31 self, 32 *args, 33 websocket_timeout=10, 34 websocket_max_size=None, 35 websocket_max_queue=None, 36 websocket_read_limit=2 ** 16, 37 websocket_write_limit=2 ** 16, 38 websocket_ping_interval=20, 39 websocket_ping_timeout=20, 40 **kwargs 41 ): 42 super().__init__(*args, **kwargs) 43 self.websocket = None 44 # self.app = None 45 self.websocket_timeout = websocket_timeout 46 self.websocket_max_size = websocket_max_size 47 self.websocket_max_queue = websocket_max_queue 48 self.websocket_read_limit = websocket_read_limit 49 self.websocket_write_limit = websocket_write_limit 50 self.websocket_ping_interval = websocket_ping_interval 51 self.websocket_ping_timeout = websocket_ping_timeout 52 53 # timeouts make no sense for websocket routes 54 def request_timeout_callback(self): 55 if self.websocket is None: 56 super().request_timeout_callback() 57 58 def response_timeout_callback(self): 59 if self.websocket is None: 60 super().response_timeout_callback() 61 62 def keep_alive_timeout_callback(self): 63 if self.websocket is None: 64 super().keep_alive_timeout_callback() 65 66 def connection_lost(self, exc): 67 if self.websocket is not None: 68 self.websocket.connection_lost(exc) 69 super().connection_lost(exc) 70 71 def data_received(self, data): 72 if self.websocket is not None: 73 # pass the data to the websocket protocol 74 self.websocket.data_received(data) 75 else: 76 try: 77 super().data_received(data) 78 except HttpParserUpgrade: 79 # this is okay, it just indicates we've got an upgrade request 80 pass 81 82 def write_response(self, response): 83 if self.websocket is not None: 84 # websocket requests do not write a response 85 self.transport.close() 86 else: 87 super().write_response(response) 88 89 async def websocket_handshake(self, request, subprotocols=None): 90 # let the websockets package do the handshake with the client 91 headers = {} 92 93 try: 94 key = handshake.check_request(request.headers) 95 handshake.build_response(headers, key) 96 except InvalidHandshake: 97 raise InvalidUsage("Invalid websocket request") 98 99 subprotocol = None 100 if subprotocols and "Sec-Websocket-Protocol" in request.headers: 101 # select a subprotocol 102 client_subprotocols = [ 103 p.strip() 104 for p in request.headers["Sec-Websocket-Protocol"].split(",") 105 ] 106 for p in client_subprotocols: 107 if p in subprotocols: 108 subprotocol = p 109 headers["Sec-Websocket-Protocol"] = subprotocol 110 break 111 112 # write the 101 response back to the client 113 rv = b"HTTP/1.1 101 Switching Protocols\r\n" 114 for k, v in headers.items(): 115 rv += k.encode("utf-8") + b": " + v.encode("utf-8") + b"\r\n" 116 rv += b"\r\n" 117 request.transport.write(rv) 118 119 # hook up the websocket protocol 120 self.websocket = WebSocketCommonProtocol( 121 close_timeout=self.websocket_timeout, 122 max_size=self.websocket_max_size, 123 max_queue=self.websocket_max_queue, 124 read_limit=self.websocket_read_limit, 125 write_limit=self.websocket_write_limit, 126 ping_interval=self.websocket_ping_interval, 127 ping_timeout=self.websocket_ping_timeout, 128 ) 129 # Following two lines are required for websockets 8.x 130 self.websocket.is_client = False 131 self.websocket.side = "server" 132 self.websocket.subprotocol = subprotocol 133 self.websocket.connection_made(request.transport) 134 self.websocket.connection_open() 135 return self.websocket 136 137 138 class WebSocketConnection: 139 140 # TODO 141 # - Implement ping/pong 142 143 def __init__( 144 self, 145 send: Callable[[ASIMessage], Awaitable[None]], 146 receive: Callable[[], Awaitable[ASIMessage]], 147 subprotocols: Optional[List[str]] = None, 148 ) -> None: 149 self._send = send 150 self._receive = receive 151 self.subprotocols = subprotocols or [] 152 153 async def send(self, data: Union[str, bytes], *args, **kwargs) -> None: 154 message: Dict[str, Union[str, bytes]] = {"type": "websocket.send"} 155 156 if isinstance(data, bytes): 157 message.update({"bytes": data}) 158 else: 159 message.update({"text": str(data)}) 160 161 await self._send(message) 162 163 async def recv(self, *args, **kwargs) -> Optional[str]: 164 message = await self._receive() 165 166 if message["type"] == "websocket.receive": 167 return message["text"] 168 elif message["type"] == "websocket.disconnect": 169 pass 170 171 return None 172 173 receive = recv 174 175 async def accept(self) -> None: 176 await self._send( 177 { 178 "type": "websocket.accept", 179 "subprotocol": ",".join(list(self.subprotocols)), 180 } 181 ) 182 183 async def close(self) -> None: 184 pass 185 ``` Path: `setup.py` Content: ``` 1 """ 2 Sanic 3 """ 4 import codecs 5 import os 6 import re 7 import sys 8 9 from distutils.util import strtobool 10 11 from setuptools import find_packages, setup 12 from setuptools.command.test import test as TestCommand 13 14 15 class PyTest(TestCommand): 16 """ 17 Provide a Test runner to be used from setup.py to run unit tests 18 """ 19 20 user_options = [("pytest-args=", "a", "Arguments to pass to pytest")] 21 22 def initialize_options(self): 23 TestCommand.initialize_options(self) 24 self.pytest_args = "" 25 26 def run_tests(self): 27 import shlex 28 29 import pytest 30 31 errno = pytest.main(shlex.split(self.pytest_args)) 32 sys.exit(errno) 33 34 35 def open_local(paths, mode="r", encoding="utf8"): 36 path = os.path.join(os.path.abspath(os.path.dirname(__file__)), *paths) 37 38 return codecs.open(path, mode, encoding) 39 40 41 with open_local(["sanic", "__version__.py"], encoding="latin1") as fp: 42 try: 43 version = re.findall( 44 r"^__version__ = \"([^']+)\"\r?$", fp.read(), re.M 45 )[0] 46 except IndexError: 47 raise RuntimeError("Unable to determine version.") 48 49 with open_local(["README.rst"]) as rm: 50 long_description = rm.read() 51 52 setup_kwargs = { 53 "name": "sanic", 54 "version": version, 55 "url": "http://github.com/sanic-org/sanic/", 56 "license": "MIT", 57 "author": "Sanic Community", 58 "author_email": "[email protected]", 59 "description": ( 60 "A web server and web framework that's written to go fast. " 61 "Build fast. Run fast." 62 ), 63 "long_description": long_description, 64 "packages": find_packages(), 65 "package_data": {"sanic": ["py.typed"]}, 66 "platforms": "any", 67 "python_requires": ">=3.7", 68 "classifiers": [ 69 "Development Status :: 4 - Beta", 70 "Environment :: Web Environment", 71 "License :: OSI Approved :: MIT License", 72 "Programming Language :: Python :: 3.7", 73 "Programming Language :: Python :: 3.8", 74 "Programming Language :: Python :: 3.9", 75 ], 76 "entry_points": {"console_scripts": ["sanic = sanic.__main__:main"]}, 77 } 78 79 env_dependency = ( 80 '; sys_platform != "win32" ' 'and implementation_name == "cpython"' 81 ) 82 ujson = "ujson>=1.35" + env_dependency 83 uvloop = "uvloop>=0.5.3" + env_dependency 84 85 requirements = [ 86 "sanic-routing>=0.6.0", 87 "httptools>=0.0.10", 88 uvloop, 89 ujson, 90 "aiofiles>=0.6.0", 91 "websockets>=8.1,<9.0", 92 "multidict>=5.0,<6.0", 93 ] 94 95 tests_require = [ 96 "sanic-testing", 97 "pytest==5.2.1", 98 "multidict>=5.0,<6.0", 99 "gunicorn==20.0.4", 100 "pytest-cov", 101 "beautifulsoup4", 102 uvloop, 103 ujson, 104 "pytest-sanic", 105 "pytest-sugar", 106 "pytest-benchmark", 107 ] 108 109 docs_require = [ 110 "sphinx>=2.1.2", 111 "sphinx_rtd_theme", 112 "recommonmark>=0.5.0", 113 "docutils", 114 "pygments", 115 ] 116 117 dev_require = tests_require + [ 118 "aiofiles", 119 "tox", 120 "black", 121 "flake8", 122 "bandit", 123 "towncrier", 124 ] 125 126 all_require = dev_require + docs_require 127 128 if strtobool(os.environ.get("SANIC_NO_UJSON", "no")): 129 print("Installing without uJSON") 130 requirements.remove(ujson) 131 tests_require.remove(ujson) 132 133 # 'nt' means windows OS 134 if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")): 135 print("Installing without uvLoop") 136 requirements.remove(uvloop) 137 tests_require.remove(uvloop) 138 139 extras_require = { 140 "test": tests_require, 141 "dev": dev_require, 142 "docs": docs_require, 143 "all": all_require, 144 } 145 146 setup_kwargs["install_requires"] = requirements 147 setup_kwargs["tests_require"] = tests_require 148 setup_kwargs["extras_require"] = extras_require 149 setup_kwargs["cmdclass"] = {"test": PyTest} 150 setup(**setup_kwargs) 151 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/sanic/websocket.py b/sanic/websocket.py --- a/sanic/websocket.py +++ b/sanic/websocket.py @@ -14,9 +14,13 @@ ConnectionClosed, InvalidHandshake, WebSocketCommonProtocol, - handshake, ) +# Despite the "legacy" namespace, the primary maintainer of websockets +# committed to maintaining backwards-compatibility until 2026 and will +# consider extending it if sanic continues depending on this module. +from websockets.legacy import handshake + from sanic.exceptions import InvalidUsage from sanic.server import HttpProtocol @@ -126,7 +130,9 @@ ping_interval=self.websocket_ping_interval, ping_timeout=self.websocket_ping_timeout, ) - # Following two lines are required for websockets 8.x + # we use WebSocketCommonProtocol because we don't want the handshake + # logic from WebSocketServerProtocol; however, we must tell it that + # we're running on the server side self.websocket.is_client = False self.websocket.side = "server" self.websocket.subprotocol = subprotocol diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -88,12 +88,12 @@ uvloop, ujson, "aiofiles>=0.6.0", - "websockets>=8.1,<9.0", + "websockets>=9.0", "multidict>=5.0,<6.0", ] tests_require = [ - "sanic-testing", + "sanic-testing>=0.6.0", "pytest==5.2.1", "multidict>=5.0,<6.0", "gunicorn==20.0.4",
{"golden_diff": "diff --git a/sanic/websocket.py b/sanic/websocket.py\n--- a/sanic/websocket.py\n+++ b/sanic/websocket.py\n@@ -14,9 +14,13 @@\n ConnectionClosed,\n InvalidHandshake,\n WebSocketCommonProtocol,\n- handshake,\n )\n \n+# Despite the \"legacy\" namespace, the primary maintainer of websockets\n+# committed to maintaining backwards-compatibility until 2026 and will\n+# consider extending it if sanic continues depending on this module.\n+from websockets.legacy import handshake\n+\n from sanic.exceptions import InvalidUsage\n from sanic.server import HttpProtocol\n \n@@ -126,7 +130,9 @@\n ping_interval=self.websocket_ping_interval,\n ping_timeout=self.websocket_ping_timeout,\n )\n- # Following two lines are required for websockets 8.x\n+ # we use WebSocketCommonProtocol because we don't want the handshake\n+ # logic from WebSocketServerProtocol; however, we must tell it that\n+ # we're running on the server side\n self.websocket.is_client = False\n self.websocket.side = \"server\"\n self.websocket.subprotocol = subprotocol\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -88,12 +88,12 @@\n uvloop,\n ujson,\n \"aiofiles>=0.6.0\",\n- \"websockets>=8.1,<9.0\",\n+ \"websockets>=9.0\",\n \"multidict>=5.0,<6.0\",\n ]\n \n tests_require = [\n- \"sanic-testing\",\n+ \"sanic-testing>=0.6.0\",\n \"pytest==5.2.1\",\n \"multidict>=5.0,<6.0\",\n \"gunicorn==20.0.4\",\n", "issue": "Allow later websockets releases\n**Describe the bug**\r\n`websockets` is [pinned](https://github.com/sanic-org/sanic/blob/main/setup.py#L91\r\n). The latest `websockets` is 9.1 and this release is fixing a [authentication vulnerability](https://websockets.readthedocs.io/en/stable/changelog.html) which was introduced with 8.0.\r\n\r\n**Expected behavior**\r\nAllow to use `websockets>9`\r\n\r\n**Environment (please complete the following information):**\r\n - OS: probably all\r\n - Version: current\r\n\r\n**Additional context**\r\nn/a\n", "before_files": [{"content": "from typing import (\n Any,\n Awaitable,\n Callable,\n Dict,\n List,\n MutableMapping,\n Optional,\n Union,\n)\n\nfrom httptools import HttpParserUpgrade # type: ignore\nfrom websockets import ( # type: ignore\n ConnectionClosed,\n InvalidHandshake,\n WebSocketCommonProtocol,\n handshake,\n)\n\nfrom sanic.exceptions import InvalidUsage\nfrom sanic.server import HttpProtocol\n\n\n__all__ = [\"ConnectionClosed\", \"WebSocketProtocol\", \"WebSocketConnection\"]\n\nASIMessage = MutableMapping[str, Any]\n\n\nclass WebSocketProtocol(HttpProtocol):\n def __init__(\n self,\n *args,\n websocket_timeout=10,\n websocket_max_size=None,\n websocket_max_queue=None,\n websocket_read_limit=2 ** 16,\n websocket_write_limit=2 ** 16,\n websocket_ping_interval=20,\n websocket_ping_timeout=20,\n **kwargs\n ):\n super().__init__(*args, **kwargs)\n self.websocket = None\n # self.app = None\n self.websocket_timeout = websocket_timeout\n self.websocket_max_size = websocket_max_size\n self.websocket_max_queue = websocket_max_queue\n self.websocket_read_limit = websocket_read_limit\n self.websocket_write_limit = websocket_write_limit\n self.websocket_ping_interval = websocket_ping_interval\n self.websocket_ping_timeout = websocket_ping_timeout\n\n # timeouts make no sense for websocket routes\n def request_timeout_callback(self):\n if self.websocket is None:\n super().request_timeout_callback()\n\n def response_timeout_callback(self):\n if self.websocket is None:\n super().response_timeout_callback()\n\n def keep_alive_timeout_callback(self):\n if self.websocket 
is None:\n super().keep_alive_timeout_callback()\n\n def connection_lost(self, exc):\n if self.websocket is not None:\n self.websocket.connection_lost(exc)\n super().connection_lost(exc)\n\n def data_received(self, data):\n if self.websocket is not None:\n # pass the data to the websocket protocol\n self.websocket.data_received(data)\n else:\n try:\n super().data_received(data)\n except HttpParserUpgrade:\n # this is okay, it just indicates we've got an upgrade request\n pass\n\n def write_response(self, response):\n if self.websocket is not None:\n # websocket requests do not write a response\n self.transport.close()\n else:\n super().write_response(response)\n\n async def websocket_handshake(self, request, subprotocols=None):\n # let the websockets package do the handshake with the client\n headers = {}\n\n try:\n key = handshake.check_request(request.headers)\n handshake.build_response(headers, key)\n except InvalidHandshake:\n raise InvalidUsage(\"Invalid websocket request\")\n\n subprotocol = None\n if subprotocols and \"Sec-Websocket-Protocol\" in request.headers:\n # select a subprotocol\n client_subprotocols = [\n p.strip()\n for p in request.headers[\"Sec-Websocket-Protocol\"].split(\",\")\n ]\n for p in client_subprotocols:\n if p in subprotocols:\n subprotocol = p\n headers[\"Sec-Websocket-Protocol\"] = subprotocol\n break\n\n # write the 101 response back to the client\n rv = b\"HTTP/1.1 101 Switching Protocols\\r\\n\"\n for k, v in headers.items():\n rv += k.encode(\"utf-8\") + b\": \" + v.encode(\"utf-8\") + b\"\\r\\n\"\n rv += b\"\\r\\n\"\n request.transport.write(rv)\n\n # hook up the websocket protocol\n self.websocket = WebSocketCommonProtocol(\n close_timeout=self.websocket_timeout,\n max_size=self.websocket_max_size,\n max_queue=self.websocket_max_queue,\n read_limit=self.websocket_read_limit,\n write_limit=self.websocket_write_limit,\n ping_interval=self.websocket_ping_interval,\n ping_timeout=self.websocket_ping_timeout,\n )\n # Following two lines are required for websockets 8.x\n self.websocket.is_client = False\n self.websocket.side = \"server\"\n self.websocket.subprotocol = subprotocol\n self.websocket.connection_made(request.transport)\n self.websocket.connection_open()\n return self.websocket\n\n\nclass WebSocketConnection:\n\n # TODO\n # - Implement ping/pong\n\n def __init__(\n self,\n send: Callable[[ASIMessage], Awaitable[None]],\n receive: Callable[[], Awaitable[ASIMessage]],\n subprotocols: Optional[List[str]] = None,\n ) -> None:\n self._send = send\n self._receive = receive\n self.subprotocols = subprotocols or []\n\n async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:\n message: Dict[str, Union[str, bytes]] = {\"type\": \"websocket.send\"}\n\n if isinstance(data, bytes):\n message.update({\"bytes\": data})\n else:\n message.update({\"text\": str(data)})\n\n await self._send(message)\n\n async def recv(self, *args, **kwargs) -> Optional[str]:\n message = await self._receive()\n\n if message[\"type\"] == \"websocket.receive\":\n return message[\"text\"]\n elif message[\"type\"] == \"websocket.disconnect\":\n pass\n\n return None\n\n receive = recv\n\n async def accept(self) -> None:\n await self._send(\n {\n \"type\": \"websocket.accept\",\n \"subprotocol\": \",\".join(list(self.subprotocols)),\n }\n )\n\n async def close(self) -> None:\n pass\n", "path": "sanic/websocket.py"}, {"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nimport sys\n\nfrom distutils.util import strtobool\n\nfrom setuptools import 
find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\n\nclass PyTest(TestCommand):\n \"\"\"\n Provide a Test runner to be used from setup.py to run unit tests\n \"\"\"\n\n user_options = [(\"pytest-args=\", \"a\", \"Arguments to pass to pytest\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.pytest_args = \"\"\n\n def run_tests(self):\n import shlex\n\n import pytest\n\n errno = pytest.main(shlex.split(self.pytest_args))\n sys.exit(errno)\n\n\ndef open_local(paths, mode=\"r\", encoding=\"utf8\"):\n path = os.path.join(os.path.abspath(os.path.dirname(__file__)), *paths)\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local([\"sanic\", \"__version__.py\"], encoding=\"latin1\") as fp:\n try:\n version = re.findall(\n r\"^__version__ = \\\"([^']+)\\\"\\r?$\", fp.read(), re.M\n )[0]\n except IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\nwith open_local([\"README.rst\"]) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n \"name\": \"sanic\",\n \"version\": version,\n \"url\": \"http://github.com/sanic-org/sanic/\",\n \"license\": \"MIT\",\n \"author\": \"Sanic Community\",\n \"author_email\": \"[email protected]\",\n \"description\": (\n \"A web server and web framework that's written to go fast. \"\n \"Build fast. Run fast.\"\n ),\n \"long_description\": long_description,\n \"packages\": find_packages(),\n \"package_data\": {\"sanic\": [\"py.typed\"]},\n \"platforms\": \"any\",\n \"python_requires\": \">=3.7\",\n \"classifiers\": [\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n \"entry_points\": {\"console_scripts\": [\"sanic = sanic.__main__:main\"]},\n}\n\nenv_dependency = (\n '; sys_platform != \"win32\" ' 'and implementation_name == \"cpython\"'\n)\nujson = \"ujson>=1.35\" + env_dependency\nuvloop = \"uvloop>=0.5.3\" + env_dependency\n\nrequirements = [\n \"sanic-routing>=0.6.0\",\n \"httptools>=0.0.10\",\n uvloop,\n ujson,\n \"aiofiles>=0.6.0\",\n \"websockets>=8.1,<9.0\",\n \"multidict>=5.0,<6.0\",\n]\n\ntests_require = [\n \"sanic-testing\",\n \"pytest==5.2.1\",\n \"multidict>=5.0,<6.0\",\n \"gunicorn==20.0.4\",\n \"pytest-cov\",\n \"beautifulsoup4\",\n uvloop,\n ujson,\n \"pytest-sanic\",\n \"pytest-sugar\",\n \"pytest-benchmark\",\n]\n\ndocs_require = [\n \"sphinx>=2.1.2\",\n \"sphinx_rtd_theme\",\n \"recommonmark>=0.5.0\",\n \"docutils\",\n \"pygments\",\n]\n\ndev_require = tests_require + [\n \"aiofiles\",\n \"tox\",\n \"black\",\n \"flake8\",\n \"bandit\",\n \"towncrier\",\n]\n\nall_require = dev_require + docs_require\n\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n tests_require.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n tests_require.remove(uvloop)\n\nextras_require = {\n \"test\": tests_require,\n \"dev\": dev_require,\n \"docs\": docs_require,\n \"all\": all_require,\n}\n\nsetup_kwargs[\"install_requires\"] = requirements\nsetup_kwargs[\"tests_require\"] = tests_require\nsetup_kwargs[\"extras_require\"] = extras_require\nsetup_kwargs[\"cmdclass\"] = {\"test\": PyTest}\nsetup(**setup_kwargs)\n", "path": "setup.py"}], "after_files": [{"content": "from 
typing import (\n Any,\n Awaitable,\n Callable,\n Dict,\n List,\n MutableMapping,\n Optional,\n Union,\n)\n\nfrom httptools import HttpParserUpgrade # type: ignore\nfrom websockets import ( # type: ignore\n ConnectionClosed,\n InvalidHandshake,\n WebSocketCommonProtocol,\n)\n\n# Despite the \"legacy\" namespace, the primary maintainer of websockets\n# committed to maintaining backwards-compatibility until 2026 and will\n# consider extending it if sanic continues depending on this module.\nfrom websockets.legacy import handshake\n\nfrom sanic.exceptions import InvalidUsage\nfrom sanic.server import HttpProtocol\n\n\n__all__ = [\"ConnectionClosed\", \"WebSocketProtocol\", \"WebSocketConnection\"]\n\nASIMessage = MutableMapping[str, Any]\n\n\nclass WebSocketProtocol(HttpProtocol):\n def __init__(\n self,\n *args,\n websocket_timeout=10,\n websocket_max_size=None,\n websocket_max_queue=None,\n websocket_read_limit=2 ** 16,\n websocket_write_limit=2 ** 16,\n websocket_ping_interval=20,\n websocket_ping_timeout=20,\n **kwargs\n ):\n super().__init__(*args, **kwargs)\n self.websocket = None\n # self.app = None\n self.websocket_timeout = websocket_timeout\n self.websocket_max_size = websocket_max_size\n self.websocket_max_queue = websocket_max_queue\n self.websocket_read_limit = websocket_read_limit\n self.websocket_write_limit = websocket_write_limit\n self.websocket_ping_interval = websocket_ping_interval\n self.websocket_ping_timeout = websocket_ping_timeout\n\n # timeouts make no sense for websocket routes\n def request_timeout_callback(self):\n if self.websocket is None:\n super().request_timeout_callback()\n\n def response_timeout_callback(self):\n if self.websocket is None:\n super().response_timeout_callback()\n\n def keep_alive_timeout_callback(self):\n if self.websocket is None:\n super().keep_alive_timeout_callback()\n\n def connection_lost(self, exc):\n if self.websocket is not None:\n self.websocket.connection_lost(exc)\n super().connection_lost(exc)\n\n def data_received(self, data):\n if self.websocket is not None:\n # pass the data to the websocket protocol\n self.websocket.data_received(data)\n else:\n try:\n super().data_received(data)\n except HttpParserUpgrade:\n # this is okay, it just indicates we've got an upgrade request\n pass\n\n def write_response(self, response):\n if self.websocket is not None:\n # websocket requests do not write a response\n self.transport.close()\n else:\n super().write_response(response)\n\n async def websocket_handshake(self, request, subprotocols=None):\n # let the websockets package do the handshake with the client\n headers = {}\n\n try:\n key = handshake.check_request(request.headers)\n handshake.build_response(headers, key)\n except InvalidHandshake:\n raise InvalidUsage(\"Invalid websocket request\")\n\n subprotocol = None\n if subprotocols and \"Sec-Websocket-Protocol\" in request.headers:\n # select a subprotocol\n client_subprotocols = [\n p.strip()\n for p in request.headers[\"Sec-Websocket-Protocol\"].split(\",\")\n ]\n for p in client_subprotocols:\n if p in subprotocols:\n subprotocol = p\n headers[\"Sec-Websocket-Protocol\"] = subprotocol\n break\n\n # write the 101 response back to the client\n rv = b\"HTTP/1.1 101 Switching Protocols\\r\\n\"\n for k, v in headers.items():\n rv += k.encode(\"utf-8\") + b\": \" + v.encode(\"utf-8\") + b\"\\r\\n\"\n rv += b\"\\r\\n\"\n request.transport.write(rv)\n\n # hook up the websocket protocol\n self.websocket = WebSocketCommonProtocol(\n close_timeout=self.websocket_timeout,\n 
max_size=self.websocket_max_size,\n max_queue=self.websocket_max_queue,\n read_limit=self.websocket_read_limit,\n write_limit=self.websocket_write_limit,\n ping_interval=self.websocket_ping_interval,\n ping_timeout=self.websocket_ping_timeout,\n )\n # we use WebSocketCommonProtocol because we don't want the handshake\n # logic from WebSocketServerProtocol; however, we must tell it that\n # we're running on the server side\n self.websocket.is_client = False\n self.websocket.side = \"server\"\n self.websocket.subprotocol = subprotocol\n self.websocket.connection_made(request.transport)\n self.websocket.connection_open()\n return self.websocket\n\n\nclass WebSocketConnection:\n\n # TODO\n # - Implement ping/pong\n\n def __init__(\n self,\n send: Callable[[ASIMessage], Awaitable[None]],\n receive: Callable[[], Awaitable[ASIMessage]],\n subprotocols: Optional[List[str]] = None,\n ) -> None:\n self._send = send\n self._receive = receive\n self.subprotocols = subprotocols or []\n\n async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:\n message: Dict[str, Union[str, bytes]] = {\"type\": \"websocket.send\"}\n\n if isinstance(data, bytes):\n message.update({\"bytes\": data})\n else:\n message.update({\"text\": str(data)})\n\n await self._send(message)\n\n async def recv(self, *args, **kwargs) -> Optional[str]:\n message = await self._receive()\n\n if message[\"type\"] == \"websocket.receive\":\n return message[\"text\"]\n elif message[\"type\"] == \"websocket.disconnect\":\n pass\n\n return None\n\n receive = recv\n\n async def accept(self) -> None:\n await self._send(\n {\n \"type\": \"websocket.accept\",\n \"subprotocol\": \",\".join(list(self.subprotocols)),\n }\n )\n\n async def close(self) -> None:\n pass\n", "path": "sanic/websocket.py"}, {"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nimport sys\n\nfrom distutils.util import strtobool\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\n\nclass PyTest(TestCommand):\n \"\"\"\n Provide a Test runner to be used from setup.py to run unit tests\n \"\"\"\n\n user_options = [(\"pytest-args=\", \"a\", \"Arguments to pass to pytest\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.pytest_args = \"\"\n\n def run_tests(self):\n import shlex\n\n import pytest\n\n errno = pytest.main(shlex.split(self.pytest_args))\n sys.exit(errno)\n\n\ndef open_local(paths, mode=\"r\", encoding=\"utf8\"):\n path = os.path.join(os.path.abspath(os.path.dirname(__file__)), *paths)\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local([\"sanic\", \"__version__.py\"], encoding=\"latin1\") as fp:\n try:\n version = re.findall(\n r\"^__version__ = \\\"([^']+)\\\"\\r?$\", fp.read(), re.M\n )[0]\n except IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\nwith open_local([\"README.rst\"]) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n \"name\": \"sanic\",\n \"version\": version,\n \"url\": \"http://github.com/sanic-org/sanic/\",\n \"license\": \"MIT\",\n \"author\": \"Sanic Community\",\n \"author_email\": \"[email protected]\",\n \"description\": (\n \"A web server and web framework that's written to go fast. \"\n \"Build fast. 
Run fast.\"\n ),\n \"long_description\": long_description,\n \"packages\": find_packages(),\n \"package_data\": {\"sanic\": [\"py.typed\"]},\n \"platforms\": \"any\",\n \"python_requires\": \">=3.7\",\n \"classifiers\": [\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n \"entry_points\": {\"console_scripts\": [\"sanic = sanic.__main__:main\"]},\n}\n\nenv_dependency = (\n '; sys_platform != \"win32\" ' 'and implementation_name == \"cpython\"'\n)\nujson = \"ujson>=1.35\" + env_dependency\nuvloop = \"uvloop>=0.5.3\" + env_dependency\n\nrequirements = [\n \"sanic-routing>=0.6.0\",\n \"httptools>=0.0.10\",\n uvloop,\n ujson,\n \"aiofiles>=0.6.0\",\n \"websockets>=9.0\",\n \"multidict>=5.0,<6.0\",\n]\n\ntests_require = [\n \"sanic-testing>=0.6.0\",\n \"pytest==5.2.1\",\n \"multidict>=5.0,<6.0\",\n \"gunicorn==20.0.4\",\n \"pytest-cov\",\n \"beautifulsoup4\",\n uvloop,\n ujson,\n \"pytest-sanic\",\n \"pytest-sugar\",\n \"pytest-benchmark\",\n]\n\ndocs_require = [\n \"sphinx>=2.1.2\",\n \"sphinx_rtd_theme\",\n \"recommonmark>=0.5.0\",\n \"docutils\",\n \"pygments\",\n]\n\ndev_require = tests_require + [\n \"aiofiles\",\n \"tox\",\n \"black\",\n \"flake8\",\n \"bandit\",\n \"towncrier\",\n]\n\nall_require = dev_require + docs_require\n\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n tests_require.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n tests_require.remove(uvloop)\n\nextras_require = {\n \"test\": tests_require,\n \"dev\": dev_require,\n \"docs\": docs_require,\n \"all\": all_require,\n}\n\nsetup_kwargs[\"install_requires\"] = requirements\nsetup_kwargs[\"tests_require\"] = tests_require\nsetup_kwargs[\"extras_require\"] = extras_require\nsetup_kwargs[\"cmdclass\"] = {\"test\": PyTest}\nsetup(**setup_kwargs)\n", "path": "setup.py"}]}
3,403
406
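A minimal sketch of the migration pattern behind the record above, not part of the dataset record itself: the golden diff moves the `handshake` import to `websockets.legacy`, and one hedged way to tolerate both the pinned (<9.0) and newer (>=9.0) releases during a transition is an import shim. Only the two module paths named in the record are assumed to exist; everything else is illustrative.
```python
# Hypothetical compatibility shim (assumption: only the import paths named
# in the record's golden diff are real; the fallback order is a sketch).
try:
    # websockets >= 9.0 keeps the old handshake helpers under "legacy"
    from websockets.legacy import handshake
except ImportError:
    # websockets < 9.0 exposed them at the package top level
    from websockets import handshake  # type: ignore

# Either branch yields a module exposing check_request()/build_response(),
# which is what sanic/websocket.py calls during the HTTP upgrade.
```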
gh_patches_debug_65236
rasdani/github-patches
git_diff
streamlink__streamlink-5698
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- plugins.btv: No playable streams found ### Checklist - [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose) - [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink) - [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22) - [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master) ### Streamlink version Your Streamlink version (6.4.2+1.g7e722ec1) is up to date! ### Description The plug-in does not display video. It displays errors shown in the logs below. ### Debug log ```text streamlink --loglevel=debug "https://btvplus.bg/live/" best [cli][debug] OS: Linux-6.2.0-35-generic-x86_64-with-glibc2.35 [cli][debug] Python: 3.10.12 [cli][debug] OpenSSL: OpenSSL 3.0.2 15 Mar 2022 [cli][debug] Streamlink: 6.4.2+1.g7e722ec1 [cli][debug] Dependencies: [cli][debug] certifi: 2023.5.7 [cli][debug] isodate: 0.6.1 [cli][debug] lxml: 4.8.0 [cli][debug] pycountry: 20.7.3 [cli][debug] pycryptodome: 3.17 [cli][debug] PySocks: 1.7.1 [cli][debug] requests: 2.31.0 [cli][debug] trio: 0.22.2 [cli][debug] trio-websocket: 0.10.3 [cli][debug] typing-extensions: 4.7.1 [cli][debug] urllib3: 1.26.16 [cli][debug] websocket-client: 1.2.3 [cli][debug] Arguments: [cli][debug] url=https://btvplus.bg/live/ [cli][debug] stream=['best'] [cli][debug] --loglevel=debug [cli][info] Found matching plugin btv for URL https://btvplus.bg/live/ [cli][info] Available streams: live (worst, best) [cli][info] Opening stream: live (hls) [cli][info] Starting player: /usr/bin/vlc [stream.hls][debug] Reloading playlist [cli][debug] Pre-buffering 8192 bytes [stream.hls][error] Attempted to play a variant playlist, use 'hls://https://cdn.bweb.bg/live/PhRBlmfjy0uVGxaj1_BMiw/1701627017/61065646.m3u8' instead [stream.segmented][debug] Closing worker thread [stream.segmented][debug] Closing writer thread [cli][error] Try 1/1: Could not open stream <HLSStream ['hls', 'https://cdn.bweb.bg/live/PhRBlmfjy0uVGxaj1_BMiw/1701627017/61065646.m3u8']> (No data returned from stream) error: Could not open stream <HLSStream ['hls', 'https://cdn.bweb.bg/live/PhRBlmfjy0uVGxaj1_BMiw/1701627017/61065646.m3u8']>, tried 1 times, exiting [cli][info] Closing currently open stream... ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/streamlink/plugins/btv.py` Content: ``` 1 """ 2 $description A privately owned Bulgarian live TV channel. 3 $url btvplus.bg 4 $type live 5 $region Bulgaria 6 """ 7 8 import logging 9 import re 10 11 from streamlink.plugin import Plugin, pluginmatcher 12 from streamlink.plugin.api import validate 13 from streamlink.stream.hls import HLSStream 14 15 16 log = logging.getLogger(__name__) 17 18 19 @pluginmatcher(re.compile( 20 r"https?://(?:www\.)?btvplus\.bg/live/?", 21 )) 22 class BTV(Plugin): 23 URL_API = "https://btvplus.bg/lbin/v3/btvplus/player_config.php" 24 25 def _get_streams(self): 26 media_id = self.session.http.get(self.url, schema=validate.Schema( 27 re.compile(r"media_id=(\d+)"), 28 validate.any(None, validate.get(1)), 29 )) 30 if media_id is None: 31 return 32 33 stream_url = self.session.http.get( 34 self.URL_API, 35 params={ 36 "media_id": media_id, 37 }, 38 schema=validate.Schema( 39 validate.any( 40 validate.all( 41 validate.regex(re.compile(r"geo_blocked_stream")), 42 validate.get(0), 43 ), 44 validate.all( 45 validate.parse_json(), 46 { 47 "status": "ok", 48 "info": { 49 "file": validate.url(path=validate.endswith(".m3u8")), 50 }, 51 }, 52 validate.get(("info", "file")), 53 ), 54 ), 55 ), 56 ) 57 if not stream_url: 58 return 59 60 if stream_url == "geo_blocked_stream": 61 log.error("The content is not available in your region") 62 return 63 64 return {"live": HLSStream(self.session, stream_url)} 65 66 67 __plugin__ = BTV 68 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/streamlink/plugins/btv.py b/src/streamlink/plugins/btv.py --- a/src/streamlink/plugins/btv.py +++ b/src/streamlink/plugins/btv.py @@ -61,7 +61,7 @@ log.error("The content is not available in your region") return - return {"live": HLSStream(self.session, stream_url)} + return HLSStream.parse_variant_playlist(self.session, stream_url) __plugin__ = BTV
{"golden_diff": "diff --git a/src/streamlink/plugins/btv.py b/src/streamlink/plugins/btv.py\n--- a/src/streamlink/plugins/btv.py\n+++ b/src/streamlink/plugins/btv.py\n@@ -61,7 +61,7 @@\n log.error(\"The content is not available in your region\")\n return\n \n- return {\"live\": HLSStream(self.session, stream_url)}\n+ return HLSStream.parse_variant_playlist(self.session, stream_url)\n \n \n __plugin__ = BTV\n", "issue": "plugins.btv: No playable streams found\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nYour Streamlink version (6.4.2+1.g7e722ec1) is up to date!\n\n### Description\n\nThe plug-in does not display video. It displays errors shown in the logs below.\r\n\n\n### Debug log\n\n```text\nstreamlink --loglevel=debug \"https://btvplus.bg/live/\" best\r\n[cli][debug] OS: Linux-6.2.0-35-generic-x86_64-with-glibc2.35\r\n[cli][debug] Python: 3.10.12\r\n[cli][debug] OpenSSL: OpenSSL 3.0.2 15 Mar 2022\r\n[cli][debug] Streamlink: 6.4.2+1.g7e722ec1\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.5.7\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.8.0\r\n[cli][debug] pycountry: 20.7.3\r\n[cli][debug] pycryptodome: 3.17\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] trio: 0.22.2\r\n[cli][debug] trio-websocket: 0.10.3\r\n[cli][debug] typing-extensions: 4.7.1\r\n[cli][debug] urllib3: 1.26.16\r\n[cli][debug] websocket-client: 1.2.3\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://btvplus.bg/live/\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin btv for URL https://btvplus.bg/live/\r\n[cli][info] Available streams: live (worst, best)\r\n[cli][info] Opening stream: live (hls)\r\n[cli][info] Starting player: /usr/bin/vlc\r\n[stream.hls][debug] Reloading playlist\r\n[cli][debug] Pre-buffering 8192 bytes\r\n[stream.hls][error] Attempted to play a variant playlist, use 'hls://https://cdn.bweb.bg/live/PhRBlmfjy0uVGxaj1_BMiw/1701627017/61065646.m3u8' instead\r\n[stream.segmented][debug] Closing worker thread\r\n[stream.segmented][debug] Closing writer thread\r\n[cli][error] Try 1/1: Could not open stream <HLSStream ['hls', 'https://cdn.bweb.bg/live/PhRBlmfjy0uVGxaj1_BMiw/1701627017/61065646.m3u8']> (No data returned from stream)\r\nerror: Could not open stream <HLSStream ['hls', 'https://cdn.bweb.bg/live/PhRBlmfjy0uVGxaj1_BMiw/1701627017/61065646.m3u8']>, tried 1 times, exiting\r\n[cli][info] Closing currently open stream...\n```\n\n", "before_files": [{"content": "\"\"\"\n$description A privately owned Bulgarian live TV channel.\n$url btvplus.bg\n$type live\n$region Bulgaria\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?btvplus\\.bg/live/?\",\n))\nclass BTV(Plugin):\n URL_API = 
\"https://btvplus.bg/lbin/v3/btvplus/player_config.php\"\n\n def _get_streams(self):\n media_id = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"media_id=(\\d+)\"),\n validate.any(None, validate.get(1)),\n ))\n if media_id is None:\n return\n\n stream_url = self.session.http.get(\n self.URL_API,\n params={\n \"media_id\": media_id,\n },\n schema=validate.Schema(\n validate.any(\n validate.all(\n validate.regex(re.compile(r\"geo_blocked_stream\")),\n validate.get(0),\n ),\n validate.all(\n validate.parse_json(),\n {\n \"status\": \"ok\",\n \"info\": {\n \"file\": validate.url(path=validate.endswith(\".m3u8\")),\n },\n },\n validate.get((\"info\", \"file\")),\n ),\n ),\n ),\n )\n if not stream_url:\n return\n\n if stream_url == \"geo_blocked_stream\":\n log.error(\"The content is not available in your region\")\n return\n\n return {\"live\": HLSStream(self.session, stream_url)}\n\n\n__plugin__ = BTV\n", "path": "src/streamlink/plugins/btv.py"}], "after_files": [{"content": "\"\"\"\n$description A privately owned Bulgarian live TV channel.\n$url btvplus.bg\n$type live\n$region Bulgaria\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?btvplus\\.bg/live/?\",\n))\nclass BTV(Plugin):\n URL_API = \"https://btvplus.bg/lbin/v3/btvplus/player_config.php\"\n\n def _get_streams(self):\n media_id = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"media_id=(\\d+)\"),\n validate.any(None, validate.get(1)),\n ))\n if media_id is None:\n return\n\n stream_url = self.session.http.get(\n self.URL_API,\n params={\n \"media_id\": media_id,\n },\n schema=validate.Schema(\n validate.any(\n validate.all(\n validate.regex(re.compile(r\"geo_blocked_stream\")),\n validate.get(0),\n ),\n validate.all(\n validate.parse_json(),\n {\n \"status\": \"ok\",\n \"info\": {\n \"file\": validate.url(path=validate.endswith(\".m3u8\")),\n },\n },\n validate.get((\"info\", \"file\")),\n ),\n ),\n ),\n )\n if not stream_url:\n return\n\n if stream_url == \"geo_blocked_stream\":\n log.error(\"The content is not available in your region\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, stream_url)\n\n\n__plugin__ = BTV\n", "path": "src/streamlink/plugins/btv.py"}]}
1,695
104
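A hedged sketch of the pattern behind the fix above, not part of the dataset record: the log's "Attempted to play a variant playlist" error means the resolved URL is a master playlist, and `HLSStream.parse_variant_playlist` (the call the golden diff switches to) expands it into one named stream per quality. The fallback branch and the helper name are assumptions for illustration.
```python
from streamlink.session import Streamlink
from streamlink.stream.hls import HLSStream

def resolve_hls(session: Streamlink, url: str) -> dict:
    # Expand a master/variant playlist into {"720p": <stream>, ...};
    # parse_variant_playlist returns an empty dict when no variants exist.
    streams = HLSStream.parse_variant_playlist(session, url)
    # Assumption: fall back to the raw media playlist when the URL is not
    # a master playlist (the case the original plugin code handled).
    return streams or {"live": HLSStream(session, url)}
```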
gh_patches_debug_7467
rasdani/github-patches
git_diff
sublimelsp__LSP-660
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- when cancelling the symbols panel, the last symbol is selected https://github.com/tomv564/LSP/blob/be904c56fddf35f724486de405a168786ed4ffeb/plugin/symbols.py#L82-L92 ```diff def on_symbol_selected(self, symbol_index): + if symbol_index == -1: + return selected_symbol = self.symbols[symbol_index] range = selected_symbol.get('location', selected_symbol.get('range')) range = range.get('range', range) ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `plugin/symbols.py` Content: ``` 1 from .core.logging import debug 2 from .core.protocol import Request, Range 3 from .core.protocol import SymbolKind 4 from .core.registry import client_for_view, LspTextCommand 5 from .core.url import filename_to_uri 6 from .core.views import range_to_region 7 8 try: 9 from typing import List, Optional, Any 10 assert List and Optional and Any 11 except ImportError: 12 pass 13 14 symbol_kind_names = { 15 SymbolKind.File: "file", 16 SymbolKind.Module: "module", 17 SymbolKind.Namespace: "namespace", 18 SymbolKind.Package: "package", 19 SymbolKind.Class: "class", 20 SymbolKind.Method: "method", 21 SymbolKind.Property: "property", 22 SymbolKind.Field: "field", 23 SymbolKind.Constructor: "constructor", 24 SymbolKind.Enum: "enum", 25 SymbolKind.Interface: "interface", 26 SymbolKind.Function: "function", 27 SymbolKind.Variable: "variable", 28 SymbolKind.Constant: "constant", 29 SymbolKind.String: "string", 30 SymbolKind.Number: "number", 31 SymbolKind.Boolean: "boolean", 32 SymbolKind.Array: "array", 33 SymbolKind.Object: "object", 34 SymbolKind.Key: "key", 35 SymbolKind.Null: "null", 36 SymbolKind.EnumMember: "enum member", 37 SymbolKind.Struct: "struct", 38 SymbolKind.Event: "event", 39 SymbolKind.Operator: "operator", 40 SymbolKind.TypeParameter: "type parameter" 41 } 42 43 44 def format_symbol_kind(kind): 45 return symbol_kind_names.get(kind, str(kind)) 46 47 48 def format_symbol(item): 49 """ 50 items may be a list of strings, or a list of string lists. 51 In the latter case, each entry in the quick panel will show multiple rows 52 """ 53 prefix = item.get("containerName", "") 54 label = prefix + "." + item.get("name") if prefix else item.get("name") 55 return [label, format_symbol_kind(item.get("kind"))] 56 57 58 class LspDocumentSymbolsCommand(LspTextCommand): 59 def __init__(self, view): 60 super().__init__(view) 61 62 def is_enabled(self, event=None): 63 return self.has_client_with_capability('documentSymbolProvider') 64 65 def run(self, edit) -> None: 66 client = client_for_view(self.view) 67 if client: 68 params = { 69 "textDocument": { 70 "uri": filename_to_uri(self.view.file_name()) 71 } 72 } 73 request = Request.documentSymbols(params) 74 client.send_request(request, self.handle_response) 75 76 def handle_response(self, response: 'Optional[List]') -> None: 77 response_list = response or [] 78 symbols = list(format_symbol(item) for item in response_list) 79 self.symbols = response_list 80 self.view.window().show_quick_panel(symbols, self.on_symbol_selected) 81 82 def on_symbol_selected(self, symbol_index): 83 selected_symbol = self.symbols[symbol_index] 84 range = selected_symbol.get('location', selected_symbol.get('range')) 85 range = range.get('range', range) 86 if not range: 87 debug('could not recognize the type: expected either SymbolInformation or DocumentSymbol') 88 return 89 region = range_to_region(Range.from_lsp(range), self.view) 90 self.view.show_at_center(region) 91 self.view.sel().clear() 92 self.view.sel().add(region) 93 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/plugin/symbols.py b/plugin/symbols.py --- a/plugin/symbols.py +++ b/plugin/symbols.py @@ -80,6 +80,8 @@ self.view.window().show_quick_panel(symbols, self.on_symbol_selected) def on_symbol_selected(self, symbol_index): + if symbol_index == -1: + return selected_symbol = self.symbols[symbol_index] range = selected_symbol.get('location', selected_symbol.get('range')) range = range.get('range', range)
{"golden_diff": "diff --git a/plugin/symbols.py b/plugin/symbols.py\n--- a/plugin/symbols.py\n+++ b/plugin/symbols.py\n@@ -80,6 +80,8 @@\n self.view.window().show_quick_panel(symbols, self.on_symbol_selected)\n \n def on_symbol_selected(self, symbol_index):\n+ if symbol_index == -1:\n+ return\n selected_symbol = self.symbols[symbol_index]\n range = selected_symbol.get('location', selected_symbol.get('range'))\n range = range.get('range', range)\n", "issue": "when cancelling the symbols panel, the last symbol is selected\nhttps://github.com/tomv564/LSP/blob/be904c56fddf35f724486de405a168786ed4ffeb/plugin/symbols.py#L82-L92\r\n```diff\r\n def on_symbol_selected(self, symbol_index):\r\n+ if symbol_index == -1:\r\n+ return\r\n selected_symbol = self.symbols[symbol_index]\r\n range = selected_symbol.get('location', selected_symbol.get('range'))\r\n range = range.get('range', range)\r\n```\n", "before_files": [{"content": "from .core.logging import debug\nfrom .core.protocol import Request, Range\nfrom .core.protocol import SymbolKind\nfrom .core.registry import client_for_view, LspTextCommand\nfrom .core.url import filename_to_uri\nfrom .core.views import range_to_region\n\ntry:\n from typing import List, Optional, Any\n assert List and Optional and Any\nexcept ImportError:\n pass\n\nsymbol_kind_names = {\n SymbolKind.File: \"file\",\n SymbolKind.Module: \"module\",\n SymbolKind.Namespace: \"namespace\",\n SymbolKind.Package: \"package\",\n SymbolKind.Class: \"class\",\n SymbolKind.Method: \"method\",\n SymbolKind.Property: \"property\",\n SymbolKind.Field: \"field\",\n SymbolKind.Constructor: \"constructor\",\n SymbolKind.Enum: \"enum\",\n SymbolKind.Interface: \"interface\",\n SymbolKind.Function: \"function\",\n SymbolKind.Variable: \"variable\",\n SymbolKind.Constant: \"constant\",\n SymbolKind.String: \"string\",\n SymbolKind.Number: \"number\",\n SymbolKind.Boolean: \"boolean\",\n SymbolKind.Array: \"array\",\n SymbolKind.Object: \"object\",\n SymbolKind.Key: \"key\",\n SymbolKind.Null: \"null\",\n SymbolKind.EnumMember: \"enum member\",\n SymbolKind.Struct: \"struct\",\n SymbolKind.Event: \"event\",\n SymbolKind.Operator: \"operator\",\n SymbolKind.TypeParameter: \"type parameter\"\n}\n\n\ndef format_symbol_kind(kind):\n return symbol_kind_names.get(kind, str(kind))\n\n\ndef format_symbol(item):\n \"\"\"\n items may be a list of strings, or a list of string lists.\n In the latter case, each entry in the quick panel will show multiple rows\n \"\"\"\n prefix = item.get(\"containerName\", \"\")\n label = prefix + \".\" + item.get(\"name\") if prefix else item.get(\"name\")\n return [label, format_symbol_kind(item.get(\"kind\"))]\n\n\nclass LspDocumentSymbolsCommand(LspTextCommand):\n def __init__(self, view):\n super().__init__(view)\n\n def is_enabled(self, event=None):\n return self.has_client_with_capability('documentSymbolProvider')\n\n def run(self, edit) -> None:\n client = client_for_view(self.view)\n if client:\n params = {\n \"textDocument\": {\n \"uri\": filename_to_uri(self.view.file_name())\n }\n }\n request = Request.documentSymbols(params)\n client.send_request(request, self.handle_response)\n\n def handle_response(self, response: 'Optional[List]') -> None:\n response_list = response or []\n symbols = list(format_symbol(item) for item in response_list)\n self.symbols = response_list\n self.view.window().show_quick_panel(symbols, self.on_symbol_selected)\n\n def on_symbol_selected(self, symbol_index):\n selected_symbol = self.symbols[symbol_index]\n range = 
selected_symbol.get('location', selected_symbol.get('range'))\n range = range.get('range', range)\n if not range:\n debug('could not recognize the type: expected either SymbolInformation or DocumentSymbol')\n return\n region = range_to_region(Range.from_lsp(range), self.view)\n self.view.show_at_center(region)\n self.view.sel().clear()\n self.view.sel().add(region)\n", "path": "plugin/symbols.py"}], "after_files": [{"content": "from .core.logging import debug\nfrom .core.protocol import Request, Range\nfrom .core.protocol import SymbolKind\nfrom .core.registry import client_for_view, LspTextCommand\nfrom .core.url import filename_to_uri\nfrom .core.views import range_to_region\n\ntry:\n from typing import List, Optional, Any\n assert List and Optional and Any\nexcept ImportError:\n pass\n\nsymbol_kind_names = {\n SymbolKind.File: \"file\",\n SymbolKind.Module: \"module\",\n SymbolKind.Namespace: \"namespace\",\n SymbolKind.Package: \"package\",\n SymbolKind.Class: \"class\",\n SymbolKind.Method: \"method\",\n SymbolKind.Property: \"property\",\n SymbolKind.Field: \"field\",\n SymbolKind.Constructor: \"constructor\",\n SymbolKind.Enum: \"enum\",\n SymbolKind.Interface: \"interface\",\n SymbolKind.Function: \"function\",\n SymbolKind.Variable: \"variable\",\n SymbolKind.Constant: \"constant\",\n SymbolKind.String: \"string\",\n SymbolKind.Number: \"number\",\n SymbolKind.Boolean: \"boolean\",\n SymbolKind.Array: \"array\",\n SymbolKind.Object: \"object\",\n SymbolKind.Key: \"key\",\n SymbolKind.Null: \"null\",\n SymbolKind.EnumMember: \"enum member\",\n SymbolKind.Struct: \"struct\",\n SymbolKind.Event: \"event\",\n SymbolKind.Operator: \"operator\",\n SymbolKind.TypeParameter: \"type parameter\"\n}\n\n\ndef format_symbol_kind(kind):\n return symbol_kind_names.get(kind, str(kind))\n\n\ndef format_symbol(item):\n \"\"\"\n items may be a list of strings, or a list of string lists.\n In the latter case, each entry in the quick panel will show multiple rows\n \"\"\"\n prefix = item.get(\"containerName\", \"\")\n label = prefix + \".\" + item.get(\"name\") if prefix else item.get(\"name\")\n return [label, format_symbol_kind(item.get(\"kind\"))]\n\n\nclass LspDocumentSymbolsCommand(LspTextCommand):\n def __init__(self, view):\n super().__init__(view)\n\n def is_enabled(self, event=None):\n return self.has_client_with_capability('documentSymbolProvider')\n\n def run(self, edit) -> None:\n client = client_for_view(self.view)\n if client:\n params = {\n \"textDocument\": {\n \"uri\": filename_to_uri(self.view.file_name())\n }\n }\n request = Request.documentSymbols(params)\n client.send_request(request, self.handle_response)\n\n def handle_response(self, response: 'Optional[List]') -> None:\n response_list = response or []\n symbols = list(format_symbol(item) for item in response_list)\n self.symbols = response_list\n self.view.window().show_quick_panel(symbols, self.on_symbol_selected)\n\n def on_symbol_selected(self, symbol_index):\n if symbol_index == -1:\n return\n selected_symbol = self.symbols[symbol_index]\n range = selected_symbol.get('location', selected_symbol.get('range'))\n range = range.get('range', range)\n if not range:\n debug('could not recognize the type: expected either SymbolInformation or DocumentSymbol')\n return\n region = range_to_region(Range.from_lsp(range), self.view)\n self.view.show_at_center(region)\n self.view.sel().clear()\n self.view.sel().add(region)\n", "path": "plugin/symbols.py"}]}
1,270
116
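A minimal standalone sketch of the convention this fix relies on, not part of the dataset record: Sublime Text invokes the `show_quick_panel` callback with index -1 when the panel is cancelled, and without the guard, `self.symbols[-1]` silently resolves to the last entry, which is exactly the misbehavior in the issue title. The command name and items below are hypothetical.
```python
import sublime
import sublime_plugin

class PickExampleCommand(sublime_plugin.WindowCommand):
    def run(self):
        self.items = ["alpha", "beta", "gamma"]
        self.window.show_quick_panel(self.items, self.on_done)

    def on_done(self, index: int) -> None:
        if index == -1:
            # Panel cancelled: without this guard, self.items[-1]
            # would quietly pick the last entry instead of aborting.
            return
        sublime.status_message("picked " + self.items[index])
```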
gh_patches_debug_7466
rasdani/github-patches
git_diff
matrix-org__synapse-15961
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Build packages for Debian Trixie Please can we publish packages to the apt repository for [Debian Trixie (13)](https://wiki.debian.org/DebianTrixie) which is the current testing release at the time of writing. It became the current testing release on 2023-06-10. I run debian testing on the server I run synapse on and the change from bookworm to trixie has meant that I now get errors on `apt update`: ``` E: The repository 'https://packages.matrix.org/debian trixie Release' does not have a Release file. ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `scripts-dev/build_debian_packages.py` Content: ``` 1 #!/usr/bin/env python3 2 3 # Build the Debian packages using Docker images. 4 # 5 # This script builds the Docker images and then executes them sequentially, each 6 # one building a Debian package for the targeted operating system. It is 7 # designed to be a "single command" to produce all the images. 8 # 9 # By default, builds for all known distributions, but a list of distributions 10 # can be passed on the commandline for debugging. 11 12 import argparse 13 import json 14 import os 15 import signal 16 import subprocess 17 import sys 18 import threading 19 from concurrent.futures import ThreadPoolExecutor 20 from types import FrameType 21 from typing import Collection, Optional, Sequence, Set 22 23 # These are expanded inside the dockerfile to be a fully qualified image name. 24 # e.g. docker.io/library/debian:bullseye 25 # 26 # If an EOL is forced by a Python version and we're dropping support for it, make sure 27 # to remove references to the distibution across Synapse (search for "bullseye" for 28 # example) 29 DISTS = ( 30 "debian:bullseye", # (EOL ~2024-07) (our EOL forced by Python 3.9 is 2025-10-05) 31 "debian:bookworm", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24) 32 "debian:sid", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24) 33 "ubuntu:focal", # 20.04 LTS (EOL 2025-04) (our EOL forced by Python 3.8 is 2024-10-14) 34 "ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04) 35 "ubuntu:kinetic", # 22.10 (EOL 2023-07-20) (our EOL forced by Python 3.10 is 2026-10-04) 36 "ubuntu:lunar", # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24) 37 ) 38 39 DESC = """\ 40 Builds .debs for synapse, using a Docker image for the build environment. 41 42 By default, builds for all known distributions, but a list of distributions 43 can be passed on the commandline for debugging. 44 """ 45 46 projdir = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) 47 48 49 class Builder(object): 50 def __init__( 51 self, 52 redirect_stdout: bool = False, 53 docker_build_args: Optional[Sequence[str]] = None, 54 ): 55 self.redirect_stdout = redirect_stdout 56 self._docker_build_args = tuple(docker_build_args or ()) 57 self.active_containers: Set[str] = set() 58 self._lock = threading.Lock() 59 self._failed = False 60 61 def run_build(self, dist: str, skip_tests: bool = False) -> None: 62 """Build deb for a single distribution""" 63 64 if self._failed: 65 print("not building %s due to earlier failure" % (dist,)) 66 raise Exception("failed") 67 68 try: 69 self._inner_build(dist, skip_tests) 70 except Exception as e: 71 print("build of %s failed: %s" % (dist, e), file=sys.stderr) 72 self._failed = True 73 raise 74 75 def _inner_build(self, dist: str, skip_tests: bool = False) -> None: 76 tag = dist.split(":", 1)[1] 77 78 # Make the dir where the debs will live. 79 # 80 # Note that we deliberately put this outside the source tree, otherwise 81 # we tend to get source packages which are full of debs. (We could hack 82 # around that with more magic in the build_debian.sh script, but that 83 # doesn't solve the problem for natively-run dpkg-buildpakage). 84 debsdir = os.path.join(projdir, "../debs") 85 os.makedirs(debsdir, exist_ok=True) 86 87 if self.redirect_stdout: 88 logfile = os.path.join(debsdir, "%s.buildlog" % (tag,)) 89 print("building %s: directing output to %s" % (dist, logfile)) 90 stdout = open(logfile, "w") 91 else: 92 stdout = None 93 94 # first build a docker image for the build environment 95 build_args = ( 96 ( 97 "docker", 98 "build", 99 "--tag", 100 "dh-venv-builder:" + tag, 101 "--build-arg", 102 "distro=" + dist, 103 "-f", 104 "docker/Dockerfile-dhvirtualenv", 105 ) 106 + self._docker_build_args 107 + ("docker",) 108 ) 109 110 subprocess.check_call( 111 build_args, 112 stdout=stdout, 113 stderr=subprocess.STDOUT, 114 cwd=projdir, 115 ) 116 117 container_name = "synapse_build_" + tag 118 with self._lock: 119 self.active_containers.add(container_name) 120 121 # then run the build itself 122 subprocess.check_call( 123 [ 124 "docker", 125 "run", 126 "--rm", 127 "--name", 128 container_name, 129 "--volume=" + projdir + ":/synapse/source:ro", 130 "--volume=" + debsdir + ":/debs", 131 "-e", 132 "TARGET_USERID=%i" % (os.getuid(),), 133 "-e", 134 "TARGET_GROUPID=%i" % (os.getgid(),), 135 "-e", 136 "DEB_BUILD_OPTIONS=%s" % ("nocheck" if skip_tests else ""), 137 "dh-venv-builder:" + tag, 138 ], 139 stdout=stdout, 140 stderr=subprocess.STDOUT, 141 ) 142 143 with self._lock: 144 self.active_containers.remove(container_name) 145 146 if stdout is not None: 147 stdout.close() 148 print("Completed build of %s" % (dist,)) 149 150 def kill_containers(self) -> None: 151 with self._lock: 152 active = list(self.active_containers) 153 154 for c in active: 155 print("killing container %s" % (c,)) 156 subprocess.run( 157 [ 158 "docker", 159 "kill", 160 c, 161 ], 162 stdout=subprocess.DEVNULL, 163 ) 164 with self._lock: 165 self.active_containers.remove(c) 166 167 168 def run_builds( 169 builder: Builder, dists: Collection[str], jobs: int = 1, skip_tests: bool = False 170 ) -> None: 171 def sig(signum: int, _frame: Optional[FrameType]) -> None: 172 print("Caught SIGINT") 173 builder.kill_containers() 174 175 signal.signal(signal.SIGINT, sig) 176 177 with ThreadPoolExecutor(max_workers=jobs) as e: 178 res = e.map(lambda dist: builder.run_build(dist, skip_tests), dists) 179 180 # make sure we consume the iterable so that exceptions are raised. 181 for _ in res: 182 pass 183 184 185 if __name__ == "__main__": 186 parser = argparse.ArgumentParser( 187 description=DESC, 188 ) 189 parser.add_argument( 190 "-j", 191 "--jobs", 192 type=int, 193 default=1, 194 help="specify the number of builds to run in parallel", 195 ) 196 parser.add_argument( 197 "--no-check", 198 action="store_true", 199 help="skip running tests after building", 200 ) 201 parser.add_argument( 202 "--docker-build-arg", 203 action="append", 204 help="specify an argument to pass to docker build", 205 ) 206 parser.add_argument( 207 "--show-dists-json", 208 action="store_true", 209 help="instead of building the packages, just list the dists to build for, as a json array", 210 ) 211 parser.add_argument( 212 "dist", 213 nargs="*", 214 default=DISTS, 215 help="a list of distributions to build for. Default: %(default)s", 216 ) 217 args = parser.parse_args() 218 if args.show_dists_json: 219 print(json.dumps(DISTS)) 220 else: 221 builder = Builder( 222 redirect_stdout=(args.jobs > 1), docker_build_args=args.docker_build_arg 223 ) 224 run_builds( 225 builder, 226 dists=args.dist, 227 jobs=args.jobs, 228 skip_tests=args.no_check, 229 ) 230 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/scripts-dev/build_debian_packages.py b/scripts-dev/build_debian_packages.py --- a/scripts-dev/build_debian_packages.py +++ b/scripts-dev/build_debian_packages.py @@ -34,6 +34,7 @@ "ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04) "ubuntu:kinetic", # 22.10 (EOL 2023-07-20) (our EOL forced by Python 3.10 is 2026-10-04) "ubuntu:lunar", # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24) + "debian:trixie", # (EOL not specified yet) ) DESC = """\
{"golden_diff": "diff --git a/scripts-dev/build_debian_packages.py b/scripts-dev/build_debian_packages.py\n--- a/scripts-dev/build_debian_packages.py\n+++ b/scripts-dev/build_debian_packages.py\n@@ -34,6 +34,7 @@\n \"ubuntu:jammy\", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)\n \"ubuntu:kinetic\", # 22.10 (EOL 2023-07-20) (our EOL forced by Python 3.10 is 2026-10-04)\n \"ubuntu:lunar\", # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24)\n+ \"debian:trixie\", # (EOL not specified yet)\n )\n \n DESC = \"\"\"\\\n", "issue": "Build packages for Debian Trixie\nPlease can we publish packages to the apt repository for [Debian Trixie (13)](https://wiki.debian.org/DebianTrixie) which is the current testing release at the time of writing. It became the current testing release on 2023-06-10.\r\n\r\nI run debian testing on the server I run synapse on and the change from bookworm to trixie has meant that I now get errors on `apt update`:\r\n\r\n```\r\nE: The repository 'https://packages.matrix.org/debian trixie Release' does not have a Release file.\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Build the Debian packages using Docker images.\n#\n# This script builds the Docker images and then executes them sequentially, each\n# one building a Debian package for the targeted operating system. It is\n# designed to be a \"single command\" to produce all the images.\n#\n# By default, builds for all known distributions, but a list of distributions\n# can be passed on the commandline for debugging.\n\nimport argparse\nimport json\nimport os\nimport signal\nimport subprocess\nimport sys\nimport threading\nfrom concurrent.futures import ThreadPoolExecutor\nfrom types import FrameType\nfrom typing import Collection, Optional, Sequence, Set\n\n# These are expanded inside the dockerfile to be a fully qualified image name.\n# e.g. 
docker.io/library/debian:bullseye\n#\n# If an EOL is forced by a Python version and we're dropping support for it, make sure\n# to remove references to the distibution across Synapse (search for \"bullseye\" for\n# example)\nDISTS = (\n \"debian:bullseye\", # (EOL ~2024-07) (our EOL forced by Python 3.9 is 2025-10-05)\n \"debian:bookworm\", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)\n \"debian:sid\", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)\n \"ubuntu:focal\", # 20.04 LTS (EOL 2025-04) (our EOL forced by Python 3.8 is 2024-10-14)\n \"ubuntu:jammy\", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)\n \"ubuntu:kinetic\", # 22.10 (EOL 2023-07-20) (our EOL forced by Python 3.10 is 2026-10-04)\n \"ubuntu:lunar\", # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24)\n)\n\nDESC = \"\"\"\\\nBuilds .debs for synapse, using a Docker image for the build environment.\n\nBy default, builds for all known distributions, but a list of distributions\ncan be passed on the commandline for debugging.\n\"\"\"\n\nprojdir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))\n\n\nclass Builder(object):\n def __init__(\n self,\n redirect_stdout: bool = False,\n docker_build_args: Optional[Sequence[str]] = None,\n ):\n self.redirect_stdout = redirect_stdout\n self._docker_build_args = tuple(docker_build_args or ())\n self.active_containers: Set[str] = set()\n self._lock = threading.Lock()\n self._failed = False\n\n def run_build(self, dist: str, skip_tests: bool = False) -> None:\n \"\"\"Build deb for a single distribution\"\"\"\n\n if self._failed:\n print(\"not building %s due to earlier failure\" % (dist,))\n raise Exception(\"failed\")\n\n try:\n self._inner_build(dist, skip_tests)\n except Exception as e:\n print(\"build of %s failed: %s\" % (dist, e), file=sys.stderr)\n self._failed = True\n raise\n\n def _inner_build(self, dist: str, skip_tests: bool = False) -> None:\n tag = dist.split(\":\", 1)[1]\n\n # Make the dir where the debs will live.\n #\n # Note that we deliberately put this outside the source tree, otherwise\n # we tend to get source packages which are full of debs. 
(We could hack\n # around that with more magic in the build_debian.sh script, but that\n # doesn't solve the problem for natively-run dpkg-buildpakage).\n debsdir = os.path.join(projdir, \"../debs\")\n os.makedirs(debsdir, exist_ok=True)\n\n if self.redirect_stdout:\n logfile = os.path.join(debsdir, \"%s.buildlog\" % (tag,))\n print(\"building %s: directing output to %s\" % (dist, logfile))\n stdout = open(logfile, \"w\")\n else:\n stdout = None\n\n # first build a docker image for the build environment\n build_args = (\n (\n \"docker\",\n \"build\",\n \"--tag\",\n \"dh-venv-builder:\" + tag,\n \"--build-arg\",\n \"distro=\" + dist,\n \"-f\",\n \"docker/Dockerfile-dhvirtualenv\",\n )\n + self._docker_build_args\n + (\"docker\",)\n )\n\n subprocess.check_call(\n build_args,\n stdout=stdout,\n stderr=subprocess.STDOUT,\n cwd=projdir,\n )\n\n container_name = \"synapse_build_\" + tag\n with self._lock:\n self.active_containers.add(container_name)\n\n # then run the build itself\n subprocess.check_call(\n [\n \"docker\",\n \"run\",\n \"--rm\",\n \"--name\",\n container_name,\n \"--volume=\" + projdir + \":/synapse/source:ro\",\n \"--volume=\" + debsdir + \":/debs\",\n \"-e\",\n \"TARGET_USERID=%i\" % (os.getuid(),),\n \"-e\",\n \"TARGET_GROUPID=%i\" % (os.getgid(),),\n \"-e\",\n \"DEB_BUILD_OPTIONS=%s\" % (\"nocheck\" if skip_tests else \"\"),\n \"dh-venv-builder:\" + tag,\n ],\n stdout=stdout,\n stderr=subprocess.STDOUT,\n )\n\n with self._lock:\n self.active_containers.remove(container_name)\n\n if stdout is not None:\n stdout.close()\n print(\"Completed build of %s\" % (dist,))\n\n def kill_containers(self) -> None:\n with self._lock:\n active = list(self.active_containers)\n\n for c in active:\n print(\"killing container %s\" % (c,))\n subprocess.run(\n [\n \"docker\",\n \"kill\",\n c,\n ],\n stdout=subprocess.DEVNULL,\n )\n with self._lock:\n self.active_containers.remove(c)\n\n\ndef run_builds(\n builder: Builder, dists: Collection[str], jobs: int = 1, skip_tests: bool = False\n) -> None:\n def sig(signum: int, _frame: Optional[FrameType]) -> None:\n print(\"Caught SIGINT\")\n builder.kill_containers()\n\n signal.signal(signal.SIGINT, sig)\n\n with ThreadPoolExecutor(max_workers=jobs) as e:\n res = e.map(lambda dist: builder.run_build(dist, skip_tests), dists)\n\n # make sure we consume the iterable so that exceptions are raised.\n for _ in res:\n pass\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(\n description=DESC,\n )\n parser.add_argument(\n \"-j\",\n \"--jobs\",\n type=int,\n default=1,\n help=\"specify the number of builds to run in parallel\",\n )\n parser.add_argument(\n \"--no-check\",\n action=\"store_true\",\n help=\"skip running tests after building\",\n )\n parser.add_argument(\n \"--docker-build-arg\",\n action=\"append\",\n help=\"specify an argument to pass to docker build\",\n )\n parser.add_argument(\n \"--show-dists-json\",\n action=\"store_true\",\n help=\"instead of building the packages, just list the dists to build for, as a json array\",\n )\n parser.add_argument(\n \"dist\",\n nargs=\"*\",\n default=DISTS,\n help=\"a list of distributions to build for. 
Default: %(default)s\",\n )\n args = parser.parse_args()\n if args.show_dists_json:\n print(json.dumps(DISTS))\n else:\n builder = Builder(\n redirect_stdout=(args.jobs > 1), docker_build_args=args.docker_build_arg\n )\n run_builds(\n builder,\n dists=args.dist,\n jobs=args.jobs,\n skip_tests=args.no_check,\n )\n", "path": "scripts-dev/build_debian_packages.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Build the Debian packages using Docker images.\n#\n# This script builds the Docker images and then executes them sequentially, each\n# one building a Debian package for the targeted operating system. It is\n# designed to be a \"single command\" to produce all the images.\n#\n# By default, builds for all known distributions, but a list of distributions\n# can be passed on the commandline for debugging.\n\nimport argparse\nimport json\nimport os\nimport signal\nimport subprocess\nimport sys\nimport threading\nfrom concurrent.futures import ThreadPoolExecutor\nfrom types import FrameType\nfrom typing import Collection, Optional, Sequence, Set\n\n# These are expanded inside the dockerfile to be a fully qualified image name.\n# e.g. docker.io/library/debian:bullseye\n#\n# If an EOL is forced by a Python version and we're dropping support for it, make sure\n# to remove references to the distibution across Synapse (search for \"bullseye\" for\n# example)\nDISTS = (\n \"debian:bullseye\", # (EOL ~2024-07) (our EOL forced by Python 3.9 is 2025-10-05)\n \"debian:bookworm\", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)\n \"debian:sid\", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)\n \"ubuntu:focal\", # 20.04 LTS (EOL 2025-04) (our EOL forced by Python 3.8 is 2024-10-14)\n \"ubuntu:jammy\", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)\n \"ubuntu:kinetic\", # 22.10 (EOL 2023-07-20) (our EOL forced by Python 3.10 is 2026-10-04)\n \"ubuntu:lunar\", # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24)\n \"debian:trixie\", # (EOL not specified yet)\n)\n\nDESC = \"\"\"\\\nBuilds .debs for synapse, using a Docker image for the build environment.\n\nBy default, builds for all known distributions, but a list of distributions\ncan be passed on the commandline for debugging.\n\"\"\"\n\nprojdir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))\n\n\nclass Builder(object):\n def __init__(\n self,\n redirect_stdout: bool = False,\n docker_build_args: Optional[Sequence[str]] = None,\n ):\n self.redirect_stdout = redirect_stdout\n self._docker_build_args = tuple(docker_build_args or ())\n self.active_containers: Set[str] = set()\n self._lock = threading.Lock()\n self._failed = False\n\n def run_build(self, dist: str, skip_tests: bool = False) -> None:\n \"\"\"Build deb for a single distribution\"\"\"\n\n if self._failed:\n print(\"not building %s due to earlier failure\" % (dist,))\n raise Exception(\"failed\")\n\n try:\n self._inner_build(dist, skip_tests)\n except Exception as e:\n print(\"build of %s failed: %s\" % (dist, e), file=sys.stderr)\n self._failed = True\n raise\n\n def _inner_build(self, dist: str, skip_tests: bool = False) -> None:\n tag = dist.split(\":\", 1)[1]\n\n # Make the dir where the debs will live.\n #\n # Note that we deliberately put this outside the source tree, otherwise\n # we tend to get source packages which are full of debs. 
(We could hack\n # around that with more magic in the build_debian.sh script, but that\n # doesn't solve the problem for natively-run dpkg-buildpakage).\n debsdir = os.path.join(projdir, \"../debs\")\n os.makedirs(debsdir, exist_ok=True)\n\n if self.redirect_stdout:\n logfile = os.path.join(debsdir, \"%s.buildlog\" % (tag,))\n print(\"building %s: directing output to %s\" % (dist, logfile))\n stdout = open(logfile, \"w\")\n else:\n stdout = None\n\n # first build a docker image for the build environment\n build_args = (\n (\n \"docker\",\n \"build\",\n \"--tag\",\n \"dh-venv-builder:\" + tag,\n \"--build-arg\",\n \"distro=\" + dist,\n \"-f\",\n \"docker/Dockerfile-dhvirtualenv\",\n )\n + self._docker_build_args\n + (\"docker\",)\n )\n\n subprocess.check_call(\n build_args,\n stdout=stdout,\n stderr=subprocess.STDOUT,\n cwd=projdir,\n )\n\n container_name = \"synapse_build_\" + tag\n with self._lock:\n self.active_containers.add(container_name)\n\n # then run the build itself\n subprocess.check_call(\n [\n \"docker\",\n \"run\",\n \"--rm\",\n \"--name\",\n container_name,\n \"--volume=\" + projdir + \":/synapse/source:ro\",\n \"--volume=\" + debsdir + \":/debs\",\n \"-e\",\n \"TARGET_USERID=%i\" % (os.getuid(),),\n \"-e\",\n \"TARGET_GROUPID=%i\" % (os.getgid(),),\n \"-e\",\n \"DEB_BUILD_OPTIONS=%s\" % (\"nocheck\" if skip_tests else \"\"),\n \"dh-venv-builder:\" + tag,\n ],\n stdout=stdout,\n stderr=subprocess.STDOUT,\n )\n\n with self._lock:\n self.active_containers.remove(container_name)\n\n if stdout is not None:\n stdout.close()\n print(\"Completed build of %s\" % (dist,))\n\n def kill_containers(self) -> None:\n with self._lock:\n active = list(self.active_containers)\n\n for c in active:\n print(\"killing container %s\" % (c,))\n subprocess.run(\n [\n \"docker\",\n \"kill\",\n c,\n ],\n stdout=subprocess.DEVNULL,\n )\n with self._lock:\n self.active_containers.remove(c)\n\n\ndef run_builds(\n builder: Builder, dists: Collection[str], jobs: int = 1, skip_tests: bool = False\n) -> None:\n def sig(signum: int, _frame: Optional[FrameType]) -> None:\n print(\"Caught SIGINT\")\n builder.kill_containers()\n\n signal.signal(signal.SIGINT, sig)\n\n with ThreadPoolExecutor(max_workers=jobs) as e:\n res = e.map(lambda dist: builder.run_build(dist, skip_tests), dists)\n\n # make sure we consume the iterable so that exceptions are raised.\n for _ in res:\n pass\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(\n description=DESC,\n )\n parser.add_argument(\n \"-j\",\n \"--jobs\",\n type=int,\n default=1,\n help=\"specify the number of builds to run in parallel\",\n )\n parser.add_argument(\n \"--no-check\",\n action=\"store_true\",\n help=\"skip running tests after building\",\n )\n parser.add_argument(\n \"--docker-build-arg\",\n action=\"append\",\n help=\"specify an argument to pass to docker build\",\n )\n parser.add_argument(\n \"--show-dists-json\",\n action=\"store_true\",\n help=\"instead of building the packages, just list the dists to build for, as a json array\",\n )\n parser.add_argument(\n \"dist\",\n nargs=\"*\",\n default=DISTS,\n help=\"a list of distributions to build for. Default: %(default)s\",\n )\n args = parser.parse_args()\n if args.show_dists_json:\n print(json.dumps(DISTS))\n else:\n builder = Builder(\n redirect_stdout=(args.jobs > 1), docker_build_args=args.docker_build_arg\n )\n run_builds(\n builder,\n dists=args.dist,\n jobs=args.jobs,\n skip_tests=args.no_check,\n )\n", "path": "scripts-dev/build_debian_packages.py"}]}
2853
235
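A quick way to sanity-check the patch above without running any Docker builds is the script's own `--show-dists-json` flag, which — as the quoted source shows — simply prints `json.dumps(DISTS)` and exits. The snippet below is an illustrative check, not part of the record; the checkout location and interpreter invocation are assumptions:

```python
# Hypothetical verification, run from the root of a Synapse checkout with the
# patch applied. It exercises only the --show-dists-json branch quoted above,
# which prints the DISTS tuple as a JSON array instead of building packages.
import json
import subprocess

out = subprocess.check_output(
    ["python", "scripts-dev/build_debian_packages.py", "--show-dists-json"]
)
dists = json.loads(out)
assert "debian:trixie" in dists, dists  # the entry added by the golden diff
print(f"would build for {len(dists)} distributions")
```

Because nothing is actually built, a check like this is cheap enough to run before kicking off the full build matrix.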
gh_patches_debug_21902
rasdani/github-patches
git_diff
pypa__pip-4035
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- pip on windows Prerequisites: Windows python 2.7 setuptools 3.6 easy_install pip Problem: we have a requirements file that contains "pyinstaller" package dependency. I suppose that it somehow depends on cryptography and it in its turn somehow depends on cffi package and that's why it tries to install it as well. For now we specify this depency in a following way: pyinstaller==2.1.1dev-89e99dd # from requirements file. Next we try to install our pack of requirements with pip -f {our additional private repository} -U -egg -r {requirements file name} From the build log screenshot attached you can see the following string(build_log attachment): [12:59:39]: [Step 7/15] Installed c:\users\administrator\envs\python27-32bit\lib\site-packages\cryptography-0.3-py2.7-win32.egg [12:59:39]: [Step 7/15] [12:59:39]: [Step 7/15] error: c:\buildagent\temp\buildtmp\easy_install-snsbak\cryptography-0.3\cffi-0.8.2-py2.7-win32.egg_cffi_backend.pyd: Access is denied I've done a few of investigation and attached a screenshot of related file activity with a highlighted important items (file_activity_list attachment). It seems that python cannod delete this _cffi_backend.pyd file. A few more investigation revealed that the file creation mode allows to delete this file(file_open_result attachment). From the event properties(event_properties attachment) I see that all related modules appear as if they are loaded using LoadLibray windows api, that's how it supposed to be done with python C extensions if we want to use this code. But it appears to me that someone forgot to unload all these modules before trying to delete them and that's why the file cannot be deleted. Please refer to _cffi_backend_loading attachment - it proves _cffi_backend.pyd is being loaded as a library(WinDbg screen). And the very next screenshot is the state of pip install command when WinDbg broke on module load - _break_python_screen attachment. Yet I pointed out the same problem if I just parse this requirements file with pkg_tools and specify install_requires setup's argument with these requirements - setup develop fails with the same result. As to me this problem more relates to easy_install/setuptools core. And yet to mention - without '--egg' argument the problem doesn't reproduce. ![file_activity_list](https://cloud.githubusercontent.com/assets/7242142/3082926/4def0fa4-e4db-11e3-8e14-37f52ea5b2ee.png) ![event_properties](https://cloud.githubusercontent.com/assets/7242142/3082956/ba432474-e4db-11e3-8d6f-5d0996c35a44.png) ![file_open_result](https://cloud.githubusercontent.com/assets/7242142/3082983/1417a8e4-e4dc-11e3-9289-0bf61c7cf2c9.png) ![build_log](https://cloud.githubusercontent.com/assets/7242142/3082887/b8bc80e2-e4da-11e3-945a-a0c1137bb6bb.png) ![_cffi_backend_loading](https://cloud.githubusercontent.com/assets/7242142/3160093/dddbe462-eb1a-11e3-8e08-48cfade1b261.png) ![_break_python_screen](https://cloud.githubusercontent.com/assets/7242142/3160112/29cc9c0e-eb1b-11e3-88a6-2601f7a8dbe5.png) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pip/_vendor/requests/__init__.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 # __ 4 # /__) _ _ _ _ _/ _ 5 # / ( (- (/ (/ (- _) / _) 6 # / 7 8 """ 9 Requests HTTP library 10 ~~~~~~~~~~~~~~~~~~~~~ 11 12 Requests is an HTTP library, written in Python, for human beings. 
Basic GET 13 usage: 14 15 >>> import requests 16 >>> r = requests.get('https://www.python.org') 17 >>> r.status_code 18 200 19 >>> 'Python is a programming language' in r.content 20 True 21 22 ... or POST: 23 24 >>> payload = dict(key1='value1', key2='value2') 25 >>> r = requests.post('http://httpbin.org/post', data=payload) 26 >>> print(r.text) 27 { 28 ... 29 "form": { 30 "key2": "value2", 31 "key1": "value1" 32 }, 33 ... 34 } 35 36 The other HTTP methods are supported - see `requests.api`. Full documentation 37 is at <http://python-requests.org>. 38 39 :copyright: (c) 2016 by Kenneth Reitz. 40 :license: Apache 2.0, see LICENSE for more details. 41 42 """ 43 44 __title__ = 'requests' 45 __version__ = '2.10.0' 46 __build__ = 0x021000 47 __author__ = 'Kenneth Reitz' 48 __license__ = 'Apache 2.0' 49 __copyright__ = 'Copyright 2016 Kenneth Reitz' 50 51 # Attempt to enable urllib3's SNI support, if possible 52 try: 53 from .packages.urllib3.contrib import pyopenssl 54 pyopenssl.inject_into_urllib3() 55 except ImportError: 56 pass 57 58 import warnings 59 60 # urllib3's DependencyWarnings should be silenced. 61 from .packages.urllib3.exceptions import DependencyWarning 62 warnings.simplefilter('ignore', DependencyWarning) 63 64 from . import utils 65 from .models import Request, Response, PreparedRequest 66 from .api import request, get, head, post, patch, put, delete, options 67 from .sessions import session, Session 68 from .status_codes import codes 69 from .exceptions import ( 70 RequestException, Timeout, URLRequired, 71 TooManyRedirects, HTTPError, ConnectionError, 72 FileModeWarning, ConnectTimeout, ReadTimeout 73 ) 74 75 # Set default logging handler to avoid "No handler found" warnings. 76 import logging 77 try: # Python 2.7+ 78 from logging import NullHandler 79 except ImportError: 80 class NullHandler(logging.Handler): 81 def emit(self, record): 82 pass 83 84 logging.getLogger(__name__).addHandler(NullHandler()) 85 86 import warnings 87 88 # FileModeWarnings go off per the default. 89 warnings.simplefilter('default', FileModeWarning, append=True) 90 ``` Path: `pip/_vendor/requests/compat.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 """ 4 pythoncompat 5 """ 6 7 from .packages import chardet 8 9 import sys 10 11 # ------- 12 # Pythons 13 # ------- 14 15 # Syntax sugar. 16 _ver = sys.version_info 17 18 #: Python 2.x? 19 is_py2 = (_ver[0] == 2) 20 21 #: Python 3.x? 22 is_py3 = (_ver[0] == 3) 23 24 try: 25 import simplejson as json 26 except (ImportError, SyntaxError): 27 # simplejson does not support Python 3.2, it throws a SyntaxError 28 # because of u'...' Unicode literals. 
29 import json 30 31 # --------- 32 # Specifics 33 # --------- 34 35 if is_py2: 36 from urllib import quote, unquote, quote_plus, unquote_plus, urlencode, getproxies, proxy_bypass 37 from urlparse import urlparse, urlunparse, urljoin, urlsplit, urldefrag 38 from urllib2 import parse_http_list 39 import cookielib 40 from Cookie import Morsel 41 from StringIO import StringIO 42 from .packages.urllib3.packages.ordered_dict import OrderedDict 43 44 builtin_str = str 45 bytes = str 46 str = unicode 47 basestring = basestring 48 numeric_types = (int, long, float) 49 50 elif is_py3: 51 from urllib.parse import urlparse, urlunparse, urljoin, urlsplit, urlencode, quote, unquote, quote_plus, unquote_plus, urldefrag 52 from urllib.request import parse_http_list, getproxies, proxy_bypass 53 from http import cookiejar as cookielib 54 from http.cookies import Morsel 55 from io import StringIO 56 from collections import OrderedDict 57 58 builtin_str = str 59 str = str 60 bytes = bytes 61 basestring = (str, bytes) 62 numeric_types = (int, float) 63 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pip/_vendor/requests/__init__.py b/pip/_vendor/requests/__init__.py --- a/pip/_vendor/requests/__init__.py +++ b/pip/_vendor/requests/__init__.py @@ -49,11 +49,13 @@ __copyright__ = 'Copyright 2016 Kenneth Reitz' # Attempt to enable urllib3's SNI support, if possible -try: - from .packages.urllib3.contrib import pyopenssl - pyopenssl.inject_into_urllib3() -except ImportError: - pass +# Note: Patched by pip to prevent using the PyOpenSSL module. On Windows this +# prevents upgrading cryptography. +# try: +# from .packages.urllib3.contrib import pyopenssl +# pyopenssl.inject_into_urllib3() +# except ImportError: +# pass import warnings diff --git a/pip/_vendor/requests/compat.py b/pip/_vendor/requests/compat.py --- a/pip/_vendor/requests/compat.py +++ b/pip/_vendor/requests/compat.py @@ -21,12 +21,14 @@ #: Python 3.x? is_py3 = (_ver[0] == 3) -try: - import simplejson as json -except (ImportError, SyntaxError): - # simplejson does not support Python 3.2, it throws a SyntaxError - # because of u'...' Unicode literals. - import json +# Note: We've patched out simplejson support in pip because it prevents +# upgrading simplejson on Windows. +# try: +# import simplejson as json +# except (ImportError, SyntaxError): +# # simplejson does not support Python 3.2, it throws a SyntaxError +# # because of u'...' Unicode literals. +import json # --------- # Specifics
{"golden_diff": "diff --git a/pip/_vendor/requests/__init__.py b/pip/_vendor/requests/__init__.py\n--- a/pip/_vendor/requests/__init__.py\n+++ b/pip/_vendor/requests/__init__.py\n@@ -49,11 +49,13 @@\n __copyright__ = 'Copyright 2016 Kenneth Reitz'\n \n # Attempt to enable urllib3's SNI support, if possible\n-try:\n- from .packages.urllib3.contrib import pyopenssl\n- pyopenssl.inject_into_urllib3()\n-except ImportError:\n- pass\n+# Note: Patched by pip to prevent using the PyOpenSSL module. On Windows this\n+# prevents upgrading cryptography.\n+# try:\n+# from .packages.urllib3.contrib import pyopenssl\n+# pyopenssl.inject_into_urllib3()\n+# except ImportError:\n+# pass\n \n import warnings\n \ndiff --git a/pip/_vendor/requests/compat.py b/pip/_vendor/requests/compat.py\n--- a/pip/_vendor/requests/compat.py\n+++ b/pip/_vendor/requests/compat.py\n@@ -21,12 +21,14 @@\n #: Python 3.x?\n is_py3 = (_ver[0] == 3)\n \n-try:\n- import simplejson as json\n-except (ImportError, SyntaxError):\n- # simplejson does not support Python 3.2, it throws a SyntaxError\n- # because of u'...' Unicode literals.\n- import json\n+# Note: We've patched out simplejson support in pip because it prevents\n+# upgrading simplejson on Windows.\n+# try:\n+# import simplejson as json\n+# except (ImportError, SyntaxError):\n+# # simplejson does not support Python 3.2, it throws a SyntaxError\n+# # because of u'...' Unicode literals.\n+import json\n \n # ---------\n # Specifics\n", "issue": "pip on windows\nPrerequisites:\nWindows\npython 2.7\nsetuptools 3.6\neasy_install\npip\n\nProblem:\nwe have a requirements file that contains \"pyinstaller\" package dependency. I suppose that it somehow depends on cryptography and it in its turn somehow depends on cffi package and that's why it tries to install it as well. For now we specify this depency in a following way:\n\npyinstaller==2.1.1dev-89e99dd # from requirements file.\n\nNext we try to install our pack of requirements with \n\npip -f {our additional private repository} -U -egg -r {requirements file name}\n\nFrom the build log screenshot attached you can see the following string(build_log attachment):\n\n[12:59:39]: [Step 7/15] Installed c:\\users\\administrator\\envs\\python27-32bit\\lib\\site-packages\\cryptography-0.3-py2.7-win32.egg\n[12:59:39]: [Step 7/15] \n[12:59:39]: [Step 7/15] error: c:\\buildagent\\temp\\buildtmp\\easy_install-snsbak\\cryptography-0.3\\cffi-0.8.2-py2.7-win32.egg_cffi_backend.pyd: Access is denied\n\nI've done a few of investigation and attached a screenshot of related file activity with a highlighted important items (file_activity_list attachment). It seems that python cannod delete this _cffi_backend.pyd file. A few more investigation revealed that the file creation mode allows to delete this file(file_open_result attachment). From the event properties(event_properties attachment) I see that all related modules appear as if they are loaded using LoadLibray windows api, that's how it supposed to be done with python C extensions if we want to use this code. But it appears to me that someone forgot to unload all these modules before trying to delete them and that's why the file cannot be deleted. Please refer to _cffi_backend_loading attachment - it proves _cffi_backend.pyd is being loaded as a library(WinDbg screen). And the very next screenshot is the state of pip install command when WinDbg broke on module load - _break_python_screen attachment. 
Yet I pointed out the same problem if I just parse this requirements file with pkg_tools and specify install_requires setup's argument with these requirements - setup develop fails with the same result. As to me this problem more relates to easy_install/setuptools core. And yet to mention - without '--egg' argument the problem doesn't reproduce.\n\n![file_activity_list](https://cloud.githubusercontent.com/assets/7242142/3082926/4def0fa4-e4db-11e3-8e14-37f52ea5b2ee.png)\n![event_properties](https://cloud.githubusercontent.com/assets/7242142/3082956/ba432474-e4db-11e3-8d6f-5d0996c35a44.png)\n![file_open_result](https://cloud.githubusercontent.com/assets/7242142/3082983/1417a8e4-e4dc-11e3-9289-0bf61c7cf2c9.png)\n\n![build_log](https://cloud.githubusercontent.com/assets/7242142/3082887/b8bc80e2-e4da-11e3-945a-a0c1137bb6bb.png)\n![_cffi_backend_loading](https://cloud.githubusercontent.com/assets/7242142/3160093/dddbe462-eb1a-11e3-8e08-48cfade1b261.png)\n![_break_python_screen](https://cloud.githubusercontent.com/assets/7242142/3160112/29cc9c0e-eb1b-11e3-88a6-2601f7a8dbe5.png)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# __\n# /__) _ _ _ _ _/ _\n# / ( (- (/ (/ (- _) / _)\n# /\n\n\"\"\"\nRequests HTTP library\n~~~~~~~~~~~~~~~~~~~~~\n\nRequests is an HTTP library, written in Python, for human beings. Basic GET\nusage:\n\n >>> import requests\n >>> r = requests.get('https://www.python.org')\n >>> r.status_code\n 200\n >>> 'Python is a programming language' in r.content\n True\n\n... or POST:\n\n >>> payload = dict(key1='value1', key2='value2')\n >>> r = requests.post('http://httpbin.org/post', data=payload)\n >>> print(r.text)\n {\n ...\n \"form\": {\n \"key2\": \"value2\",\n \"key1\": \"value1\"\n },\n ...\n }\n\nThe other HTTP methods are supported - see `requests.api`. Full documentation\nis at <http://python-requests.org>.\n\n:copyright: (c) 2016 by Kenneth Reitz.\n:license: Apache 2.0, see LICENSE for more details.\n\n\"\"\"\n\n__title__ = 'requests'\n__version__ = '2.10.0'\n__build__ = 0x021000\n__author__ = 'Kenneth Reitz'\n__license__ = 'Apache 2.0'\n__copyright__ = 'Copyright 2016 Kenneth Reitz'\n\n# Attempt to enable urllib3's SNI support, if possible\ntry:\n from .packages.urllib3.contrib import pyopenssl\n pyopenssl.inject_into_urllib3()\nexcept ImportError:\n pass\n\nimport warnings\n\n# urllib3's DependencyWarnings should be silenced.\nfrom .packages.urllib3.exceptions import DependencyWarning\nwarnings.simplefilter('ignore', DependencyWarning)\n\nfrom . 
import utils\nfrom .models import Request, Response, PreparedRequest\nfrom .api import request, get, head, post, patch, put, delete, options\nfrom .sessions import session, Session\nfrom .status_codes import codes\nfrom .exceptions import (\n RequestException, Timeout, URLRequired,\n TooManyRedirects, HTTPError, ConnectionError,\n FileModeWarning, ConnectTimeout, ReadTimeout\n)\n\n# Set default logging handler to avoid \"No handler found\" warnings.\nimport logging\ntry: # Python 2.7+\n from logging import NullHandler\nexcept ImportError:\n class NullHandler(logging.Handler):\n def emit(self, record):\n pass\n\nlogging.getLogger(__name__).addHandler(NullHandler())\n\nimport warnings\n\n# FileModeWarnings go off per the default.\nwarnings.simplefilter('default', FileModeWarning, append=True)\n", "path": "pip/_vendor/requests/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\npythoncompat\n\"\"\"\n\nfrom .packages import chardet\n\nimport sys\n\n# -------\n# Pythons\n# -------\n\n# Syntax sugar.\n_ver = sys.version_info\n\n#: Python 2.x?\nis_py2 = (_ver[0] == 2)\n\n#: Python 3.x?\nis_py3 = (_ver[0] == 3)\n\ntry:\n import simplejson as json\nexcept (ImportError, SyntaxError):\n # simplejson does not support Python 3.2, it throws a SyntaxError\n # because of u'...' Unicode literals.\n import json\n\n# ---------\n# Specifics\n# ---------\n\nif is_py2:\n from urllib import quote, unquote, quote_plus, unquote_plus, urlencode, getproxies, proxy_bypass\n from urlparse import urlparse, urlunparse, urljoin, urlsplit, urldefrag\n from urllib2 import parse_http_list\n import cookielib\n from Cookie import Morsel\n from StringIO import StringIO\n from .packages.urllib3.packages.ordered_dict import OrderedDict\n\n builtin_str = str\n bytes = str\n str = unicode\n basestring = basestring\n numeric_types = (int, long, float)\n\nelif is_py3:\n from urllib.parse import urlparse, urlunparse, urljoin, urlsplit, urlencode, quote, unquote, quote_plus, unquote_plus, urldefrag\n from urllib.request import parse_http_list, getproxies, proxy_bypass\n from http import cookiejar as cookielib\n from http.cookies import Morsel\n from io import StringIO\n from collections import OrderedDict\n\n builtin_str = str\n str = str\n bytes = bytes\n basestring = (str, bytes)\n numeric_types = (int, float)\n", "path": "pip/_vendor/requests/compat.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# __\n# /__) _ _ _ _ _/ _\n# / ( (- (/ (/ (- _) / _)\n# /\n\n\"\"\"\nRequests HTTP library\n~~~~~~~~~~~~~~~~~~~~~\n\nRequests is an HTTP library, written in Python, for human beings. Basic GET\nusage:\n\n >>> import requests\n >>> r = requests.get('https://www.python.org')\n >>> r.status_code\n 200\n >>> 'Python is a programming language' in r.content\n True\n\n... or POST:\n\n >>> payload = dict(key1='value1', key2='value2')\n >>> r = requests.post('http://httpbin.org/post', data=payload)\n >>> print(r.text)\n {\n ...\n \"form\": {\n \"key2\": \"value2\",\n \"key1\": \"value1\"\n },\n ...\n }\n\nThe other HTTP methods are supported - see `requests.api`. Full documentation\nis at <http://python-requests.org>.\n\n:copyright: (c) 2016 by Kenneth Reitz.\n:license: Apache 2.0, see LICENSE for more details.\n\n\"\"\"\n\n__title__ = 'requests'\n__version__ = '2.10.0'\n__build__ = 0x021000\n__author__ = 'Kenneth Reitz'\n__license__ = 'Apache 2.0'\n__copyright__ = 'Copyright 2016 Kenneth Reitz'\n\n# Attempt to enable urllib3's SNI support, if possible\n# Note: Patched by pip to prevent using the PyOpenSSL module. 
On Windows this\n# prevents upgrading cryptography.\n# try:\n# from .packages.urllib3.contrib import pyopenssl\n# pyopenssl.inject_into_urllib3()\n# except ImportError:\n# pass\n\nimport warnings\n\n# urllib3's DependencyWarnings should be silenced.\nfrom .packages.urllib3.exceptions import DependencyWarning\nwarnings.simplefilter('ignore', DependencyWarning)\n\nfrom . import utils\nfrom .models import Request, Response, PreparedRequest\nfrom .api import request, get, head, post, patch, put, delete, options\nfrom .sessions import session, Session\nfrom .status_codes import codes\nfrom .exceptions import (\n RequestException, Timeout, URLRequired,\n TooManyRedirects, HTTPError, ConnectionError,\n FileModeWarning, ConnectTimeout, ReadTimeout\n)\n\n# Set default logging handler to avoid \"No handler found\" warnings.\nimport logging\ntry: # Python 2.7+\n from logging import NullHandler\nexcept ImportError:\n class NullHandler(logging.Handler):\n def emit(self, record):\n pass\n\nlogging.getLogger(__name__).addHandler(NullHandler())\n\nimport warnings\n\n# FileModeWarnings go off per the default.\nwarnings.simplefilter('default', FileModeWarning, append=True)\n", "path": "pip/_vendor/requests/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\npythoncompat\n\"\"\"\n\nfrom .packages import chardet\n\nimport sys\n\n# -------\n# Pythons\n# -------\n\n# Syntax sugar.\n_ver = sys.version_info\n\n#: Python 2.x?\nis_py2 = (_ver[0] == 2)\n\n#: Python 3.x?\nis_py3 = (_ver[0] == 3)\n\n# Note: We've patched out simplejson support in pip because it prevents\n# upgrading simplejson on Windows.\n# try:\n# import simplejson as json\n# except (ImportError, SyntaxError):\n# # simplejson does not support Python 3.2, it throws a SyntaxError\n# # because of u'...' Unicode literals.\nimport json\n\n# ---------\n# Specifics\n# ---------\n\nif is_py2:\n from urllib import quote, unquote, quote_plus, unquote_plus, urlencode, getproxies, proxy_bypass\n from urlparse import urlparse, urlunparse, urljoin, urlsplit, urldefrag\n from urllib2 import parse_http_list\n import cookielib\n from Cookie import Morsel\n from StringIO import StringIO\n from .packages.urllib3.packages.ordered_dict import OrderedDict\n\n builtin_str = str\n bytes = str\n str = unicode\n basestring = basestring\n numeric_types = (int, long, float)\n\nelif is_py3:\n from urllib.parse import urlparse, urlunparse, urljoin, urlsplit, urlencode, quote, unquote, quote_plus, unquote_plus, urldefrag\n from urllib.request import parse_http_list, getproxies, proxy_bypass\n from http import cookiejar as cookielib\n from http.cookies import Morsel\n from io import StringIO\n from collections import OrderedDict\n\n builtin_str = str\n str = str\n bytes = bytes\n basestring = (str, bytes)\n numeric_types = (int, float)\n", "path": "pip/_vendor/requests/compat.py"}]}
2501
417
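The root cause in this record is a Windows loader detail rather than a pip bug as such: importing a C extension maps its `.pyd` into the process via `LoadLibrary` (as the reporter's WinDbg screenshots show), and Windows refuses to delete a mapped DLL, so any upgrade that imports the module first can no longer replace its file. A minimal, Windows-only reproduction sketch — assuming cffi is installed; this is not code from the record:

```python
# Windows-only sketch of the failure mode described in the issue above.
# Importing the extension maps _cffi_backend.pyd into the process via
# LoadLibrary; deleting a mapped DLL then fails with WinError 5
# ("Access is denied") until the process exits.
import importlib
import os

mod = importlib.import_module("_cffi_backend")  # assumes cffi is installed
pyd_path = mod.__file__

try:
    os.remove(pyd_path)  # what an in-place upgrade effectively attempts
except PermissionError as exc:
    print(f"cannot delete a loaded extension: {exc}")
```

That is also why the golden diff comments the imports out rather than catching an error: if pip's vendored requests never imports `pyopenssl` or `simplejson`, their extension modules are never mapped into pip's process, so pip can still overwrite those files while upgrading them.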
gh_patches_debug_24280
rasdani/github-patches
git_diff
zigpy__zha-device-handlers-1098
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [Device Support Request] Legrand Dimmer switch w/o neutral for sw_build_id=002e Hi, guy! I have Legrand Dimmer switch w/o neutral which is not recognized neither as DimmerWithoutNeutral nor as DimmerWithoutNeutral2. The device runs this firmware: ``` hw_version = 7 stack_version = 66 sw_build_id = 002e ``` In this firmware Legrand changed the device type for endpoint=242 from 0x0061 to 0x0066, and they also added a few more clusters: ``` endpoint=1: out: 0x0006, 0x0005 endpoint=242: in: 0x0021 ``` Here is a complete device signature ``` { "node_descriptor": "<NodeDescriptor byte1=17 byte2=64 mac_capability_flags=142 manufacturer_code=4129 maximum_buffer_size=89 maximum_incoming_transfer_size=63 server_mask=10752 maximum_outgoing_transfer_size=63 descriptor_capability_field=0>", "endpoints": { "1": { "profile_id": 260, "device_type": "0x0100", "in_clusters": [ "0x0000", "0x0003", "0x0004", "0x0005", "0x0006", "0x0008", "0x000f", "0xfc01" ], "out_clusters": [ "0x0000", "0x0019", "0xfc01" ] } }, "manufacturer": " Legrand", "model": " Dimmer switch w/o neutral", "class": "zigpy.device.Device" } ``` I've updated the definition of DimmerWithoutNeutral2 (see below) and now the device is properly recognized. ``` class DimmerWithoutNeutral2(DimmerWithoutNeutral): """Dimmer switch w/o neutral 2.""" signature = { # <SimpleDescriptor endpoint=1 profile=260 device_type=256 # device_version=1 # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513] # output_clusters=[0, 64513, 25]> MODELS_INFO: [(f" {LEGRAND}", " Dimmer switch w/o neutral")], ENDPOINTS: { 1: { PROFILE_ID: zha.PROFILE_ID, DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT, INPUT_CLUSTERS: [ Basic.cluster_id, Identify.cluster_id, Groups.cluster_id, OnOff.cluster_id, LevelControl.cluster_id, Scenes.cluster_id, BinaryInput.cluster_id, MANUFACTURER_SPECIFIC_CLUSTER_ID, ], OUTPUT_CLUSTERS: [ Basic.cluster_id, MANUFACTURER_SPECIFIC_CLUSTER_ID, Ota.cluster_id, OnOff.cluster_id, Scenes.cluster_id, ], }, 242: { PROFILE_ID: 41440, DEVICE_TYPE: 0x0066, INPUT_CLUSTERS: [0x0021], OUTPUT_CLUSTERS: [0x0021], }, }, } ``` Please add this quirk to master. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `zhaquirks/legrand/dimmer.py` Content: ``` 1 """Device handler for Legrand Dimmer switch w/o neutral.""" 2 from zigpy.profiles import zha 3 from zigpy.quirks import CustomCluster, CustomDevice 4 import zigpy.types as t 5 from zigpy.zcl.clusters.general import ( 6 Basic, 7 BinaryInput, 8 Groups, 9 Identify, 10 LevelControl, 11 OnOff, 12 Ota, 13 Scenes, 14 ) 15 from zigpy.zcl.clusters.manufacturer_specific import ManufacturerSpecificCluster 16 17 from zhaquirks.const import ( 18 DEVICE_TYPE, 19 ENDPOINTS, 20 INPUT_CLUSTERS, 21 MODELS_INFO, 22 OUTPUT_CLUSTERS, 23 PROFILE_ID, 24 ) 25 from zhaquirks.legrand import LEGRAND 26 27 MANUFACTURER_SPECIFIC_CLUSTER_ID = 0xFC01 # decimal = 64513 28 29 30 class LegrandCluster(CustomCluster, ManufacturerSpecificCluster): 31 """LegrandCluster.""" 32 33 cluster_id = MANUFACTURER_SPECIFIC_CLUSTER_ID 34 name = "LegrandCluster" 35 ep_attribute = "legrand_cluster" 36 manufacturer_attributes = { 37 0x0000: ("dimmer", t.data16), 38 0x0001: ("led_dark", t.Bool), 39 0x0002: ("led_on", t.Bool), 40 } 41 42 43 class DimmerWithoutNeutral(CustomDevice): 44 """Dimmer switch w/o neutral.""" 45 46 signature = { 47 # <SimpleDescriptor endpoint=1 profile=260 device_type=256 48 # device_version=1 49 # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513] 50 # output_clusters=[0, 64513, 25]> 51 MODELS_INFO: [(f" {LEGRAND}", " Dimmer switch w/o neutral")], 52 ENDPOINTS: { 53 1: { 54 PROFILE_ID: zha.PROFILE_ID, 55 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT, 56 INPUT_CLUSTERS: [ 57 Basic.cluster_id, 58 Identify.cluster_id, 59 Groups.cluster_id, 60 OnOff.cluster_id, 61 LevelControl.cluster_id, 62 Scenes.cluster_id, 63 BinaryInput.cluster_id, 64 MANUFACTURER_SPECIFIC_CLUSTER_ID, 65 ], 66 OUTPUT_CLUSTERS: [ 67 Basic.cluster_id, 68 MANUFACTURER_SPECIFIC_CLUSTER_ID, 69 Ota.cluster_id, 70 ], 71 } 72 }, 73 } 74 75 replacement = { 76 ENDPOINTS: { 77 1: { 78 PROFILE_ID: zha.PROFILE_ID, 79 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT, 80 INPUT_CLUSTERS: [ 81 Basic.cluster_id, 82 Identify.cluster_id, 83 Groups.cluster_id, 84 OnOff.cluster_id, 85 LevelControl.cluster_id, 86 Scenes.cluster_id, 87 BinaryInput.cluster_id, 88 LegrandCluster, 89 ], 90 OUTPUT_CLUSTERS: [Basic.cluster_id, LegrandCluster, Ota.cluster_id], 91 } 92 } 93 } 94 95 96 class DimmerWithoutNeutral2(DimmerWithoutNeutral): 97 """Dimmer switch w/o neutral 2.""" 98 99 signature = { 100 # <SimpleDescriptor endpoint=1 profile=260 device_type=256 101 # device_version=1 102 # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513] 103 # output_clusters=[0, 64513, 25]> 104 MODELS_INFO: [(f" {LEGRAND}", " Dimmer switch w/o neutral")], 105 ENDPOINTS: { 106 1: { 107 PROFILE_ID: zha.PROFILE_ID, 108 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT, 109 INPUT_CLUSTERS: [ 110 Basic.cluster_id, 111 Identify.cluster_id, 112 Groups.cluster_id, 113 OnOff.cluster_id, 114 LevelControl.cluster_id, 115 Scenes.cluster_id, 116 BinaryInput.cluster_id, 117 MANUFACTURER_SPECIFIC_CLUSTER_ID, 118 ], 119 OUTPUT_CLUSTERS: [ 120 Basic.cluster_id, 121 MANUFACTURER_SPECIFIC_CLUSTER_ID, 122 Ota.cluster_id, 123 ], 124 }, 125 242: { 126 PROFILE_ID: 41440, 127 DEVICE_TYPE: 0x0061, 128 INPUT_CLUSTERS: [], 129 OUTPUT_CLUSTERS: [0x0021], 130 }, 131 }, 132 } 133 134 135 class DimmerWithNeutral(DimmerWithoutNeutral): 136 """Dimmer switch with neutral.""" 137 138 signature = { 139 # <SimpleDescriptor endpoint=1 profile=260 device_type=256 140 # device_version=1 141 # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513] 142 # output_clusters=[0, 25, 64513]> 143 MODELS_INFO: [(f" 
{LEGRAND}", " Dimmer switch with neutral")], 144 ENDPOINTS: { 145 1: { 146 PROFILE_ID: zha.PROFILE_ID, 147 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT, 148 INPUT_CLUSTERS: [ 149 Basic.cluster_id, 150 Identify.cluster_id, 151 Groups.cluster_id, 152 OnOff.cluster_id, 153 LevelControl.cluster_id, 154 Scenes.cluster_id, 155 BinaryInput.cluster_id, 156 MANUFACTURER_SPECIFIC_CLUSTER_ID, 157 ], 158 OUTPUT_CLUSTERS: [ 159 Basic.cluster_id, 160 MANUFACTURER_SPECIFIC_CLUSTER_ID, 161 Ota.cluster_id, 162 ], 163 }, 164 242: { 165 PROFILE_ID: 41440, 166 DEVICE_TYPE: 0x0066, 167 INPUT_CLUSTERS: [0x0021], 168 OUTPUT_CLUSTERS: [0x0021], 169 }, 170 }, 171 } 172 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```

diff --git a/zhaquirks/legrand/dimmer.py b/zhaquirks/legrand/dimmer.py
--- a/zhaquirks/legrand/dimmer.py
+++ b/zhaquirks/legrand/dimmer.py
@@ -132,6 +132,47 @@
     }
 
 
+class DimmerWithoutNeutral3(DimmerWithoutNeutral):
+    """Dimmer switch w/o neutral (at least for firmware 0x2e3)."""
+
+    signature = {
+        # <SimpleDescriptor endpoint=1 profile=260 device_type=256
+        # device_version=1
+        # input_clusters=[0, 3, 4, 5, 6, 8, 15, 64513]
+        # output_clusters=[0, 5, 6, 25, 64513]>
+        MODELS_INFO: [(f" {LEGRAND}", " Dimmer switch w/o neutral")],
+        ENDPOINTS: {
+            1: {
+                PROFILE_ID: zha.PROFILE_ID,
+                DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,
+                INPUT_CLUSTERS: [
+                    Basic.cluster_id,
+                    Identify.cluster_id,
+                    Groups.cluster_id,
+                    OnOff.cluster_id,
+                    LevelControl.cluster_id,
+                    Scenes.cluster_id,
+                    BinaryInput.cluster_id,
+                    MANUFACTURER_SPECIFIC_CLUSTER_ID,
+                ],
+                OUTPUT_CLUSTERS: [
+                    Basic.cluster_id,
+                    MANUFACTURER_SPECIFIC_CLUSTER_ID,
+                    Ota.cluster_id,
+                    OnOff.cluster_id,
+                    Scenes.cluster_id,
+                ],
+            },
+            242: {
+                PROFILE_ID: 41440,
+                DEVICE_TYPE: 0x0066,
+                INPUT_CLUSTERS: [0x0021],
+                OUTPUT_CLUSTERS: [0x0021],
+            },
+        },
+    }
+
+
 class DimmerWithNeutral(DimmerWithoutNeutral):
     """Dimmer switch with neutral."""
{"golden_diff": "diff --git a/zhaquirks/legrand/dimmer.py b/zhaquirks/legrand/dimmer.py\n--- a/zhaquirks/legrand/dimmer.py\n+++ b/zhaquirks/legrand/dimmer.py\n@@ -132,6 +132,47 @@\n }\n \n \n+class DimmerWithoutNeutral3(DimmerWithoutNeutral):\n+ \"\"\"Dimmer switch w/o neutral (at least for firmware 0x2e3).\"\"\"\n+\n+ signature = {\n+ # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n+ # device_version=1\n+ # input_clusters=[0, 3, 4, 5, 6, 8, 15, 64513]\n+ # output_clusters=[0, 5, 6, 25, 64513]>\n+ MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n+ ENDPOINTS: {\n+ 1: {\n+ PROFILE_ID: zha.PROFILE_ID,\n+ DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n+ INPUT_CLUSTERS: [\n+ Basic.cluster_id,\n+ Identify.cluster_id,\n+ Groups.cluster_id,\n+ OnOff.cluster_id,\n+ LevelControl.cluster_id,\n+ Scenes.cluster_id,\n+ BinaryInput.cluster_id,\n+ MANUFACTURER_SPECIFIC_CLUSTER_ID,\n+ ],\n+ OUTPUT_CLUSTERS: [\n+ Basic.cluster_id,\n+ MANUFACTURER_SPECIFIC_CLUSTER_ID,\n+ Ota.cluster_id,\n+ OnOff.cluster_id,\n+ Scenes.cluster_id,\n+ ],\n+ },\n+ 242: {\n+ PROFILE_ID: 41440,\n+ DEVICE_TYPE: 0x0066,\n+ INPUT_CLUSTERS: [0x0021],\n+ OUTPUT_CLUSTERS: [0x0021],\n+ },\n+ },\n+ }\n+\n+\n class DimmerWithNeutral(DimmerWithoutNeutral):\n \"\"\"Dimmer switch with neutral.\"\"\"\n", "issue": "[Device Support Request] Legrand Dimmer switch w/o neutral for sw_build_id=002e\nHi, guy!\r\n\r\nI have Legrand Dimmer switch w/o neutral which is not recognized neither as DimmerWithoutNeutral nor as DimmerWithoutNeutral2.\r\n\r\nThe device runs this firmware:\r\n```\r\nhw_version = 7\r\nstack_version = 66\r\nsw_build_id = 002e\r\n```\r\n\r\nIn this firmware Legrand changed the device type for endpoint=242 from 0x0061 to 0x0066, and they also added a few more clusters:\r\n```\r\nendpoint=1: out: 0x0006, 0x0005\r\nendpoint=242: in: 0x0021\r\n```\r\n\r\nHere is a complete device signature\r\n```\r\n{\r\n \"node_descriptor\": \"<NodeDescriptor byte1=17 byte2=64 mac_capability_flags=142 manufacturer_code=4129 maximum_buffer_size=89 maximum_incoming_transfer_size=63 server_mask=10752 maximum_outgoing_transfer_size=63 descriptor_capability_field=0>\",\r\n \"endpoints\": {\r\n \"1\": {\r\n \"profile_id\": 260,\r\n \"device_type\": \"0x0100\",\r\n \"in_clusters\": [\r\n \"0x0000\",\r\n \"0x0003\",\r\n \"0x0004\",\r\n \"0x0005\",\r\n \"0x0006\",\r\n \"0x0008\",\r\n \"0x000f\",\r\n \"0xfc01\"\r\n ],\r\n \"out_clusters\": [\r\n \"0x0000\",\r\n \"0x0019\",\r\n \"0xfc01\"\r\n ]\r\n }\r\n },\r\n \"manufacturer\": \" Legrand\",\r\n \"model\": \" Dimmer switch w/o neutral\",\r\n \"class\": \"zigpy.device.Device\"\r\n}\r\n```\r\n\r\nI've updated the definition of DimmerWithoutNeutral2 (see below) and now the device is properly recognized.\r\n\r\n```\r\nclass DimmerWithoutNeutral2(DimmerWithoutNeutral):\r\n \"\"\"Dimmer switch w/o neutral 2.\"\"\"\r\n\r\n signature = {\r\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\r\n # device_version=1\r\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\r\n # output_clusters=[0, 64513, 25]>\r\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\r\n ENDPOINTS: {\r\n 1: {\r\n PROFILE_ID: zha.PROFILE_ID,\r\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\r\n INPUT_CLUSTERS: [\r\n Basic.cluster_id,\r\n Identify.cluster_id,\r\n Groups.cluster_id,\r\n OnOff.cluster_id,\r\n LevelControl.cluster_id,\r\n Scenes.cluster_id,\r\n BinaryInput.cluster_id,\r\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\r\n ],\r\n OUTPUT_CLUSTERS: [\r\n Basic.cluster_id,\r\n 
MANUFACTURER_SPECIFIC_CLUSTER_ID,\r\n Ota.cluster_id,\r\n OnOff.cluster_id,\r\n Scenes.cluster_id,\r\n ],\r\n },\r\n 242: {\r\n PROFILE_ID: 41440,\r\n DEVICE_TYPE: 0x0066,\r\n INPUT_CLUSTERS: [0x0021],\r\n OUTPUT_CLUSTERS: [0x0021],\r\n },\r\n },\r\n }\r\n```\r\n\r\nPlease add this quirk to master. \n", "before_files": [{"content": "\"\"\"Device handler for Legrand Dimmer switch w/o neutral.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomCluster, CustomDevice\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import (\n Basic,\n BinaryInput,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n Scenes,\n)\nfrom zigpy.zcl.clusters.manufacturer_specific import ManufacturerSpecificCluster\n\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\nfrom zhaquirks.legrand import LEGRAND\n\nMANUFACTURER_SPECIFIC_CLUSTER_ID = 0xFC01 # decimal = 64513\n\n\nclass LegrandCluster(CustomCluster, ManufacturerSpecificCluster):\n \"\"\"LegrandCluster.\"\"\"\n\n cluster_id = MANUFACTURER_SPECIFIC_CLUSTER_ID\n name = \"LegrandCluster\"\n ep_attribute = \"legrand_cluster\"\n manufacturer_attributes = {\n 0x0000: (\"dimmer\", t.data16),\n 0x0001: (\"led_dark\", t.Bool),\n 0x0002: (\"led_on\", t.Bool),\n }\n\n\nclass DimmerWithoutNeutral(CustomDevice):\n \"\"\"Dimmer switch w/o neutral.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 64513, 25]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n LegrandCluster,\n ],\n OUTPUT_CLUSTERS: [Basic.cluster_id, LegrandCluster, Ota.cluster_id],\n }\n }\n }\n\n\nclass DimmerWithoutNeutral2(DimmerWithoutNeutral):\n \"\"\"Dimmer switch w/o neutral 2.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 64513, 25]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n },\n 242: {\n PROFILE_ID: 41440,\n DEVICE_TYPE: 0x0061,\n INPUT_CLUSTERS: [],\n OUTPUT_CLUSTERS: [0x0021],\n },\n },\n }\n\n\nclass DimmerWithNeutral(DimmerWithoutNeutral):\n \"\"\"Dimmer switch with neutral.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # 
device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 25, 64513]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch with neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n },\n 242: {\n PROFILE_ID: 41440,\n DEVICE_TYPE: 0x0066,\n INPUT_CLUSTERS: [0x0021],\n OUTPUT_CLUSTERS: [0x0021],\n },\n },\n }\n", "path": "zhaquirks/legrand/dimmer.py"}], "after_files": [{"content": "\"\"\"Device handler for Legrand Dimmer switch w/o neutral.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomCluster, CustomDevice\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import (\n Basic,\n BinaryInput,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n Scenes,\n)\nfrom zigpy.zcl.clusters.manufacturer_specific import ManufacturerSpecificCluster\n\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\nfrom zhaquirks.legrand import LEGRAND\n\nMANUFACTURER_SPECIFIC_CLUSTER_ID = 0xFC01 # decimal = 64513\n\n\nclass LegrandCluster(CustomCluster, ManufacturerSpecificCluster):\n \"\"\"LegrandCluster.\"\"\"\n\n cluster_id = MANUFACTURER_SPECIFIC_CLUSTER_ID\n name = \"LegrandCluster\"\n ep_attribute = \"legrand_cluster\"\n manufacturer_attributes = {\n 0x0000: (\"dimmer\", t.data16),\n 0x0001: (\"led_dark\", t.Bool),\n 0x0002: (\"led_on\", t.Bool),\n }\n\n\nclass DimmerWithoutNeutral(CustomDevice):\n \"\"\"Dimmer switch w/o neutral.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 64513, 25]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n LegrandCluster,\n ],\n OUTPUT_CLUSTERS: [Basic.cluster_id, LegrandCluster, Ota.cluster_id],\n }\n }\n }\n\n\nclass DimmerWithoutNeutral2(DimmerWithoutNeutral):\n \"\"\"Dimmer switch w/o neutral 2.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 64513, 25]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n 
BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n },\n 242: {\n PROFILE_ID: 41440,\n DEVICE_TYPE: 0x0061,\n INPUT_CLUSTERS: [],\n OUTPUT_CLUSTERS: [0x0021],\n },\n },\n }\n\n\nclass DimmerWithoutNeutral3(DimmerWithoutNeutral):\n \"\"\"Dimmer switch w/o neutral (at least for firmware 0x2e3).\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 5, 6, 8, 15, 64513]\n # output_clusters=[0, 5, 6, 25, 64513]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n OnOff.cluster_id,\n Scenes.cluster_id,\n ],\n },\n 242: {\n PROFILE_ID: 41440,\n DEVICE_TYPE: 0x0066,\n INPUT_CLUSTERS: [0x0021],\n OUTPUT_CLUSTERS: [0x0021],\n },\n },\n }\n\n\nclass DimmerWithNeutral(DimmerWithoutNeutral):\n \"\"\"Dimmer switch with neutral.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 25, 64513]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch with neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n },\n 242: {\n PROFILE_ID: 41440,\n DEVICE_TYPE: 0x0066,\n INPUT_CLUSTERS: [0x0021],\n OUTPUT_CLUSTERS: [0x0021],\n },\n },\n }\n", "path": "zhaquirks/legrand/dimmer.py"}]}
2731
468
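The reason this record adds a whole new `DimmerWithoutNeutral3` class rather than loosening the old one is that quirk signatures are matched strictly: every endpoint's profile, device type, and cluster lists must equal what the device reports, so the firmware's switch of endpoint 242 from device type 0x0061 to 0x0066 (plus the added input cluster 0x0021) disqualifies both earlier classes. A plain-dict illustration of that comparison — not zigpy's actual matching code:

```python
# Illustrative only -- not zigpy internals. Both endpoint-242 descriptors are
# taken from the record: the old quirk signature versus what the 002e
# firmware actually reports.
old_sig_ep242 = {"profile_id": 41440, "device_type": 0x0061,
                 "input_clusters": [], "output_clusters": [0x0021]}
reported_ep242 = {"profile_id": 41440, "device_type": 0x0066,
                  "input_clusters": [0x0021], "output_clusters": [0x0021]}

mismatched = {field for field in old_sig_ep242
              if old_sig_ep242[field] != reported_ep242[field]}
print(mismatched)  # {'device_type', 'input_clusters'} -> quirk does not apply
```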
gh_patches_debug_59429
rasdani/github-patches
git_diff
Textualize__rich-3105
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [BUG] `font-family` ignored in `html_export` due to user agent stylesheet for `<code>` - [X] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions. - [X] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md). **Describe the bug** Run this code: ```py import rich.console try: test = 1 raise Exception() except Exception: console = rich.console.Console(record=True) console.print_exception(show_locals=True) html = console.export_html(inline_styles=True) with open("test.html", "w") as html_file: html_file.write(html) ``` You will get an `test.html` output file. Open it in Chrome. I'm on macOS, and it shows up like this: ![image](https://github.com/Textualize/rich/assets/26592486/4b124132-b7a9-4156-bfd9-8912c65f2764) Notice the lines are not aligned properly on the right side. Here is why: ![image](https://github.com/Textualize/rich/assets/26592486/8d6e13e6-2124-46e2-972d-1d4a31256615) As you can see, Chrome's user agent stylesheet causes the `<code>` element to reset the `font-family` on the `<pre>` element back to `monospace`. All we need is to have Rich add a `font-family: inherit;` on the `<code>` element and everything is fine: ![image](https://github.com/Textualize/rich/assets/26592486/ed1c2e6e-7d89-4d39-8301-cc92679458d9) **Platform** <details> <summary>Click to expand</summary> What platform (Win/Linux/Mac) are you running on? What terminal software are you using? Mac with Chrome ``` ❯ python -m rich.diagnose ╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮ │ A high level console interface. │ │ │ │ ╭──────────────────────────────────────────────────────────────────────────────╮ │ │ │ <console width=148 ColorSystem.TRUECOLOR> │ │ │ ╰──────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ color_system = 'truecolor' │ │ encoding = 'utf-8' │ │ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │ │ height = 87 │ │ is_alt_screen = False │ │ is_dumb_terminal = False │ │ is_interactive = True │ │ is_jupyter = False │ │ is_terminal = True │ │ legacy_windows = False │ │ no_color = False │ │ options = ConsoleOptions( │ │ size=ConsoleDimensions(width=148, height=87), │ │ legacy_windows=False, │ │ min_width=1, │ │ max_width=148, │ │ is_terminal=True, │ │ encoding='utf-8', │ │ max_height=87, │ │ justify=None, │ │ overflow=None, │ │ no_wrap=False, │ │ highlight=None, │ │ markup=None, │ │ height=None │ │ ) │ │ quiet = False │ │ record = False │ │ safe_box = True │ │ size = ConsoleDimensions(width=148, height=87) │ │ soft_wrap = False │ │ stderr = False │ │ style = None │ │ tab_size = 8 │ │ width = 148 │ ╰──────────────────────────────────────────────────────────────────────────────────╯ ╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮ │ Windows features available. 
│ │ │ │ ╭───────────────────────────────────────────────────╮ │ │ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │ │ ╰───────────────────────────────────────────────────╯ │ │ │ │ truecolor = False │ │ vt = False │ ╰───────────────────────────────────────────────────────╯ ╭────── Environment Variables ───────╮ │ { │ │ 'TERM': 'xterm-256color', │ │ 'COLORTERM': 'truecolor', │ │ 'CLICOLOR': None, │ │ 'NO_COLOR': None, │ │ 'TERM_PROGRAM': 'vscode', │ │ 'COLUMNS': None, │ │ 'LINES': None, │ │ 'JUPYTER_COLUMNS': None, │ │ 'JUPYTER_LINES': None, │ │ 'JPY_PARENT_PID': None, │ │ 'VSCODE_VERBOSE_LOGGING': None │ │ } │ ╰────────────────────────────────────╯ platform="Darwin" ❯ python -m pip freeze | grep rich rich==13.4.2 ``` </details> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `rich/_export_format.py` Content: ``` 1 CONSOLE_HTML_FORMAT = """\ 2 <!DOCTYPE html> 3 <html> 4 <head> 5 <meta charset="UTF-8"> 6 <style> 7 {stylesheet} 8 body {{ 9 color: {foreground}; 10 background-color: {background}; 11 }} 12 </style> 13 </head> 14 <body> 15 <pre style="font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><code>{code}</code></pre> 16 </body> 17 </html> 18 """ 19 20 CONSOLE_SVG_FORMAT = """\ 21 <svg class="rich-terminal" viewBox="0 0 {width} {height}" xmlns="http://www.w3.org/2000/svg"> 22 <!-- Generated with Rich https://www.textualize.io --> 23 <style> 24 25 @font-face {{ 26 font-family: "Fira Code"; 27 src: local("FiraCode-Regular"), 28 url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Regular.woff2") format("woff2"), 29 url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Regular.woff") format("woff"); 30 font-style: normal; 31 font-weight: 400; 32 }} 33 @font-face {{ 34 font-family: "Fira Code"; 35 src: local("FiraCode-Bold"), 36 url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Bold.woff2") format("woff2"), 37 url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Bold.woff") format("woff"); 38 font-style: bold; 39 font-weight: 700; 40 }} 41 42 .{unique_id}-matrix {{ 43 font-family: Fira Code, monospace; 44 font-size: {char_height}px; 45 line-height: {line_height}px; 46 font-variant-east-asian: full-width; 47 }} 48 49 .{unique_id}-title {{ 50 font-size: 18px; 51 font-weight: bold; 52 font-family: arial; 53 }} 54 55 {styles} 56 </style> 57 58 <defs> 59 <clipPath id="{unique_id}-clip-terminal"> 60 <rect x="0" y="0" width="{terminal_width}" height="{terminal_height}" /> 61 </clipPath> 62 {lines} 63 </defs> 64 65 {chrome} 66 <g transform="translate({terminal_x}, {terminal_y})" clip-path="url(#{unique_id}-clip-terminal)"> 67 {backgrounds} 68 <g class="{unique_id}-matrix"> 69 {matrix} 70 </g> 71 </g> 72 </svg> 73 """ 74 75 _SVG_FONT_FAMILY = "Rich Fira Code" 76 _SVG_CLASSES_PREFIX = "rich-svg" 77 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/rich/_export_format.py b/rich/_export_format.py --- a/rich/_export_format.py +++ b/rich/_export_format.py @@ -12,7 +12,7 @@ </style> </head> <body> - <pre style="font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><code>{code}</code></pre> + <pre style="font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><code style="font-family:inherit">{code}</code></pre> </body> </html> """
{"golden_diff": "diff --git a/rich/_export_format.py b/rich/_export_format.py\n--- a/rich/_export_format.py\n+++ b/rich/_export_format.py\n@@ -12,7 +12,7 @@\n </style>\n </head>\n <body>\n- <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><code>{code}</code></pre>\n+ <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><code style=\"font-family:inherit\">{code}</code></pre>\n </body>\n </html>\n \"\"\"\n", "issue": "[BUG] `font-family` ignored in `html_export` due to user agent stylesheet for `<code>`\n- [X] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.\r\n- [X] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).\r\n\r\n**Describe the bug**\r\n\r\nRun this code:\r\n\r\n```py\r\nimport rich.console\r\n\r\ntry:\r\n test = 1\r\n raise Exception()\r\nexcept Exception:\r\n console = rich.console.Console(record=True)\r\n console.print_exception(show_locals=True)\r\n html = console.export_html(inline_styles=True)\r\n with open(\"test.html\", \"w\") as html_file:\r\n html_file.write(html)\r\n```\r\n\r\nYou will get an `test.html` output file. Open it in Chrome.\r\n\r\nI'm on macOS, and it shows up like this:\r\n\r\n![image](https://github.com/Textualize/rich/assets/26592486/4b124132-b7a9-4156-bfd9-8912c65f2764)\r\n\r\n\r\nNotice the lines are not aligned properly on the right side. Here is why:\r\n\r\n![image](https://github.com/Textualize/rich/assets/26592486/8d6e13e6-2124-46e2-972d-1d4a31256615)\r\n\r\nAs you can see, Chrome's user agent stylesheet causes the `<code>` element to reset the `font-family` on the `<pre>` element back to `monospace`. All we need is to have Rich add a `font-family: inherit;` on the `<code>` element and everything is fine:\r\n\r\n![image](https://github.com/Textualize/rich/assets/26592486/ed1c2e6e-7d89-4d39-8301-cc92679458d9)\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nWhat platform (Win/Linux/Mac) are you running on? What terminal software are you using?\r\nMac with Chrome\r\n\r\n```\r\n\u276f python -m rich.diagnose\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 <console width=148 ColorSystem.TRUECOLOR> \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'truecolor' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 87 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=148, height=87), \u2502\r\n\u2502 legacy_windows=False, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=148, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=87, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=148, height=87) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 148 \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'xterm-256color', \u2502\r\n\u2502 'COLORTERM': 'truecolor', \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': 'vscode', \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Darwin\"\r\n\r\n\u276f python -m pip freeze | grep rich\r\nrich==13.4.2\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "CONSOLE_HTML_FORMAT = \"\"\"\\\n<!DOCTYPE html>\n<html>\n<head>\n<meta charset=\"UTF-8\">\n<style>\n{stylesheet}\nbody {{\n color: {foreground};\n background-color: {background};\n}}\n</style>\n</head>\n<body>\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><code>{code}</code></pre>\n</body>\n</html>\n\"\"\"\n\nCONSOLE_SVG_FORMAT = \"\"\"\\\n<svg class=\"rich-terminal\" viewBox=\"0 0 {width} {height}\" xmlns=\"http://www.w3.org/2000/svg\">\n <!-- Generated with Rich https://www.textualize.io -->\n <style>\n\n @font-face {{\n font-family: \"Fira Code\";\n src: local(\"FiraCode-Regular\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Regular.woff2\") format(\"woff2\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Regular.woff\") format(\"woff\");\n font-style: normal;\n font-weight: 400;\n }}\n @font-face {{\n font-family: \"Fira Code\";\n src: local(\"FiraCode-Bold\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Bold.woff2\") format(\"woff2\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Bold.woff\") format(\"woff\");\n font-style: bold;\n font-weight: 700;\n }}\n\n .{unique_id}-matrix {{\n font-family: Fira Code, monospace;\n font-size: {char_height}px;\n line-height: {line_height}px;\n 
font-variant-east-asian: full-width;\n }}\n\n .{unique_id}-title {{\n font-size: 18px;\n font-weight: bold;\n font-family: arial;\n }}\n\n {styles}\n </style>\n\n <defs>\n <clipPath id=\"{unique_id}-clip-terminal\">\n <rect x=\"0\" y=\"0\" width=\"{terminal_width}\" height=\"{terminal_height}\" />\n </clipPath>\n {lines}\n </defs>\n\n {chrome}\n <g transform=\"translate({terminal_x}, {terminal_y})\" clip-path=\"url(#{unique_id}-clip-terminal)\">\n {backgrounds}\n <g class=\"{unique_id}-matrix\">\n {matrix}\n </g>\n </g>\n</svg>\n\"\"\"\n\n_SVG_FONT_FAMILY = \"Rich Fira Code\"\n_SVG_CLASSES_PREFIX = \"rich-svg\"\n", "path": "rich/_export_format.py"}], "after_files": [{"content": "CONSOLE_HTML_FORMAT = \"\"\"\\\n<!DOCTYPE html>\n<html>\n<head>\n<meta charset=\"UTF-8\">\n<style>\n{stylesheet}\nbody {{\n color: {foreground};\n background-color: {background};\n}}\n</style>\n</head>\n<body>\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><code style=\"font-family:inherit\">{code}</code></pre>\n</body>\n</html>\n\"\"\"\n\nCONSOLE_SVG_FORMAT = \"\"\"\\\n<svg class=\"rich-terminal\" viewBox=\"0 0 {width} {height}\" xmlns=\"http://www.w3.org/2000/svg\">\n <!-- Generated with Rich https://www.textualize.io -->\n <style>\n\n @font-face {{\n font-family: \"Fira Code\";\n src: local(\"FiraCode-Regular\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Regular.woff2\") format(\"woff2\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Regular.woff\") format(\"woff\");\n font-style: normal;\n font-weight: 400;\n }}\n @font-face {{\n font-family: \"Fira Code\";\n src: local(\"FiraCode-Bold\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Bold.woff2\") format(\"woff2\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Bold.woff\") format(\"woff\");\n font-style: bold;\n font-weight: 700;\n }}\n\n .{unique_id}-matrix {{\n font-family: Fira Code, monospace;\n font-size: {char_height}px;\n line-height: {line_height}px;\n font-variant-east-asian: full-width;\n }}\n\n .{unique_id}-title {{\n font-size: 18px;\n font-weight: bold;\n font-family: arial;\n }}\n\n {styles}\n </style>\n\n <defs>\n <clipPath id=\"{unique_id}-clip-terminal\">\n <rect x=\"0\" y=\"0\" width=\"{terminal_width}\" height=\"{terminal_height}\" />\n </clipPath>\n {lines}\n </defs>\n\n {chrome}\n <g transform=\"translate({terminal_x}, {terminal_y})\" clip-path=\"url(#{unique_id}-clip-terminal)\">\n {backgrounds}\n <g class=\"{unique_id}-matrix\">\n {matrix}\n </g>\n </g>\n</svg>\n\"\"\"\n\n_SVG_FONT_FAMILY = \"Rich Fira Code\"\n_SVG_CLASSES_PREFIX = \"rich-svg\"\n", "path": "rich/_export_format.py"}]}
2314
140
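The rich fix above is a single-attribute change: browser user-agent stylesheets give `<code>` its own `font-family: monospace`, which overrides the inline font stack on the enclosing `<pre>`, so the `<code>` element has to inherit it explicitly. A minimal check against an export (this only prints `True` on a rich build that includes the patched template):

```python
# Sketch: export recorded output and look for the inherited font-family on
# the <code> tag added by the patch above.
import rich.console

console = rich.console.Console(record=True)
console.print("hello")
html = console.export_html(inline_styles=True)
print('<code style="font-family:inherit">' in html)
```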
gh_patches_debug_13093
rasdani/github-patches
git_diff
pre-commit__pre-commit-1363
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- cache + virtualenv>=20 / python_venv + moving executables -> File not found: ... python failure mode looks something like this: ``` Check for added large files..............................................Failed - hook id: check-added-large-files - exit code: 1 Executable `/Users/runner/.cache/pre-commit/repo14qw_y0i/py_env-python3.8/bin/python` not found ``` currently this is a common failure for github actions caches, there's ~2 ways to work around this: 1. [add `$(which python)`](https://github.com/pre-commit/action/commit/ee269b64a608de770696d23079f46238c2f7ab5a) to the pre-commit cache key 2. [manually bump](https://github.com/pypa/pip/pull/7750/files) the pre-commit cache key but pre-commit should more gracefully detect this in the [`healthy()`](https://github.com/pre-commit/pre-commit/blob/0a8ba31b9b6656d90f94fc368b47acb502cea44d/pre_commit/languages/python.py#L160-L168) function (which is designed to catch these sorts of system breakages) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pre_commit/languages/python.py` Content: ``` 1 import contextlib 2 import functools 3 import os 4 import sys 5 from typing import Callable 6 from typing import ContextManager 7 from typing import Generator 8 from typing import Optional 9 from typing import Sequence 10 from typing import Tuple 11 12 import pre_commit.constants as C 13 from pre_commit.envcontext import envcontext 14 from pre_commit.envcontext import PatchesT 15 from pre_commit.envcontext import UNSET 16 from pre_commit.envcontext import Var 17 from pre_commit.hook import Hook 18 from pre_commit.languages import helpers 19 from pre_commit.parse_shebang import find_executable 20 from pre_commit.prefix import Prefix 21 from pre_commit.util import CalledProcessError 22 from pre_commit.util import clean_path_on_failure 23 from pre_commit.util import cmd_output 24 from pre_commit.util import cmd_output_b 25 26 ENVIRONMENT_DIR = 'py_env' 27 28 29 def bin_dir(venv: str) -> str: 30 """On windows there's a different directory for the virtualenv""" 31 bin_part = 'Scripts' if os.name == 'nt' else 'bin' 32 return os.path.join(venv, bin_part) 33 34 35 def get_env_patch(venv: str) -> PatchesT: 36 return ( 37 ('PYTHONHOME', UNSET), 38 ('VIRTUAL_ENV', venv), 39 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))), 40 ) 41 42 43 def _find_by_py_launcher( 44 version: str, 45 ) -> Optional[str]: # pragma: no cover (windows only) 46 if version.startswith('python'): 47 num = version[len('python'):] 48 try: 49 cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)') 50 return cmd_output(*cmd)[1].strip() 51 except CalledProcessError: 52 pass 53 return None 54 55 56 def _find_by_sys_executable() -> Optional[str]: 57 def _norm(path: str) -> Optional[str]: 58 _, exe = os.path.split(path.lower()) 59 exe, _, _ = exe.partition('.exe') 60 if exe not in {'python', 'pythonw'} and find_executable(exe): 61 return exe 62 return None 63 64 # On linux, I see these common sys.executables: 65 # 66 # system `python`: /usr/bin/python -> python2.7 67 # system `python2`: /usr/bin/python2 -> python2.7 68 # virtualenv v: v/bin/python (will not return from this loop) 69 # virtualenv v -ppython2: v/bin/python -> python2 70 # virtualenv v -ppython2.7: v/bin/python -> python2.7 71 # virtualenv v -ppypy: v/bin/python -> v/bin/pypy 72 for path in (sys.executable, 
os.path.realpath(sys.executable)): 73 exe = _norm(path) 74 if exe: 75 return exe 76 return None 77 78 79 @functools.lru_cache(maxsize=1) 80 def get_default_version() -> str: # pragma: no cover (platform dependent) 81 # First attempt from `sys.executable` (or the realpath) 82 exe = _find_by_sys_executable() 83 if exe: 84 return exe 85 86 # Next try the `pythonX.X` executable 87 exe = f'python{sys.version_info[0]}.{sys.version_info[1]}' 88 if find_executable(exe): 89 return exe 90 91 if _find_by_py_launcher(exe): 92 return exe 93 94 # Give a best-effort try for windows 95 default_folder_name = exe.replace('.', '') 96 if os.path.exists(fr'C:\{default_folder_name}\python.exe'): 97 return exe 98 99 # We tried! 100 return C.DEFAULT 101 102 103 def _sys_executable_matches(version: str) -> bool: 104 if version == 'python': 105 return True 106 elif not version.startswith('python'): 107 return False 108 109 try: 110 info = tuple(int(p) for p in version[len('python'):].split('.')) 111 except ValueError: 112 return False 113 114 return sys.version_info[:len(info)] == info 115 116 117 def norm_version(version: str) -> str: 118 # first see if our current executable is appropriate 119 if _sys_executable_matches(version): 120 return sys.executable 121 122 if os.name == 'nt': # pragma: no cover (windows) 123 version_exec = _find_by_py_launcher(version) 124 if version_exec: 125 return version_exec 126 127 # Try looking up by name 128 version_exec = find_executable(version) 129 if version_exec and version_exec != version: 130 return version_exec 131 132 # If it is in the form pythonx.x search in the default 133 # place on windows 134 if version.startswith('python'): 135 default_folder_name = version.replace('.', '') 136 return fr'C:\{default_folder_name}\python.exe' 137 138 # Otherwise assume it is a path 139 return os.path.expanduser(version) 140 141 142 def py_interface( 143 _dir: str, 144 _make_venv: Callable[[str, str], None], 145 ) -> Tuple[ 146 Callable[[Prefix, str], ContextManager[None]], 147 Callable[[Prefix, str], bool], 148 Callable[[Hook, Sequence[str], bool], Tuple[int, bytes]], 149 Callable[[Prefix, str, Sequence[str]], None], 150 ]: 151 @contextlib.contextmanager 152 def in_env( 153 prefix: Prefix, 154 language_version: str, 155 ) -> Generator[None, None, None]: 156 envdir = prefix.path(helpers.environment_dir(_dir, language_version)) 157 with envcontext(get_env_patch(envdir)): 158 yield 159 160 def healthy(prefix: Prefix, language_version: str) -> bool: 161 with in_env(prefix, language_version): 162 retcode, _, _ = cmd_output_b( 163 'python', '-c', 164 'import ctypes, datetime, io, os, ssl, weakref', 165 cwd='/', 166 retcode=None, 167 ) 168 return retcode == 0 169 170 def run_hook( 171 hook: Hook, 172 file_args: Sequence[str], 173 color: bool, 174 ) -> Tuple[int, bytes]: 175 with in_env(hook.prefix, hook.language_version): 176 return helpers.run_xargs(hook, hook.cmd, file_args, color=color) 177 178 def install_environment( 179 prefix: Prefix, 180 version: str, 181 additional_dependencies: Sequence[str], 182 ) -> None: 183 additional_dependencies = tuple(additional_dependencies) 184 directory = helpers.environment_dir(_dir, version) 185 186 env_dir = prefix.path(directory) 187 with clean_path_on_failure(env_dir): 188 if version != C.DEFAULT: 189 python = norm_version(version) 190 else: 191 python = os.path.realpath(sys.executable) 192 _make_venv(env_dir, python) 193 with in_env(prefix, version): 194 helpers.run_setup_cmd( 195 prefix, ('pip', 'install', '.') + additional_dependencies, 196 ) 197 
198 return in_env, healthy, run_hook, install_environment 199 200 201 def make_venv(envdir: str, python: str) -> None: 202 env = dict(os.environ, VIRTUALENV_NO_DOWNLOAD='1') 203 cmd = (sys.executable, '-mvirtualenv', envdir, '-p', python) 204 cmd_output_b(*cmd, env=env, cwd='/') 205 206 207 _interface = py_interface(ENVIRONMENT_DIR, make_venv) 208 in_env, healthy, run_hook, install_environment = _interface 209 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py --- a/pre_commit/languages/python.py +++ b/pre_commit/languages/python.py @@ -158,10 +158,12 @@ yield def healthy(prefix: Prefix, language_version: str) -> bool: + envdir = helpers.environment_dir(_dir, language_version) + exe_name = 'python.exe' if sys.platform == 'win32' else 'python' + py_exe = prefix.path(bin_dir(envdir), exe_name) with in_env(prefix, language_version): retcode, _, _ = cmd_output_b( - 'python', '-c', - 'import ctypes, datetime, io, os, ssl, weakref', + py_exe, '-c', 'import ctypes, datetime, io, os, ssl, weakref', cwd='/', retcode=None, )
{"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -158,10 +158,12 @@\n yield\n \n def healthy(prefix: Prefix, language_version: str) -> bool:\n+ envdir = helpers.environment_dir(_dir, language_version)\n+ exe_name = 'python.exe' if sys.platform == 'win32' else 'python'\n+ py_exe = prefix.path(bin_dir(envdir), exe_name)\n with in_env(prefix, language_version):\n retcode, _, _ = cmd_output_b(\n- 'python', '-c',\n- 'import ctypes, datetime, io, os, ssl, weakref',\n+ py_exe, '-c', 'import ctypes, datetime, io, os, ssl, weakref',\n cwd='/',\n retcode=None,\n )\n", "issue": "cache + virtualenv>=20 / python_venv + moving executables -> File not found: ... python\nfailure mode looks something like this:\r\n\r\n```\r\nCheck for added large files..............................................Failed\r\n- hook id: check-added-large-files\r\n- exit code: 1\r\n\r\nExecutable `/Users/runner/.cache/pre-commit/repo14qw_y0i/py_env-python3.8/bin/python` not found\r\n```\r\n\r\ncurrently this is a common failure for github actions caches, there's ~2 ways to work around this:\r\n\r\n1. [add `$(which python)`](https://github.com/pre-commit/action/commit/ee269b64a608de770696d23079f46238c2f7ab5a) to the pre-commit cache key\r\n2. [manually bump](https://github.com/pypa/pip/pull/7750/files) the pre-commit cache key\r\n\r\nbut pre-commit should more gracefully detect this in the [`healthy()`](https://github.com/pre-commit/pre-commit/blob/0a8ba31b9b6656d90f94fc368b47acb502cea44d/pre_commit/languages/python.py#L160-L168) function (which is designed to catch these sorts of system breakages)\n", "before_files": [{"content": "import contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Callable\nfrom typing import ContextManager\nfrom typing import Generator\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.parse_shebang import find_executable\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'py_env'\n\n\ndef bin_dir(venv: str) -> str:\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n return (\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\ndef _find_by_py_launcher(\n version: str,\n) -> Optional[str]: # pragma: no cover (windows only)\n if version.startswith('python'):\n num = version[len('python'):]\n try:\n cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')\n return cmd_output(*cmd)[1].strip()\n except CalledProcessError:\n pass\n return None\n\n\ndef _find_by_sys_executable() -> Optional[str]:\n def _norm(path: str) -> Optional[str]:\n _, exe = os.path.split(path.lower())\n exe, _, _ = exe.partition('.exe')\n if exe not in {'python', 'pythonw'} and find_executable(exe):\n return exe\n return None\n\n # On linux, I see these 
common sys.executables:\n #\n # system `python`: /usr/bin/python -> python2.7\n # system `python2`: /usr/bin/python2 -> python2.7\n # virtualenv v: v/bin/python (will not return from this loop)\n # virtualenv v -ppython2: v/bin/python -> python2\n # virtualenv v -ppython2.7: v/bin/python -> python2.7\n # virtualenv v -ppypy: v/bin/python -> v/bin/pypy\n for path in (sys.executable, os.path.realpath(sys.executable)):\n exe = _norm(path)\n if exe:\n return exe\n return None\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str: # pragma: no cover (platform dependent)\n # First attempt from `sys.executable` (or the realpath)\n exe = _find_by_sys_executable()\n if exe:\n return exe\n\n # Next try the `pythonX.X` executable\n exe = f'python{sys.version_info[0]}.{sys.version_info[1]}'\n if find_executable(exe):\n return exe\n\n if _find_by_py_launcher(exe):\n return exe\n\n # Give a best-effort try for windows\n default_folder_name = exe.replace('.', '')\n if os.path.exists(fr'C:\\{default_folder_name}\\python.exe'):\n return exe\n\n # We tried!\n return C.DEFAULT\n\n\ndef _sys_executable_matches(version: str) -> bool:\n if version == 'python':\n return True\n elif not version.startswith('python'):\n return False\n\n try:\n info = tuple(int(p) for p in version[len('python'):].split('.'))\n except ValueError:\n return False\n\n return sys.version_info[:len(info)] == info\n\n\ndef norm_version(version: str) -> str:\n # first see if our current executable is appropriate\n if _sys_executable_matches(version):\n return sys.executable\n\n if os.name == 'nt': # pragma: no cover (windows)\n version_exec = _find_by_py_launcher(version)\n if version_exec:\n return version_exec\n\n # Try looking up by name\n version_exec = find_executable(version)\n if version_exec and version_exec != version:\n return version_exec\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n default_folder_name = version.replace('.', '')\n return fr'C:\\{default_folder_name}\\python.exe'\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef py_interface(\n _dir: str,\n _make_venv: Callable[[str, str], None],\n) -> Tuple[\n Callable[[Prefix, str], ContextManager[None]],\n Callable[[Prefix, str], bool],\n Callable[[Hook, Sequence[str], bool], Tuple[int, bytes]],\n Callable[[Prefix, str, Sequence[str]], None],\n]:\n @contextlib.contextmanager\n def in_env(\n prefix: Prefix,\n language_version: str,\n ) -> Generator[None, None, None]:\n envdir = prefix.path(helpers.environment_dir(_dir, language_version))\n with envcontext(get_env_patch(envdir)):\n yield\n\n def healthy(prefix: Prefix, language_version: str) -> bool:\n with in_env(prefix, language_version):\n retcode, _, _ = cmd_output_b(\n 'python', '-c',\n 'import ctypes, datetime, io, os, ssl, weakref',\n cwd='/',\n retcode=None,\n )\n return retcode == 0\n\n def run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n ) -> Tuple[int, bytes]:\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n\n def install_environment(\n prefix: Prefix,\n version: str,\n additional_dependencies: Sequence[str],\n ) -> None:\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(_dir, version)\n\n env_dir = prefix.path(directory)\n with clean_path_on_failure(env_dir):\n if version != C.DEFAULT:\n python = norm_version(version)\n else:\n python = 
os.path.realpath(sys.executable)\n _make_venv(env_dir, python)\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix, ('pip', 'install', '.') + additional_dependencies,\n )\n\n return in_env, healthy, run_hook, install_environment\n\n\ndef make_venv(envdir: str, python: str) -> None:\n env = dict(os.environ, VIRTUALENV_NO_DOWNLOAD='1')\n cmd = (sys.executable, '-mvirtualenv', envdir, '-p', python)\n cmd_output_b(*cmd, env=env, cwd='/')\n\n\n_interface = py_interface(ENVIRONMENT_DIR, make_venv)\nin_env, healthy, run_hook, install_environment = _interface\n", "path": "pre_commit/languages/python.py"}], "after_files": [{"content": "import contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Callable\nfrom typing import ContextManager\nfrom typing import Generator\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.parse_shebang import find_executable\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'py_env'\n\n\ndef bin_dir(venv: str) -> str:\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n return (\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\ndef _find_by_py_launcher(\n version: str,\n) -> Optional[str]: # pragma: no cover (windows only)\n if version.startswith('python'):\n num = version[len('python'):]\n try:\n cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')\n return cmd_output(*cmd)[1].strip()\n except CalledProcessError:\n pass\n return None\n\n\ndef _find_by_sys_executable() -> Optional[str]:\n def _norm(path: str) -> Optional[str]:\n _, exe = os.path.split(path.lower())\n exe, _, _ = exe.partition('.exe')\n if exe not in {'python', 'pythonw'} and find_executable(exe):\n return exe\n return None\n\n # On linux, I see these common sys.executables:\n #\n # system `python`: /usr/bin/python -> python2.7\n # system `python2`: /usr/bin/python2 -> python2.7\n # virtualenv v: v/bin/python (will not return from this loop)\n # virtualenv v -ppython2: v/bin/python -> python2\n # virtualenv v -ppython2.7: v/bin/python -> python2.7\n # virtualenv v -ppypy: v/bin/python -> v/bin/pypy\n for path in (sys.executable, os.path.realpath(sys.executable)):\n exe = _norm(path)\n if exe:\n return exe\n return None\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str: # pragma: no cover (platform dependent)\n # First attempt from `sys.executable` (or the realpath)\n exe = _find_by_sys_executable()\n if exe:\n return exe\n\n # Next try the `pythonX.X` executable\n exe = f'python{sys.version_info[0]}.{sys.version_info[1]}'\n if find_executable(exe):\n return exe\n\n if _find_by_py_launcher(exe):\n return exe\n\n # Give a best-effort try for windows\n default_folder_name = exe.replace('.', '')\n if os.path.exists(fr'C:\\{default_folder_name}\\python.exe'):\n return exe\n\n # We tried!\n return 
C.DEFAULT\n\n\ndef _sys_executable_matches(version: str) -> bool:\n if version == 'python':\n return True\n elif not version.startswith('python'):\n return False\n\n try:\n info = tuple(int(p) for p in version[len('python'):].split('.'))\n except ValueError:\n return False\n\n return sys.version_info[:len(info)] == info\n\n\ndef norm_version(version: str) -> str:\n # first see if our current executable is appropriate\n if _sys_executable_matches(version):\n return sys.executable\n\n if os.name == 'nt': # pragma: no cover (windows)\n version_exec = _find_by_py_launcher(version)\n if version_exec:\n return version_exec\n\n # Try looking up by name\n version_exec = find_executable(version)\n if version_exec and version_exec != version:\n return version_exec\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n default_folder_name = version.replace('.', '')\n return fr'C:\\{default_folder_name}\\python.exe'\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef py_interface(\n _dir: str,\n _make_venv: Callable[[str, str], None],\n) -> Tuple[\n Callable[[Prefix, str], ContextManager[None]],\n Callable[[Prefix, str], bool],\n Callable[[Hook, Sequence[str], bool], Tuple[int, bytes]],\n Callable[[Prefix, str, Sequence[str]], None],\n]:\n @contextlib.contextmanager\n def in_env(\n prefix: Prefix,\n language_version: str,\n ) -> Generator[None, None, None]:\n envdir = prefix.path(helpers.environment_dir(_dir, language_version))\n with envcontext(get_env_patch(envdir)):\n yield\n\n def healthy(prefix: Prefix, language_version: str) -> bool:\n envdir = helpers.environment_dir(_dir, language_version)\n exe_name = 'python.exe' if sys.platform == 'win32' else 'python'\n py_exe = prefix.path(bin_dir(envdir), exe_name)\n with in_env(prefix, language_version):\n retcode, _, _ = cmd_output_b(\n py_exe, '-c', 'import ctypes, datetime, io, os, ssl, weakref',\n cwd='/',\n retcode=None,\n )\n return retcode == 0\n\n def run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n ) -> Tuple[int, bytes]:\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n\n def install_environment(\n prefix: Prefix,\n version: str,\n additional_dependencies: Sequence[str],\n ) -> None:\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(_dir, version)\n\n env_dir = prefix.path(directory)\n with clean_path_on_failure(env_dir):\n if version != C.DEFAULT:\n python = norm_version(version)\n else:\n python = os.path.realpath(sys.executable)\n _make_venv(env_dir, python)\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix, ('pip', 'install', '.') + additional_dependencies,\n )\n\n return in_env, healthy, run_hook, install_environment\n\n\ndef make_venv(envdir: str, python: str) -> None:\n env = dict(os.environ, VIRTUALENV_NO_DOWNLOAD='1')\n cmd = (sys.executable, '-mvirtualenv', envdir, '-p', python)\n cmd_output_b(*cmd, env=env, cwd='/')\n\n\n_interface = py_interface(ENVIRONMENT_DIR, make_venv)\nin_env, healthy, run_hook, install_environment = _interface\n", "path": "pre_commit/languages/python.py"}]}
2658
200
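The pre-commit failure above happens because `healthy()` executed whichever `python` was first on the patched `PATH`, so a virtualenv whose base interpreter had been moved or upgraded could still pass the health check and only fail once a hook actually ran. The patch resolves the environment's own interpreter and executes it directly, catching staleness up front. A stdlib-only approximation of that idea (directory layout assumed from the diff; this is not the pre-commit API):

```python
# Sketch: probe the virtualenv's own interpreter instead of `python` on PATH.
import os
import subprocess
import sys

def env_python(env_dir: str) -> str:
    bin_part = "Scripts" if os.name == "nt" else "bin"
    exe = "python.exe" if sys.platform == "win32" else "python"
    return os.path.join(env_dir, bin_part, exe)

def healthy(env_dir: str) -> bool:
    try:
        proc = subprocess.run(
            [env_python(env_dir), "-c", "import ctypes, ssl"],
            capture_output=True,
        )
    except OSError:
        return False  # interpreter missing or moved -> rebuild the env
    return proc.returncode == 0
```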
gh_patches_debug_8479
rasdani/github-patches
git_diff
spacetelescope__jwql-92
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Update environment.yml to update Django version When running the Django web server (on the `laurenmarietta/web-app-dev branch`) from the `jwql` environment on the VM, and I had to update Django from 1.11.8 to the latest version (2.0.5) to get rid of an error with Django. The version of Django in `environment.yml` should be specified to >=2.0.5 in the environment file in the future. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 import numpy as np 2 from setuptools import setup 3 from setuptools import find_packages 4 5 VERSION = '0.4.0' 6 7 AUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek' 8 AUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin' 9 10 REQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy'] 11 12 setup( 13 name='jwql', 14 version=VERSION, 15 description='The JWST Quicklook Project', 16 url='https://github.com/spacetelescope/jwql.git', 17 author=AUTHORS, 18 author_email='[email protected]', 19 license='BSD', 20 keywords=['astronomy', 'python'], 21 classifiers=['Programming Language :: Python'], 22 packages=find_packages(), 23 install_requires=REQUIRES, 24 include_package_data=True, 25 include_dirs=[np.get_include()], 26 ) 27 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -7,7 +7,7 @@ AUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek' AUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin' -REQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy'] +REQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django==2.0.5', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy'] setup( name='jwql',
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,7 +7,7 @@\n AUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek'\n AUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin'\n \n-REQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']\n+REQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django==2.0.5', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']\n \n setup(\n name='jwql',\n", "issue": "Update environment.yml to update Django version\nWhen running the Django web server (on the `laurenmarietta/web-app-dev branch`) from the `jwql` environment on the VM, and I had to update Django from 1.11.8 to the latest version (2.0.5) to get rid of an error with Django.\r\n\r\nThe version of Django in `environment.yml` should be specified to >=2.0.5 in the environment file in the future.\n", "before_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.4.0'\n\nAUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek'\nAUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin'\n\nREQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']\n\nsetup(\n name='jwql',\n version=VERSION,\n description='The JWST Quicklook Project',\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n include_package_data=True,\n include_dirs=[np.get_include()],\n )\n", "path": "setup.py"}], "after_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.4.0'\n\nAUTHORS = 'Matthew Bourque, Sara Ogaz, Joe Filippazzo, Bryan Hilbert, Misty Cracraft, Graham Kanarek'\nAUTHORS += 'Johannes Sahlmann, Lauren Chambers, Catherine Martlin'\n\nREQUIRES = ['astropy', 'astroquery', 'bokeh==0.12.5', 'django==2.0.5', 'matplotlib', 'numpy', 'python-dateutil', 'sphinx', 'sphinx-automodapi', 'sqlalchemy']\n\nsetup(\n name='jwql',\n version=VERSION,\n description='The JWST Quicklook Project',\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n include_package_data=True,\n include_dirs=[np.get_include()],\n )\n", "path": "setup.py"}]}
639
201
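The jwql change is only a dependency pin; note that the issue asked for `django>=2.0.5` while the merged patch pins `django==2.0.5` exactly. A hypothetical runtime sanity check of that pin (the version string is taken from the diff):

```python
# Hypothetical check that the installed Django matches the pinned version.
import django

assert django.get_version() == "2.0.5", django.get_version()
```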
gh_patches_debug_26629
rasdani/github-patches
git_diff
GoogleCloudPlatform__PerfKitBenchmarker-73
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- The cluster boot benchmark should the num_cpus function in parallel The cluster boot benchmark has the following code: > for vm in vms: > metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus, > 'machine_instance': vm_number} > value = vm.TimeToBoot() This looks great until you realize vm.num_cpus is a method on the virtual machine which in turn calls RemoteCommand leading to an ssh. When large number of VM's boot the result is a long set of serially run ssh's to each VM. This could be done a lot faster by moving the code into a method and then using RunThreaded. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py` Content: ``` 1 # Copyright 2014 Google Inc. All rights reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """Runs a cluster boot benchmark.""" 16 17 import logging 18 19 from perfkitbenchmarker import flags 20 21 FLAGS = flags.FLAGS 22 BENCHMARK_INFO = {'name': 'cluster boot', 23 'description': 'Create a cluster, record all times to boot', 24 'scratch_disk': False, 25 'num_machines': None} # Set in GetInfo() 26 27 28 def GetInfo(): 29 BENCHMARK_INFO['num_machines'] = FLAGS.num_vms 30 return BENCHMARK_INFO 31 32 33 def Prepare(unused_benchmark_spec): 34 pass 35 36 37 def Run(benchmark_spec): 38 """Measure the boot time for all VMs. 39 40 Args: 41 benchmark_spec: The benchmark specification. Contains all data that is 42 required to run the benchmark. 43 44 Returns: 45 A list of samples in the form of 3 or 4 tuples. The tuples contain 46 the sample metric (string), value (float), and unit (string). 47 If a 4th element is included, it is a dictionary of sample 48 metadata. 49 """ 50 51 samples = [] 52 vm_number = 0 53 logging.info('Boot Results:') 54 vms = benchmark_spec.vms 55 for vm in vms: 56 metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus, 57 'machine_instance': vm_number} 58 value = vm.TimeToBoot() 59 assert value is not None 60 samples.append(('Boot Time', value, 'seconds', metadata)) 61 vm_number += 1 62 logging.info(samples) 63 assert vm_number == benchmark_spec.num_vms 64 return samples 65 66 67 def Cleanup(unused_benchmark_spec): 68 pass 69 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py b/perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py --- a/perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py +++ b/perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py @@ -17,6 +17,7 @@ import logging from perfkitbenchmarker import flags +from perfkitbenchmarker import vm_util FLAGS = flags.FLAGS BENCHMARK_INFO = {'name': 'cluster boot', @@ -34,6 +35,14 @@ pass +def _GetTimeToBoot(vm, vm_index, result_list): + metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus, + 'machine_instance': vm_index} + value = vm.TimeToBoot() + assert value is not None + result_list.append(('Boot Time', value, 'seconds', metadata)) + + def Run(benchmark_spec): """Measure the boot time for all VMs. @@ -49,18 +58,12 @@ """ samples = [] - vm_number = 0 logging.info('Boot Results:') vms = benchmark_spec.vms - for vm in vms: - metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus, - 'machine_instance': vm_number} - value = vm.TimeToBoot() - assert value is not None - samples.append(('Boot Time', value, 'seconds', metadata)) - vm_number += 1 + params = [((vm, i, samples), {}) for i, vm in enumerate(vms)] + vm_util.RunThreaded(_GetTimeToBoot, params) logging.info(samples) - assert vm_number == benchmark_spec.num_vms + assert len(samples) == benchmark_spec.num_vms return samples
{"golden_diff": "diff --git a/perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py b/perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py\n--- a/perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py\n+++ b/perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py\n@@ -17,6 +17,7 @@\n import logging\n \n from perfkitbenchmarker import flags\n+from perfkitbenchmarker import vm_util\n \n FLAGS = flags.FLAGS\n BENCHMARK_INFO = {'name': 'cluster boot',\n@@ -34,6 +35,14 @@\n pass\n \n \n+def _GetTimeToBoot(vm, vm_index, result_list):\n+ metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus,\n+ 'machine_instance': vm_index}\n+ value = vm.TimeToBoot()\n+ assert value is not None\n+ result_list.append(('Boot Time', value, 'seconds', metadata))\n+\n+\n def Run(benchmark_spec):\n \"\"\"Measure the boot time for all VMs.\n \n@@ -49,18 +58,12 @@\n \"\"\"\n \n samples = []\n- vm_number = 0\n logging.info('Boot Results:')\n vms = benchmark_spec.vms\n- for vm in vms:\n- metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus,\n- 'machine_instance': vm_number}\n- value = vm.TimeToBoot()\n- assert value is not None\n- samples.append(('Boot Time', value, 'seconds', metadata))\n- vm_number += 1\n+ params = [((vm, i, samples), {}) for i, vm in enumerate(vms)]\n+ vm_util.RunThreaded(_GetTimeToBoot, params)\n logging.info(samples)\n- assert vm_number == benchmark_spec.num_vms\n+ assert len(samples) == benchmark_spec.num_vms\n return samples\n", "issue": "The cluster boot benchmark should the num_cpus function in parallel\nThe cluster boot benchmark has the following code:\n\n> for vm in vms:\n> metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus,\n> 'machine_instance': vm_number}\n> value = vm.TimeToBoot()\n\nThis looks great until you realize vm.num_cpus is a method on the virtual machine which in turn calls RemoteCommand leading to an ssh. When large number of VM's boot the result is a long set of serially run ssh's to each VM. This could be done a lot faster by moving the code into a method and then using RunThreaded.\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Runs a cluster boot benchmark.\"\"\"\n\nimport logging\n\nfrom perfkitbenchmarker import flags\n\nFLAGS = flags.FLAGS\nBENCHMARK_INFO = {'name': 'cluster boot',\n 'description': 'Create a cluster, record all times to boot',\n 'scratch_disk': False,\n 'num_machines': None} # Set in GetInfo()\n\n\ndef GetInfo():\n BENCHMARK_INFO['num_machines'] = FLAGS.num_vms\n return BENCHMARK_INFO\n\n\ndef Prepare(unused_benchmark_spec):\n pass\n\n\ndef Run(benchmark_spec):\n \"\"\"Measure the boot time for all VMs.\n\n Args:\n benchmark_spec: The benchmark specification. Contains all data that is\n required to run the benchmark.\n\n Returns:\n A list of samples in the form of 3 or 4 tuples. 
The tuples contain\n the sample metric (string), value (float), and unit (string).\n If a 4th element is included, it is a dictionary of sample\n metadata.\n \"\"\"\n\n samples = []\n vm_number = 0\n logging.info('Boot Results:')\n vms = benchmark_spec.vms\n for vm in vms:\n metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus,\n 'machine_instance': vm_number}\n value = vm.TimeToBoot()\n assert value is not None\n samples.append(('Boot Time', value, 'seconds', metadata))\n vm_number += 1\n logging.info(samples)\n assert vm_number == benchmark_spec.num_vms\n return samples\n\n\ndef Cleanup(unused_benchmark_spec):\n pass\n", "path": "perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Runs a cluster boot benchmark.\"\"\"\n\nimport logging\n\nfrom perfkitbenchmarker import flags\nfrom perfkitbenchmarker import vm_util\n\nFLAGS = flags.FLAGS\nBENCHMARK_INFO = {'name': 'cluster boot',\n 'description': 'Create a cluster, record all times to boot',\n 'scratch_disk': False,\n 'num_machines': None} # Set in GetInfo()\n\n\ndef GetInfo():\n BENCHMARK_INFO['num_machines'] = FLAGS.num_vms\n return BENCHMARK_INFO\n\n\ndef Prepare(unused_benchmark_spec):\n pass\n\n\ndef _GetTimeToBoot(vm, vm_index, result_list):\n metadata = {'machine_type': vm.machine_type, 'num_cpus': vm.num_cpus,\n 'machine_instance': vm_index}\n value = vm.TimeToBoot()\n assert value is not None\n result_list.append(('Boot Time', value, 'seconds', metadata))\n\n\ndef Run(benchmark_spec):\n \"\"\"Measure the boot time for all VMs.\n\n Args:\n benchmark_spec: The benchmark specification. Contains all data that is\n required to run the benchmark.\n\n Returns:\n A list of samples in the form of 3 or 4 tuples. The tuples contain\n the sample metric (string), value (float), and unit (string).\n If a 4th element is included, it is a dictionary of sample\n metadata.\n \"\"\"\n\n samples = []\n logging.info('Boot Results:')\n vms = benchmark_spec.vms\n params = [((vm, i, samples), {}) for i, vm in enumerate(vms)]\n vm_util.RunThreaded(_GetTimeToBoot, params)\n logging.info(samples)\n assert len(samples) == benchmark_spec.num_vms\n return samples\n\n\ndef Cleanup(unused_benchmark_spec):\n pass\n", "path": "perfkitbenchmarker/benchmarks/cluster_boot_benchmark.py"}]}
1023
425
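The PerfKitBenchmarker patch replaces a serial loop, in which every `vm.num_cpus` access and `vm.TimeToBoot()` call runs its own ssh, with `vm_util.RunThreaded` over a list of `((vm, i, samples), {})` parameter tuples. A stdlib approximation of the same fan-out (the `vm` objects and helper name are taken from the diff; `RunThreaded` itself is not shown):

```python
# Stdlib sketch of the threaded fan-out: each worker performs the ssh-backed
# calls for one VM and appends a sample to a shared list.
from concurrent.futures import ThreadPoolExecutor

def _get_time_to_boot(vm, vm_index, result_list):
    metadata = {"machine_type": vm.machine_type, "num_cpus": vm.num_cpus,
                "machine_instance": vm_index}
    result_list.append(("Boot Time", vm.TimeToBoot(), "seconds", metadata))

def collect_boot_samples(vms):
    samples = []  # list.append is atomic under CPython, so no lock is needed
    with ThreadPoolExecutor(max_workers=len(vms) or 1) as pool:
        for i, vm in enumerate(vms):
            pool.submit(_get_time_to_boot, vm, i, samples)
    return samples  # completion order, not submission order
```

One consequence the patch also reflects: with concurrent appends the samples arrive unordered, which is why the final assertion checks `len(samples)` rather than incrementing a counter.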
gh_patches_debug_7671
rasdani/github-patches
git_diff
facebookresearch__mmf-159
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Minor bug in object detections in extract_features_vmb.py ## 🐛 Bug <!-- A clear and concise description of what the bug is. --> I believe this was introduced by the #127, but there's a slight numpy indexing issue that causes the objects to still be detected incorrectly. The line in question is [here](https://github.com/facebookresearch/pythia/blob/12f67cd4f67499814bb0b3665ff14dd635800f63/pythia/scripts/features/extract_features_vmb.py#L165). It currently reads ```python objects = torch.argmax(scores[keep_boxes][start_index:], dim=1) ``` However, `scores` is a tensor of `[num_objects, object_classes]`, so `start_index` should be indexing the second dimension. The updated line should be ```python objects = torch.argmax(scores[keep_boxes][:, start_index:], dim=1) ``` I can submit a pull request. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pythia/scripts/features/extract_features_vmb.py` Content: ``` 1 # Requires vqa-maskrcnn-benchmark to be built and installed 2 # Category mapping for visual genome can be downloaded from 3 # https://dl.fbaipublicfiles.com/pythia/data/visual_genome_categories.json 4 # When the --background flag is set, the index saved with key "objects" in 5 # info_list will be +1 of the Visual Genome category mapping above and 0 6 # is the background class. When the --background flag is not set, the 7 # index saved with key "objects" in info list will match the Visual Genome 8 # category mapping. 9 import argparse 10 import glob 11 import os 12 13 import cv2 14 import numpy as np 15 import torch 16 from PIL import Image 17 18 from maskrcnn_benchmark.config import cfg 19 from maskrcnn_benchmark.layers import nms 20 from maskrcnn_benchmark.modeling.detector import build_detection_model 21 from maskrcnn_benchmark.structures.image_list import to_image_list 22 from maskrcnn_benchmark.utils.model_serialization import load_state_dict 23 from pythia.utils.general import download_file 24 25 26 class FeatureExtractor: 27 MODEL_URL = ( 28 "https://dl.fbaipublicfiles.com/pythia/detectron_model/detectron_model.pth" 29 ) 30 CONFIG_URL = ( 31 "https://dl.fbaipublicfiles.com/pythia/detectron_model/detectron_model.yaml" 32 ) 33 MAX_SIZE = 1333 34 MIN_SIZE = 800 35 36 def __init__(self): 37 self.args = self.get_parser().parse_args() 38 self.detection_model = self._build_detection_model() 39 40 os.makedirs(self.args.output_folder, exist_ok=True) 41 42 def _try_downloading_necessities(self): 43 if self.args.model_file is None: 44 print("Downloading model and configuration") 45 self.args.model_file = self.MODEL_URL.split("/")[-1] 46 self.args.config_file = self.CONFIG_URL.split("/")[-1] 47 download_file(self.MODEL_URL) 48 download_file(self.CONFIG_URL) 49 50 def get_parser(self): 51 parser = argparse.ArgumentParser() 52 parser.add_argument( 53 "--model_file", default=None, type=str, help="Detectron model file" 54 ) 55 parser.add_argument( 56 "--config_file", default=None, type=str, help="Detectron config file" 57 ) 58 parser.add_argument("--batch_size", type=int, default=2, help="Batch size") 59 parser.add_argument( 60 "--num_features", type=int, default=100, help="Number of features to extract." 
61 ) 62 parser.add_argument( 63 "--output_folder", type=str, default="./output", help="Output folder" 64 ) 65 parser.add_argument("--image_dir", type=str, help="Image directory or file") 66 parser.add_argument( 67 "--feature_name", type=str, help="The name of the feature to extract", 68 default="fc6", 69 ) 70 parser.add_argument( 71 "--confidence_threshold", type=float, default=0, 72 help="Threshold of detection confidence above which boxes will be selected" 73 ) 74 parser.add_argument( 75 "--background", action="store_true", 76 help="The model will output predictions for the background class when set" 77 ) 78 return parser 79 80 def _build_detection_model(self): 81 cfg.merge_from_file(self.args.config_file) 82 cfg.freeze() 83 84 model = build_detection_model(cfg) 85 checkpoint = torch.load(self.args.model_file, map_location=torch.device("cpu")) 86 87 load_state_dict(model, checkpoint.pop("model")) 88 89 model.to("cuda") 90 model.eval() 91 return model 92 93 def _image_transform(self, path): 94 img = Image.open(path) 95 im = np.array(img).astype(np.float32) 96 # IndexError: too many indices for array, grayscale images 97 if len(im.shape) < 3: 98 im = np.repeat(im[:, :, np.newaxis], 3, axis=2) 99 im = im[:, :, ::-1] 100 im -= np.array([102.9801, 115.9465, 122.7717]) 101 im_shape = im.shape 102 im_height = im_shape[0] 103 im_width = im_shape[1] 104 im_size_min = np.min(im_shape[0:2]) 105 im_size_max = np.max(im_shape[0:2]) 106 107 # Scale based on minimum size 108 im_scale = self.MIN_SIZE / im_size_min 109 110 # Prevent the biggest axis from being more than max_size 111 # If bigger, scale it down 112 if np.round(im_scale * im_size_max) > self.MAX_SIZE: 113 im_scale = self.MAX_SIZE / im_size_max 114 115 im = cv2.resize( 116 im, None, None, fx=im_scale, fy=im_scale, interpolation=cv2.INTER_LINEAR 117 ) 118 img = torch.from_numpy(im).permute(2, 0, 1) 119 120 im_info = { 121 "width": im_width, 122 "height": im_height 123 } 124 125 return img, im_scale, im_info 126 127 def _process_feature_extraction( 128 self, output, im_scales, im_infos, feature_name="fc6", conf_thresh=0 129 ): 130 batch_size = len(output[0]["proposals"]) 131 n_boxes_per_image = [len(boxes) for boxes in output[0]["proposals"]] 132 score_list = output[0]["scores"].split(n_boxes_per_image) 133 score_list = [torch.nn.functional.softmax(x, -1) for x in score_list] 134 feats = output[0][feature_name].split(n_boxes_per_image) 135 cur_device = score_list[0].device 136 137 feat_list = [] 138 info_list = [] 139 140 for i in range(batch_size): 141 dets = output[0]["proposals"][i].bbox / im_scales[i] 142 scores = score_list[i] 143 max_conf = torch.zeros((scores.shape[0])).to(cur_device) 144 conf_thresh_tensor = torch.full_like(max_conf, conf_thresh) 145 start_index = 1 146 # Column 0 of the scores matrix is for the background class 147 if self.args.background: 148 start_index = 0 149 for cls_ind in range(start_index, scores.shape[1]): 150 cls_scores = scores[:, cls_ind] 151 keep = nms(dets, cls_scores, 0.5) 152 max_conf[keep] = torch.where( 153 # Better than max one till now and minimally greater than conf_thresh 154 (cls_scores[keep] > max_conf[keep]) & 155 (cls_scores[keep] > conf_thresh_tensor[keep]), 156 cls_scores[keep], max_conf[keep] 157 ) 158 159 sorted_scores, sorted_indices = torch.sort(max_conf, descending=True) 160 num_boxes = (sorted_scores[:self.args.num_features] != 0).sum() 161 keep_boxes = sorted_indices[:self.args.num_features] 162 feat_list.append(feats[i][keep_boxes]) 163 bbox = 
output[0]["proposals"][i][keep_boxes].bbox / im_scales[i] 164 # Predict the class label using the scores 165 objects = torch.argmax(scores[keep_boxes][start_index:], dim=1) 166 167 info_list.append( 168 { 169 "bbox": bbox.cpu().numpy(), 170 "num_boxes": num_boxes.item(), 171 "objects": objects.cpu().numpy(), 172 "image_width": im_infos[i]["width"], 173 "image_height": im_infos[i]["height"], 174 } 175 ) 176 177 return feat_list, info_list 178 179 def get_detectron_features(self, image_paths): 180 img_tensor, im_scales, im_infos = [], [], [] 181 182 for image_path in image_paths: 183 im, im_scale, im_info = self._image_transform(image_path) 184 img_tensor.append(im) 185 im_scales.append(im_scale) 186 im_infos.append(im_info) 187 188 # Image dimensions should be divisible by 32, to allow convolutions 189 # in detector to work 190 current_img_list = to_image_list(img_tensor, size_divisible=32) 191 current_img_list = current_img_list.to("cuda") 192 193 with torch.no_grad(): 194 output = self.detection_model(current_img_list) 195 196 feat_list = self._process_feature_extraction( 197 output, im_scales, im_infos, self.args.feature_name, 198 self.args.confidence_threshold 199 ) 200 201 return feat_list 202 203 def _chunks(self, array, chunk_size): 204 for i in range(0, len(array), chunk_size): 205 yield array[i : i + chunk_size] 206 207 def _save_feature(self, file_name, feature, info): 208 file_base_name = os.path.basename(file_name) 209 file_base_name = file_base_name.split(".")[0] 210 info_file_base_name = file_base_name + "_info.npy" 211 file_base_name = file_base_name + ".npy" 212 213 np.save( 214 os.path.join(self.args.output_folder, file_base_name), feature.cpu().numpy() 215 ) 216 np.save(os.path.join(self.args.output_folder, info_file_base_name), info) 217 218 def extract_features(self): 219 image_dir = self.args.image_dir 220 221 if os.path.isfile(image_dir): 222 features, infos = self.get_detectron_features([image_dir]) 223 self._save_feature(image_dir, features[0], infos[0]) 224 else: 225 files = glob.glob(os.path.join(image_dir, "*.jpg")) 226 for chunk in self._chunks(files, self.args.batch_size): 227 features, infos = self.get_detectron_features(chunk) 228 for idx, file_name in enumerate(chunk): 229 self._save_feature(file_name, features[idx], infos[idx]) 230 231 232 if __name__ == "__main__": 233 feature_extractor = FeatureExtractor() 234 feature_extractor.extract_features() 235 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pythia/scripts/features/extract_features_vmb.py b/pythia/scripts/features/extract_features_vmb.py --- a/pythia/scripts/features/extract_features_vmb.py +++ b/pythia/scripts/features/extract_features_vmb.py @@ -162,7 +162,7 @@ feat_list.append(feats[i][keep_boxes]) bbox = output[0]["proposals"][i][keep_boxes].bbox / im_scales[i] # Predict the class label using the scores - objects = torch.argmax(scores[keep_boxes][start_index:], dim=1) + objects = torch.argmax(scores[keep_boxes][:, start_index:], dim=1) info_list.append( {
{"golden_diff": "diff --git a/pythia/scripts/features/extract_features_vmb.py b/pythia/scripts/features/extract_features_vmb.py\n--- a/pythia/scripts/features/extract_features_vmb.py\n+++ b/pythia/scripts/features/extract_features_vmb.py\n@@ -162,7 +162,7 @@\n feat_list.append(feats[i][keep_boxes])\n bbox = output[0][\"proposals\"][i][keep_boxes].bbox / im_scales[i]\n # Predict the class label using the scores\n- objects = torch.argmax(scores[keep_boxes][start_index:], dim=1)\n+ objects = torch.argmax(scores[keep_boxes][:, start_index:], dim=1)\n \n info_list.append(\n {\n", "issue": "Minor bug in object detections in extract_features_vmb.py\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nI believe this was introduced by the #127, but there's a slight numpy indexing issue that causes the objects to still be detected incorrectly. \r\n\r\nThe line in question is [here](https://github.com/facebookresearch/pythia/blob/12f67cd4f67499814bb0b3665ff14dd635800f63/pythia/scripts/features/extract_features_vmb.py#L165). It currently reads\r\n\r\n```python\r\nobjects = torch.argmax(scores[keep_boxes][start_index:], dim=1)\r\n```\r\n\r\nHowever, `scores` is a tensor of `[num_objects, object_classes]`, so `start_index` should be indexing the second dimension. The updated line should be\r\n\r\n```python\r\nobjects = torch.argmax(scores[keep_boxes][:, start_index:], dim=1)\r\n```\r\n\r\nI can submit a pull request.\n", "before_files": [{"content": "# Requires vqa-maskrcnn-benchmark to be built and installed\n# Category mapping for visual genome can be downloaded from\n# https://dl.fbaipublicfiles.com/pythia/data/visual_genome_categories.json\n# When the --background flag is set, the index saved with key \"objects\" in\n# info_list will be +1 of the Visual Genome category mapping above and 0\n# is the background class. 
When the --background flag is not set, the\n# index saved with key \"objects\" in info list will match the Visual Genome\n# category mapping.\nimport argparse\nimport glob\nimport os\n\nimport cv2\nimport numpy as np\nimport torch\nfrom PIL import Image\n\nfrom maskrcnn_benchmark.config import cfg\nfrom maskrcnn_benchmark.layers import nms\nfrom maskrcnn_benchmark.modeling.detector import build_detection_model\nfrom maskrcnn_benchmark.structures.image_list import to_image_list\nfrom maskrcnn_benchmark.utils.model_serialization import load_state_dict\nfrom pythia.utils.general import download_file\n\n\nclass FeatureExtractor:\n MODEL_URL = (\n \"https://dl.fbaipublicfiles.com/pythia/detectron_model/detectron_model.pth\"\n )\n CONFIG_URL = (\n \"https://dl.fbaipublicfiles.com/pythia/detectron_model/detectron_model.yaml\"\n )\n MAX_SIZE = 1333\n MIN_SIZE = 800\n\n def __init__(self):\n self.args = self.get_parser().parse_args()\n self.detection_model = self._build_detection_model()\n\n os.makedirs(self.args.output_folder, exist_ok=True)\n\n def _try_downloading_necessities(self):\n if self.args.model_file is None:\n print(\"Downloading model and configuration\")\n self.args.model_file = self.MODEL_URL.split(\"/\")[-1]\n self.args.config_file = self.CONFIG_URL.split(\"/\")[-1]\n download_file(self.MODEL_URL)\n download_file(self.CONFIG_URL)\n\n def get_parser(self):\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--model_file\", default=None, type=str, help=\"Detectron model file\"\n )\n parser.add_argument(\n \"--config_file\", default=None, type=str, help=\"Detectron config file\"\n )\n parser.add_argument(\"--batch_size\", type=int, default=2, help=\"Batch size\")\n parser.add_argument(\n \"--num_features\", type=int, default=100, help=\"Number of features to extract.\"\n )\n parser.add_argument(\n \"--output_folder\", type=str, default=\"./output\", help=\"Output folder\"\n )\n parser.add_argument(\"--image_dir\", type=str, help=\"Image directory or file\")\n parser.add_argument(\n \"--feature_name\", type=str, help=\"The name of the feature to extract\",\n default=\"fc6\",\n )\n parser.add_argument(\n \"--confidence_threshold\", type=float, default=0,\n help=\"Threshold of detection confidence above which boxes will be selected\"\n )\n parser.add_argument(\n \"--background\", action=\"store_true\",\n help=\"The model will output predictions for the background class when set\"\n )\n return parser\n\n def _build_detection_model(self):\n cfg.merge_from_file(self.args.config_file)\n cfg.freeze()\n\n model = build_detection_model(cfg)\n checkpoint = torch.load(self.args.model_file, map_location=torch.device(\"cpu\"))\n\n load_state_dict(model, checkpoint.pop(\"model\"))\n\n model.to(\"cuda\")\n model.eval()\n return model\n\n def _image_transform(self, path):\n img = Image.open(path)\n im = np.array(img).astype(np.float32)\n # IndexError: too many indices for array, grayscale images\n if len(im.shape) < 3:\n im = np.repeat(im[:, :, np.newaxis], 3, axis=2)\n im = im[:, :, ::-1]\n im -= np.array([102.9801, 115.9465, 122.7717])\n im_shape = im.shape\n im_height = im_shape[0]\n im_width = im_shape[1]\n im_size_min = np.min(im_shape[0:2])\n im_size_max = np.max(im_shape[0:2])\n\n # Scale based on minimum size\n im_scale = self.MIN_SIZE / im_size_min\n\n # Prevent the biggest axis from being more than max_size\n # If bigger, scale it down\n if np.round(im_scale * im_size_max) > self.MAX_SIZE:\n im_scale = self.MAX_SIZE / im_size_max\n\n im = cv2.resize(\n im, None, None, fx=im_scale, 
fy=im_scale, interpolation=cv2.INTER_LINEAR\n )\n img = torch.from_numpy(im).permute(2, 0, 1)\n\n im_info = {\n \"width\": im_width,\n \"height\": im_height\n }\n\n return img, im_scale, im_info\n\n def _process_feature_extraction(\n self, output, im_scales, im_infos, feature_name=\"fc6\", conf_thresh=0\n ):\n batch_size = len(output[0][\"proposals\"])\n n_boxes_per_image = [len(boxes) for boxes in output[0][\"proposals\"]]\n score_list = output[0][\"scores\"].split(n_boxes_per_image)\n score_list = [torch.nn.functional.softmax(x, -1) for x in score_list]\n feats = output[0][feature_name].split(n_boxes_per_image)\n cur_device = score_list[0].device\n\n feat_list = []\n info_list = []\n\n for i in range(batch_size):\n dets = output[0][\"proposals\"][i].bbox / im_scales[i]\n scores = score_list[i]\n max_conf = torch.zeros((scores.shape[0])).to(cur_device)\n conf_thresh_tensor = torch.full_like(max_conf, conf_thresh)\n start_index = 1\n # Column 0 of the scores matrix is for the background class\n if self.args.background:\n start_index = 0\n for cls_ind in range(start_index, scores.shape[1]):\n cls_scores = scores[:, cls_ind]\n keep = nms(dets, cls_scores, 0.5)\n max_conf[keep] = torch.where(\n # Better than max one till now and minimally greater than conf_thresh\n (cls_scores[keep] > max_conf[keep]) &\n (cls_scores[keep] > conf_thresh_tensor[keep]),\n cls_scores[keep], max_conf[keep]\n )\n\n sorted_scores, sorted_indices = torch.sort(max_conf, descending=True)\n num_boxes = (sorted_scores[:self.args.num_features] != 0).sum()\n keep_boxes = sorted_indices[:self.args.num_features]\n feat_list.append(feats[i][keep_boxes])\n bbox = output[0][\"proposals\"][i][keep_boxes].bbox / im_scales[i]\n # Predict the class label using the scores\n objects = torch.argmax(scores[keep_boxes][start_index:], dim=1)\n\n info_list.append(\n {\n \"bbox\": bbox.cpu().numpy(),\n \"num_boxes\": num_boxes.item(),\n \"objects\": objects.cpu().numpy(),\n \"image_width\": im_infos[i][\"width\"],\n \"image_height\": im_infos[i][\"height\"],\n }\n )\n\n return feat_list, info_list\n\n def get_detectron_features(self, image_paths):\n img_tensor, im_scales, im_infos = [], [], []\n\n for image_path in image_paths:\n im, im_scale, im_info = self._image_transform(image_path)\n img_tensor.append(im)\n im_scales.append(im_scale)\n im_infos.append(im_info)\n\n # Image dimensions should be divisible by 32, to allow convolutions\n # in detector to work\n current_img_list = to_image_list(img_tensor, size_divisible=32)\n current_img_list = current_img_list.to(\"cuda\")\n\n with torch.no_grad():\n output = self.detection_model(current_img_list)\n\n feat_list = self._process_feature_extraction(\n output, im_scales, im_infos, self.args.feature_name,\n self.args.confidence_threshold\n )\n\n return feat_list\n\n def _chunks(self, array, chunk_size):\n for i in range(0, len(array), chunk_size):\n yield array[i : i + chunk_size]\n\n def _save_feature(self, file_name, feature, info):\n file_base_name = os.path.basename(file_name)\n file_base_name = file_base_name.split(\".\")[0]\n info_file_base_name = file_base_name + \"_info.npy\"\n file_base_name = file_base_name + \".npy\"\n\n np.save(\n os.path.join(self.args.output_folder, file_base_name), feature.cpu().numpy()\n )\n np.save(os.path.join(self.args.output_folder, info_file_base_name), info)\n\n def extract_features(self):\n image_dir = self.args.image_dir\n\n if os.path.isfile(image_dir):\n features, infos = self.get_detectron_features([image_dir])\n self._save_feature(image_dir, 
features[0], infos[0])\n else:\n files = glob.glob(os.path.join(image_dir, \"*.jpg\"))\n for chunk in self._chunks(files, self.args.batch_size):\n features, infos = self.get_detectron_features(chunk)\n for idx, file_name in enumerate(chunk):\n self._save_feature(file_name, features[idx], infos[idx])\n\n\nif __name__ == \"__main__\":\n feature_extractor = FeatureExtractor()\n feature_extractor.extract_features()\n", "path": "pythia/scripts/features/extract_features_vmb.py"}], "after_files": [{"content": "# Requires vqa-maskrcnn-benchmark to be built and installed\n# Category mapping for visual genome can be downloaded from\n# https://dl.fbaipublicfiles.com/pythia/data/visual_genome_categories.json\n# When the --background flag is set, the index saved with key \"objects\" in\n# info_list will be +1 of the Visual Genome category mapping above and 0\n# is the background class. When the --background flag is not set, the\n# index saved with key \"objects\" in info list will match the Visual Genome\n# category mapping.\nimport argparse\nimport glob\nimport os\n\nimport cv2\nimport numpy as np\nimport torch\nfrom PIL import Image\n\nfrom maskrcnn_benchmark.config import cfg\nfrom maskrcnn_benchmark.layers import nms\nfrom maskrcnn_benchmark.modeling.detector import build_detection_model\nfrom maskrcnn_benchmark.structures.image_list import to_image_list\nfrom maskrcnn_benchmark.utils.model_serialization import load_state_dict\nfrom pythia.utils.general import download_file\n\n\nclass FeatureExtractor:\n MODEL_URL = (\n \"https://dl.fbaipublicfiles.com/pythia/detectron_model/detectron_model.pth\"\n )\n CONFIG_URL = (\n \"https://dl.fbaipublicfiles.com/pythia/detectron_model/detectron_model.yaml\"\n )\n MAX_SIZE = 1333\n MIN_SIZE = 800\n\n def __init__(self):\n self.args = self.get_parser().parse_args()\n self.detection_model = self._build_detection_model()\n\n os.makedirs(self.args.output_folder, exist_ok=True)\n\n def _try_downloading_necessities(self):\n if self.args.model_file is None:\n print(\"Downloading model and configuration\")\n self.args.model_file = self.MODEL_URL.split(\"/\")[-1]\n self.args.config_file = self.CONFIG_URL.split(\"/\")[-1]\n download_file(self.MODEL_URL)\n download_file(self.CONFIG_URL)\n\n def get_parser(self):\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--model_file\", default=None, type=str, help=\"Detectron model file\"\n )\n parser.add_argument(\n \"--config_file\", default=None, type=str, help=\"Detectron config file\"\n )\n parser.add_argument(\"--batch_size\", type=int, default=2, help=\"Batch size\")\n parser.add_argument(\n \"--num_features\", type=int, default=100, help=\"Number of features to extract.\"\n )\n parser.add_argument(\n \"--output_folder\", type=str, default=\"./output\", help=\"Output folder\"\n )\n parser.add_argument(\"--image_dir\", type=str, help=\"Image directory or file\")\n parser.add_argument(\n \"--feature_name\", type=str, help=\"The name of the feature to extract\",\n default=\"fc6\",\n )\n parser.add_argument(\n \"--confidence_threshold\", type=float, default=0,\n help=\"Threshold of detection confidence above which boxes will be selected\"\n )\n parser.add_argument(\n \"--background\", action=\"store_true\",\n help=\"The model will output predictions for the background class when set\"\n )\n return parser\n\n def _build_detection_model(self):\n cfg.merge_from_file(self.args.config_file)\n cfg.freeze()\n\n model = build_detection_model(cfg)\n checkpoint = torch.load(self.args.model_file, 
map_location=torch.device(\"cpu\"))\n\n load_state_dict(model, checkpoint.pop(\"model\"))\n\n model.to(\"cuda\")\n model.eval()\n return model\n\n def _image_transform(self, path):\n img = Image.open(path)\n im = np.array(img).astype(np.float32)\n # IndexError: too many indices for array, grayscale images\n if len(im.shape) < 3:\n im = np.repeat(im[:, :, np.newaxis], 3, axis=2)\n im = im[:, :, ::-1]\n im -= np.array([102.9801, 115.9465, 122.7717])\n im_shape = im.shape\n im_height = im_shape[0]\n im_width = im_shape[1]\n im_size_min = np.min(im_shape[0:2])\n im_size_max = np.max(im_shape[0:2])\n\n # Scale based on minimum size\n im_scale = self.MIN_SIZE / im_size_min\n\n # Prevent the biggest axis from being more than max_size\n # If bigger, scale it down\n if np.round(im_scale * im_size_max) > self.MAX_SIZE:\n im_scale = self.MAX_SIZE / im_size_max\n\n im = cv2.resize(\n im, None, None, fx=im_scale, fy=im_scale, interpolation=cv2.INTER_LINEAR\n )\n img = torch.from_numpy(im).permute(2, 0, 1)\n\n im_info = {\n \"width\": im_width,\n \"height\": im_height\n }\n\n return img, im_scale, im_info\n\n def _process_feature_extraction(\n self, output, im_scales, im_infos, feature_name=\"fc6\", conf_thresh=0\n ):\n batch_size = len(output[0][\"proposals\"])\n n_boxes_per_image = [len(boxes) for boxes in output[0][\"proposals\"]]\n score_list = output[0][\"scores\"].split(n_boxes_per_image)\n score_list = [torch.nn.functional.softmax(x, -1) for x in score_list]\n feats = output[0][feature_name].split(n_boxes_per_image)\n cur_device = score_list[0].device\n\n feat_list = []\n info_list = []\n\n for i in range(batch_size):\n dets = output[0][\"proposals\"][i].bbox / im_scales[i]\n scores = score_list[i]\n max_conf = torch.zeros((scores.shape[0])).to(cur_device)\n conf_thresh_tensor = torch.full_like(max_conf, conf_thresh)\n start_index = 1\n # Column 0 of the scores matrix is for the background class\n if self.args.background:\n start_index = 0\n for cls_ind in range(start_index, scores.shape[1]):\n cls_scores = scores[:, cls_ind]\n keep = nms(dets, cls_scores, 0.5)\n max_conf[keep] = torch.where(\n # Better than max one till now and minimally greater than conf_thresh\n (cls_scores[keep] > max_conf[keep]) &\n (cls_scores[keep] > conf_thresh_tensor[keep]),\n cls_scores[keep], max_conf[keep]\n )\n\n sorted_scores, sorted_indices = torch.sort(max_conf, descending=True)\n num_boxes = (sorted_scores[:self.args.num_features] != 0).sum()\n keep_boxes = sorted_indices[:self.args.num_features]\n feat_list.append(feats[i][keep_boxes])\n bbox = output[0][\"proposals\"][i][keep_boxes].bbox / im_scales[i]\n # Predict the class label using the scores\n objects = torch.argmax(scores[keep_boxes][:, start_index:], dim=1)\n\n info_list.append(\n {\n \"bbox\": bbox.cpu().numpy(),\n \"num_boxes\": num_boxes.item(),\n \"objects\": objects.cpu().numpy(),\n \"image_width\": im_infos[i][\"width\"],\n \"image_height\": im_infos[i][\"height\"],\n }\n )\n\n return feat_list, info_list\n\n def get_detectron_features(self, image_paths):\n img_tensor, im_scales, im_infos = [], [], []\n\n for image_path in image_paths:\n im, im_scale, im_info = self._image_transform(image_path)\n img_tensor.append(im)\n im_scales.append(im_scale)\n im_infos.append(im_info)\n\n # Image dimensions should be divisible by 32, to allow convolutions\n # in detector to work\n current_img_list = to_image_list(img_tensor, size_divisible=32)\n current_img_list = current_img_list.to(\"cuda\")\n\n with torch.no_grad():\n output = 
self.detection_model(current_img_list)\n\n feat_list = self._process_feature_extraction(\n output, im_scales, im_infos, self.args.feature_name,\n self.args.confidence_threshold\n )\n\n return feat_list\n\n def _chunks(self, array, chunk_size):\n for i in range(0, len(array), chunk_size):\n yield array[i : i + chunk_size]\n\n def _save_feature(self, file_name, feature, info):\n file_base_name = os.path.basename(file_name)\n file_base_name = file_base_name.split(\".\")[0]\n info_file_base_name = file_base_name + \"_info.npy\"\n file_base_name = file_base_name + \".npy\"\n\n np.save(\n os.path.join(self.args.output_folder, file_base_name), feature.cpu().numpy()\n )\n np.save(os.path.join(self.args.output_folder, info_file_base_name), info)\n\n def extract_features(self):\n image_dir = self.args.image_dir\n\n if os.path.isfile(image_dir):\n features, infos = self.get_detectron_features([image_dir])\n self._save_feature(image_dir, features[0], infos[0])\n else:\n files = glob.glob(os.path.join(image_dir, \"*.jpg\"))\n for chunk in self._chunks(files, self.args.batch_size):\n features, infos = self.get_detectron_features(chunk)\n for idx, file_name in enumerate(chunk):\n self._save_feature(file_name, features[idx], infos[idx])\n\n\nif __name__ == \"__main__\":\n feature_extractor = FeatureExtractor()\n feature_extractor.extract_features()\n", "path": "pythia/scripts/features/extract_features_vmb.py"}]}
3,173
159
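
Editor's note on the record above: the one-character fix hinges on which axis gets sliced. A self-contained repro (tensor sizes are illustrative, not taken from the model):

```python
# Editor's sketch: why scores[keep_boxes][start_index:] selects the wrong axis.
import torch

scores = torch.rand(10, 5)           # [num_boxes, num_classes]; column 0 = background
keep_boxes = torch.tensor([0, 2, 3])

rows = scores[keep_boxes][1:]        # drops the first KEPT BOX   -> shape [2, 5]
cols = scores[keep_boxes][:, 1:]     # drops the BACKGROUND column -> shape [3, 4]

print(torch.argmax(rows, dim=1).shape)  # torch.Size([2]): one label too few
print(torch.argmax(cols, dim=1).shape)  # torch.Size([3]): one label per kept box
```

`scores[keep_boxes][1:]` silently discards the first kept box and yields too few labels, while `scores[keep_boxes][:, 1:]` drops only the background column and produces one class label per box, which is what the golden diff restores.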
gh_patches_debug_33379
rasdani/github-patches
git_diff
nv-legate__cunumeric-272
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Handle ufunc coverage wrappers more robustly ref: https://github.com/nv-legate/cunumeric/pull/268/files#r846513290 As noted in the above conversation, the generic callable wrapping for adding coverage reporting is not sufficient. Numpy (and thus cunumeric) `ufunc` are objects with their own API (https://numpy.org/doc/stable/reference/ufuncs.html) and just using a plain function wrapper makes those methods invisible. Some requirements to decide first: * Do all the methods of a `ufunc` need to be included in coverage reporting? Or just its `__call__` If yes, we will need to resort to a wrapping object (and then: is it sufficient to just create a purpose-built `ufunc_wrapper` or do we need a generic forwarding wrapper?) If not, we may be able to just wrap and replace `__call__` using the function wrappers similar to the existing ones. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `cunumeric/coverage.py` Content: ``` 1 # Copyright 2021-2022 NVIDIA Corporation 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 # 15 from __future__ import annotations 16 17 import warnings 18 from functools import wraps 19 from types import FunctionType, MethodDescriptorType, MethodType, ModuleType 20 from typing import Any, Callable, Container, Optional, cast 21 22 from typing_extensions import Protocol 23 24 from .runtime import runtime 25 from .utils import find_last_user_frames, find_last_user_stacklevel 26 27 __all__ = ("clone_module",) 28 29 FALLBACK_WARNING = ( 30 "cuNumeric has not implemented {name} " 31 + "and is falling back to canonical numpy. " 32 + "You may notice significantly decreased performance " 33 + "for this function call." 34 ) 35 36 MOD_INTERNAL = {"__dir__", "__getattr__"} 37 38 NDARRAY_INTERNAL = { 39 "__array_finalize__", 40 "__array_function__", 41 "__array_interface__", 42 "__array_prepare__", 43 "__array_priority__", 44 "__array_struct__", 45 "__array_ufunc__", 46 "__array_wrap__", 47 } 48 49 50 def filter_namespace( 51 ns: dict[str, Any], 52 *, 53 omit_names: Optional[Container[str]] = None, 54 omit_types: tuple[type, ...] = (), 55 ) -> dict[str, Any]: 56 omit_names = omit_names or set() 57 return { 58 attr: value 59 for attr, value in ns.items() 60 if attr not in omit_names and not isinstance(value, omit_types) 61 } 62 63 64 class AnyCallable(Protocol): 65 def __call__(self, *args: Any, **kwargs: Any) -> Any: 66 ... 67 68 69 class CuWrapped(Protocol): 70 _cunumeric_implemented: bool 71 72 def __call__(self, *args: Any, **kwargs: Any) -> Any: 73 ... 
74 75 76 def implemented( 77 func: AnyCallable, prefix: str, name: str, *, reporting: bool = True 78 ) -> CuWrapped: 79 name = f"{prefix}.{name}" 80 81 wrapper: CuWrapped 82 83 if reporting: 84 85 @wraps(func) 86 def wrapper(*args: Any, **kwargs: Any) -> Any: 87 location = find_last_user_frames(not runtime.report_dump_callstack) 88 runtime.record_api_call( 89 name=name, location=location, implemented=True 90 ) 91 return func(*args, **kwargs) 92 93 else: 94 95 wrapper = cast(CuWrapped, func) 96 97 wrapper._cunumeric_implemented = True 98 99 return wrapper 100 101 102 def unimplemented( 103 func: AnyCallable, prefix: str, name: str, *, reporting: bool = True 104 ) -> CuWrapped: 105 name = f"{prefix}.{name}" 106 107 wrapper: CuWrapped 108 109 if reporting: 110 111 @wraps(func) 112 def wrapper(*args: Any, **kwargs: Any) -> Any: 113 location = find_last_user_frames(not runtime.report_dump_callstack) 114 runtime.record_api_call( 115 name=name, location=location, implemented=False 116 ) 117 return func(*args, **kwargs) 118 119 else: 120 121 @wraps(func) 122 def wrapper(*args: Any, **kwargs: Any) -> Any: 123 stacklevel = find_last_user_stacklevel() 124 warnings.warn( 125 FALLBACK_WARNING.format(name=name), 126 stacklevel=stacklevel, 127 category=RuntimeWarning, 128 ) 129 return func(*args, **kwargs) 130 131 wrapper._cunumeric_implemented = False 132 133 return wrapper 134 135 136 def clone_module( 137 origin_module: ModuleType, new_globals: dict[str, Any] 138 ) -> None: 139 """Copy attributes from one module to another, excluding submodules 140 141 Function types are wrapped with a decorator to report API calls. All 142 other values are copied as-is. 143 144 Parameters 145 ---------- 146 origin_module : ModuleTpe 147 Existing module to clone attributes from 148 149 new_globals : dict 150 a globals() dict for the new module to clone into 151 152 Returns 153 ------- 154 None 155 156 """ 157 mod_name = origin_module.__name__ 158 159 missing = filter_namespace( 160 origin_module.__dict__, 161 omit_names=set(new_globals).union(MOD_INTERNAL), 162 omit_types=(ModuleType,), 163 ) 164 165 from numpy import ufunc as npufunc 166 167 from ._ufunc.ufunc import ufunc as lgufunc 168 169 reporting = runtime.report_coverage 170 171 for attr, value in new_globals.items(): 172 if isinstance(value, (FunctionType, lgufunc)): 173 wrapped = implemented( 174 cast(AnyCallable, value), mod_name, attr, reporting=reporting 175 ) 176 new_globals[attr] = wrapped 177 178 for attr, value in missing.items(): 179 if isinstance(value, (FunctionType, npufunc)): 180 wrapped = unimplemented(value, mod_name, attr, reporting=reporting) 181 new_globals[attr] = wrapped 182 else: 183 new_globals[attr] = value 184 185 186 def clone_class(origin_class: type) -> Callable[[type], type]: 187 """Copy attributes from one class to another 188 189 Method types are wrapped with a decorator to report API calls. All 190 other values are copied as-is. 191 192 Parameters 193 ---------- 194 origin_class : type 195 Existing class type to clone attributes from 196 197 """ 198 199 def should_wrap(obj: object) -> bool: 200 return isinstance( 201 obj, (FunctionType, MethodType, MethodDescriptorType) 202 ) 203 204 def decorator(cls: type) -> type: 205 class_name = f"{origin_class.__module__}.{origin_class.__name__}" 206 207 missing = filter_namespace( 208 origin_class.__dict__, 209 # this simply omits ndarray internal methods for any class. 
If 210 # we ever need to wrap more classes we may need to generalize to 211 # per-class specification of internal names to skip 212 omit_names=set(cls.__dict__).union(NDARRAY_INTERNAL), 213 ) 214 215 reporting = runtime.report_coverage 216 217 for attr, value in cls.__dict__.items(): 218 if should_wrap(value): 219 wrapped = implemented( 220 value, class_name, attr, reporting=reporting 221 ) 222 setattr(cls, attr, wrapped) 223 224 for attr, value in missing.items(): 225 if should_wrap(value): 226 wrapped = unimplemented( 227 value, class_name, attr, reporting=reporting 228 ) 229 setattr(cls, attr, wrapped) 230 else: 231 setattr(cls, attr, value) 232 233 return cls 234 235 return decorator 236 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/cunumeric/coverage.py b/cunumeric/coverage.py --- a/cunumeric/coverage.py +++ b/cunumeric/coverage.py @@ -24,7 +24,7 @@ from .runtime import runtime from .utils import find_last_user_frames, find_last_user_stacklevel -__all__ = ("clone_module",) +__all__ = ("clone_class", "clone_module") FALLBACK_WARNING = ( "cuNumeric has not implemented {name} " @@ -66,12 +66,9 @@ ... -class CuWrapped(Protocol): +class CuWrapped(AnyCallable, Protocol): _cunumeric_implemented: bool - def __call__(self, *args: Any, **kwargs: Any) -> Any: - ... - def implemented( func: AnyCallable, prefix: str, name: str, *, reporting: bool = True @@ -92,7 +89,9 @@ else: - wrapper = cast(CuWrapped, func) + @wraps(func) + def wrapper(*args: Any, **kwargs: Any) -> Any: + return func(*args, **kwargs) wrapper._cunumeric_implemented = True @@ -162,12 +161,10 @@ omit_types=(ModuleType,), ) - from numpy import ufunc as npufunc + reporting = runtime.report_coverage from ._ufunc.ufunc import ufunc as lgufunc - reporting = runtime.report_coverage - for attr, value in new_globals.items(): if isinstance(value, (FunctionType, lgufunc)): wrapped = implemented( @@ -175,6 +172,8 @@ ) new_globals[attr] = wrapped + from numpy import ufunc as npufunc + for attr, value in missing.items(): if isinstance(value, (FunctionType, npufunc)): wrapped = unimplemented(value, mod_name, attr, reporting=reporting)
{"golden_diff": "diff --git a/cunumeric/coverage.py b/cunumeric/coverage.py\n--- a/cunumeric/coverage.py\n+++ b/cunumeric/coverage.py\n@@ -24,7 +24,7 @@\n from .runtime import runtime\n from .utils import find_last_user_frames, find_last_user_stacklevel\n \n-__all__ = (\"clone_module\",)\n+__all__ = (\"clone_class\", \"clone_module\")\n \n FALLBACK_WARNING = (\n \"cuNumeric has not implemented {name} \"\n@@ -66,12 +66,9 @@\n ...\n \n \n-class CuWrapped(Protocol):\n+class CuWrapped(AnyCallable, Protocol):\n _cunumeric_implemented: bool\n \n- def __call__(self, *args: Any, **kwargs: Any) -> Any:\n- ...\n-\n \n def implemented(\n func: AnyCallable, prefix: str, name: str, *, reporting: bool = True\n@@ -92,7 +89,9 @@\n \n else:\n \n- wrapper = cast(CuWrapped, func)\n+ @wraps(func)\n+ def wrapper(*args: Any, **kwargs: Any) -> Any:\n+ return func(*args, **kwargs)\n \n wrapper._cunumeric_implemented = True\n \n@@ -162,12 +161,10 @@\n omit_types=(ModuleType,),\n )\n \n- from numpy import ufunc as npufunc\n+ reporting = runtime.report_coverage\n \n from ._ufunc.ufunc import ufunc as lgufunc\n \n- reporting = runtime.report_coverage\n-\n for attr, value in new_globals.items():\n if isinstance(value, (FunctionType, lgufunc)):\n wrapped = implemented(\n@@ -175,6 +172,8 @@\n )\n new_globals[attr] = wrapped\n \n+ from numpy import ufunc as npufunc\n+\n for attr, value in missing.items():\n if isinstance(value, (FunctionType, npufunc)):\n wrapped = unimplemented(value, mod_name, attr, reporting=reporting)\n", "issue": "Handle ufunc coverage wrappers more robustly\nref: https://github.com/nv-legate/cunumeric/pull/268/files#r846513290\r\n\r\nAs noted in the above conversation, the generic callable wrapping for adding coverage reporting is not sufficient. \r\n\r\nNumpy (and thus cunumeric) `ufunc` are objects with their own API (https://numpy.org/doc/stable/reference/ufuncs.html) and just using a plain function wrapper makes those methods invisible. Some requirements to decide first:\r\n\r\n* Do all the methods of a `ufunc` need to be included in coverage reporting? 
Or just its `__call__`\r\n\r\nIf yes, we will need to resort to a wrapping object (and then: is it sufficient to just create a purpose-built `ufunc_wrapper` or do we need a generic forwarding wrapper?)\r\n\r\nIf not, we may be able to just wrap and replace `__call__` using the function wrappers similar to the existing ones.\n", "before_files": [{"content": "# Copyright 2021-2022 NVIDIA Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import annotations\n\nimport warnings\nfrom functools import wraps\nfrom types import FunctionType, MethodDescriptorType, MethodType, ModuleType\nfrom typing import Any, Callable, Container, Optional, cast\n\nfrom typing_extensions import Protocol\n\nfrom .runtime import runtime\nfrom .utils import find_last_user_frames, find_last_user_stacklevel\n\n__all__ = (\"clone_module\",)\n\nFALLBACK_WARNING = (\n \"cuNumeric has not implemented {name} \"\n + \"and is falling back to canonical numpy. \"\n + \"You may notice significantly decreased performance \"\n + \"for this function call.\"\n)\n\nMOD_INTERNAL = {\"__dir__\", \"__getattr__\"}\n\nNDARRAY_INTERNAL = {\n \"__array_finalize__\",\n \"__array_function__\",\n \"__array_interface__\",\n \"__array_prepare__\",\n \"__array_priority__\",\n \"__array_struct__\",\n \"__array_ufunc__\",\n \"__array_wrap__\",\n}\n\n\ndef filter_namespace(\n ns: dict[str, Any],\n *,\n omit_names: Optional[Container[str]] = None,\n omit_types: tuple[type, ...] 
= (),\n) -> dict[str, Any]:\n omit_names = omit_names or set()\n return {\n attr: value\n for attr, value in ns.items()\n if attr not in omit_names and not isinstance(value, omit_types)\n }\n\n\nclass AnyCallable(Protocol):\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n ...\n\n\nclass CuWrapped(Protocol):\n _cunumeric_implemented: bool\n\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n ...\n\n\ndef implemented(\n func: AnyCallable, prefix: str, name: str, *, reporting: bool = True\n) -> CuWrapped:\n name = f\"{prefix}.{name}\"\n\n wrapper: CuWrapped\n\n if reporting:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n location = find_last_user_frames(not runtime.report_dump_callstack)\n runtime.record_api_call(\n name=name, location=location, implemented=True\n )\n return func(*args, **kwargs)\n\n else:\n\n wrapper = cast(CuWrapped, func)\n\n wrapper._cunumeric_implemented = True\n\n return wrapper\n\n\ndef unimplemented(\n func: AnyCallable, prefix: str, name: str, *, reporting: bool = True\n) -> CuWrapped:\n name = f\"{prefix}.{name}\"\n\n wrapper: CuWrapped\n\n if reporting:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n location = find_last_user_frames(not runtime.report_dump_callstack)\n runtime.record_api_call(\n name=name, location=location, implemented=False\n )\n return func(*args, **kwargs)\n\n else:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n stacklevel = find_last_user_stacklevel()\n warnings.warn(\n FALLBACK_WARNING.format(name=name),\n stacklevel=stacklevel,\n category=RuntimeWarning,\n )\n return func(*args, **kwargs)\n\n wrapper._cunumeric_implemented = False\n\n return wrapper\n\n\ndef clone_module(\n origin_module: ModuleType, new_globals: dict[str, Any]\n) -> None:\n \"\"\"Copy attributes from one module to another, excluding submodules\n\n Function types are wrapped with a decorator to report API calls. All\n other values are copied as-is.\n\n Parameters\n ----------\n origin_module : ModuleTpe\n Existing module to clone attributes from\n\n new_globals : dict\n a globals() dict for the new module to clone into\n\n Returns\n -------\n None\n\n \"\"\"\n mod_name = origin_module.__name__\n\n missing = filter_namespace(\n origin_module.__dict__,\n omit_names=set(new_globals).union(MOD_INTERNAL),\n omit_types=(ModuleType,),\n )\n\n from numpy import ufunc as npufunc\n\n from ._ufunc.ufunc import ufunc as lgufunc\n\n reporting = runtime.report_coverage\n\n for attr, value in new_globals.items():\n if isinstance(value, (FunctionType, lgufunc)):\n wrapped = implemented(\n cast(AnyCallable, value), mod_name, attr, reporting=reporting\n )\n new_globals[attr] = wrapped\n\n for attr, value in missing.items():\n if isinstance(value, (FunctionType, npufunc)):\n wrapped = unimplemented(value, mod_name, attr, reporting=reporting)\n new_globals[attr] = wrapped\n else:\n new_globals[attr] = value\n\n\ndef clone_class(origin_class: type) -> Callable[[type], type]:\n \"\"\"Copy attributes from one class to another\n\n Method types are wrapped with a decorator to report API calls. 
All\n other values are copied as-is.\n\n Parameters\n ----------\n origin_class : type\n Existing class type to clone attributes from\n\n \"\"\"\n\n def should_wrap(obj: object) -> bool:\n return isinstance(\n obj, (FunctionType, MethodType, MethodDescriptorType)\n )\n\n def decorator(cls: type) -> type:\n class_name = f\"{origin_class.__module__}.{origin_class.__name__}\"\n\n missing = filter_namespace(\n origin_class.__dict__,\n # this simply omits ndarray internal methods for any class. If\n # we ever need to wrap more classes we may need to generalize to\n # per-class specification of internal names to skip\n omit_names=set(cls.__dict__).union(NDARRAY_INTERNAL),\n )\n\n reporting = runtime.report_coverage\n\n for attr, value in cls.__dict__.items():\n if should_wrap(value):\n wrapped = implemented(\n value, class_name, attr, reporting=reporting\n )\n setattr(cls, attr, wrapped)\n\n for attr, value in missing.items():\n if should_wrap(value):\n wrapped = unimplemented(\n value, class_name, attr, reporting=reporting\n )\n setattr(cls, attr, wrapped)\n else:\n setattr(cls, attr, value)\n\n return cls\n\n return decorator\n", "path": "cunumeric/coverage.py"}], "after_files": [{"content": "# Copyright 2021-2022 NVIDIA Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import annotations\n\nimport warnings\nfrom functools import wraps\nfrom types import FunctionType, MethodDescriptorType, MethodType, ModuleType\nfrom typing import Any, Callable, Container, Optional, cast\n\nfrom typing_extensions import Protocol\n\nfrom .runtime import runtime\nfrom .utils import find_last_user_frames, find_last_user_stacklevel\n\n__all__ = (\"clone_class\", \"clone_module\")\n\nFALLBACK_WARNING = (\n \"cuNumeric has not implemented {name} \"\n + \"and is falling back to canonical numpy. \"\n + \"You may notice significantly decreased performance \"\n + \"for this function call.\"\n)\n\nMOD_INTERNAL = {\"__dir__\", \"__getattr__\"}\n\nNDARRAY_INTERNAL = {\n \"__array_finalize__\",\n \"__array_function__\",\n \"__array_interface__\",\n \"__array_prepare__\",\n \"__array_priority__\",\n \"__array_struct__\",\n \"__array_ufunc__\",\n \"__array_wrap__\",\n}\n\n\ndef filter_namespace(\n ns: dict[str, Any],\n *,\n omit_names: Optional[Container[str]] = None,\n omit_types: tuple[type, ...] 
= (),\n) -> dict[str, Any]:\n omit_names = omit_names or set()\n return {\n attr: value\n for attr, value in ns.items()\n if attr not in omit_names and not isinstance(value, omit_types)\n }\n\n\nclass AnyCallable(Protocol):\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n ...\n\n\nclass CuWrapped(AnyCallable, Protocol):\n _cunumeric_implemented: bool\n\n\ndef implemented(\n func: AnyCallable, prefix: str, name: str, *, reporting: bool = True\n) -> CuWrapped:\n name = f\"{prefix}.{name}\"\n\n wrapper: CuWrapped\n\n if reporting:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n location = find_last_user_frames(not runtime.report_dump_callstack)\n runtime.record_api_call(\n name=name, location=location, implemented=True\n )\n return func(*args, **kwargs)\n\n else:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n return func(*args, **kwargs)\n\n wrapper._cunumeric_implemented = True\n\n return wrapper\n\n\ndef unimplemented(\n func: AnyCallable, prefix: str, name: str, *, reporting: bool = True\n) -> CuWrapped:\n name = f\"{prefix}.{name}\"\n\n wrapper: CuWrapped\n\n if reporting:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n location = find_last_user_frames(not runtime.report_dump_callstack)\n runtime.record_api_call(\n name=name, location=location, implemented=False\n )\n return func(*args, **kwargs)\n\n else:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n stacklevel = find_last_user_stacklevel()\n warnings.warn(\n FALLBACK_WARNING.format(name=name),\n stacklevel=stacklevel,\n category=RuntimeWarning,\n )\n return func(*args, **kwargs)\n\n wrapper._cunumeric_implemented = False\n\n return wrapper\n\n\ndef clone_module(\n origin_module: ModuleType, new_globals: dict[str, Any]\n) -> None:\n \"\"\"Copy attributes from one module to another, excluding submodules\n\n Function types are wrapped with a decorator to report API calls. All\n other values are copied as-is.\n\n Parameters\n ----------\n origin_module : ModuleTpe\n Existing module to clone attributes from\n\n new_globals : dict\n a globals() dict for the new module to clone into\n\n Returns\n -------\n None\n\n \"\"\"\n mod_name = origin_module.__name__\n\n missing = filter_namespace(\n origin_module.__dict__,\n omit_names=set(new_globals).union(MOD_INTERNAL),\n omit_types=(ModuleType,),\n )\n\n reporting = runtime.report_coverage\n\n from ._ufunc.ufunc import ufunc as lgufunc\n\n for attr, value in new_globals.items():\n if isinstance(value, (FunctionType, lgufunc)):\n wrapped = implemented(\n cast(AnyCallable, value), mod_name, attr, reporting=reporting\n )\n new_globals[attr] = wrapped\n\n from numpy import ufunc as npufunc\n\n for attr, value in missing.items():\n if isinstance(value, (FunctionType, npufunc)):\n wrapped = unimplemented(value, mod_name, attr, reporting=reporting)\n new_globals[attr] = wrapped\n else:\n new_globals[attr] = value\n\n\ndef clone_class(origin_class: type) -> Callable[[type], type]:\n \"\"\"Copy attributes from one class to another\n\n Method types are wrapped with a decorator to report API calls. 
All\n other values are copied as-is.\n\n Parameters\n ----------\n origin_class : type\n Existing class type to clone attributes from\n\n \"\"\"\n\n def should_wrap(obj: object) -> bool:\n return isinstance(\n obj, (FunctionType, MethodType, MethodDescriptorType)\n )\n\n def decorator(cls: type) -> type:\n class_name = f\"{origin_class.__module__}.{origin_class.__name__}\"\n\n missing = filter_namespace(\n origin_class.__dict__,\n # this simply omits ndarray internal methods for any class. If\n # we ever need to wrap more classes we may need to generalize to\n # per-class specification of internal names to skip\n omit_names=set(cls.__dict__).union(NDARRAY_INTERNAL),\n )\n\n reporting = runtime.report_coverage\n\n for attr, value in cls.__dict__.items():\n if should_wrap(value):\n wrapped = implemented(\n value, class_name, attr, reporting=reporting\n )\n setattr(cls, attr, wrapped)\n\n for attr, value in missing.items():\n if should_wrap(value):\n wrapped = unimplemented(\n value, class_name, attr, reporting=reporting\n )\n setattr(cls, attr, wrapped)\n else:\n setattr(cls, attr, value)\n\n return cls\n\n return decorator\n", "path": "cunumeric/coverage.py"}]}
2,563
449
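
Editor's note on the record above: the diff replaces `cast(CuWrapped, func)` with a real pass-through wrapper because some callables, numpy ufuncs in particular, reject attribute assignment, so the `_cunumeric_implemented` flag needs a `functools.wraps` shim to live on. A minimal demonstration:

```python
# Editor's sketch: the reason the golden diff drops cast() for a real wrapper.
# numpy ufuncs are C-level objects that refuse new attributes, so the coverage
# flag has to live on a functools.wraps shim instead of the ufunc itself.
from functools import wraps
import numpy as np

def implemented(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper._cunumeric_implemented = True   # plain functions accept new attributes
    return wrapper

try:
    np.add._cunumeric_implemented = True
except AttributeError as exc:
    print('direct tagging fails:', exc)

add = implemented(np.add)
print(add(2, 3), add._cunumeric_implemented)   # 5 True
```

This also illustrates the trade-off the issue raises: the shim forwards `__call__` only, so ufunc methods such as `np.add.reduce` are not visible on the wrapper.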
gh_patches_debug_6992
rasdani/github-patches
git_diff
ansible__awx-13627
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Unable to use CCP lookup plugin with empty webservice_id ### Please confirm the following - [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html). - [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates. - [X] I understand that AWX is open source software provided for free and that I might not receive a timely response. ### Bug Summary When job uses the `CyberArk Central Credential Provider Lookup` credential plugin with an empty web service id, it fails with the exception: ``` Traceback (most recent call last): File "/awx_devel/awx/main/tasks/jobs.py", line 508, in run args = self.build_args(self.instance, private_data_dir, passwords) File "/awx_devel/awx/main/tasks/jobs.py", line 941, in build_args ssh_username = creds.get_input('username', default='') File "/awx_devel/awx/main/models/credential/__init__.py", line 275, in get_input return self._get_dynamic_input(field_name) File "/awx_devel/awx/main/models/credential/__init__.py", line 309, in _get_dynamic_input return input_source.get_input_value() File "/awx_devel/awx/main/models/credential/__init__.py", line 1250, in get_input_value return backend(**backend_kwargs) File "/awx_devel/awx/main/credential_plugins/aim.py", line 73, in aim_backend webservice_id = kwargs['webservice_id'] KeyError: 'webservice_id' ``` The issue is only reproducible if we create a CCP lookup credential using API and we do not provide the `webservice_id` key as the input. If you create CCP lookup with UI - everything works fine. ### AWX version devel ### Select the relevant components - [ ] UI - [X] API - [ ] Docs - [ ] Collection - [ ] CLI - [ ] Other ### Installation method docker development environment ### Modifications no ### Ansible version _No response_ ### Operating system _No response_ ### Web browser _No response_ ### Steps to reproduce 1. Create CyberArk Central Credential Provider Lookup credential. Do not provide the WebService ID value, keep it empty. I used API to create credetnail and the webservice_id was missing in the inputs: ``` inputs = { 'url': url, 'app_id': app_id, 'client_key': client_key, 'client_cert': client_cert, 'verify': verify } payload = factories.credential.payload( name=fauxfactory.gen_utf8(), description=fauxfactory.gen_utf8(), credential_type=cred_type, inputs=inputs ) ``` 2. Create Machine credential that uses the CCP lookup credential. Set proper Object query. 3. Create Job Template that uses this credential. Run the job. ### Expected results The lookup should use default webservice id: `AIMWebService` ### Actual results Exception occured. See description. ### Additional information _No response_ --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `awx/main/credential_plugins/aim.py` Content: ``` 1 from .plugin import CredentialPlugin, CertFiles, raise_for_status 2 3 from urllib.parse import quote, urlencode, urljoin 4 5 from django.utils.translation import gettext_lazy as _ 6 import requests 7 8 aim_inputs = { 9 'fields': [ 10 { 11 'id': 'url', 12 'label': _('CyberArk CCP URL'), 13 'type': 'string', 14 'format': 'url', 15 }, 16 { 17 'id': 'webservice_id', 18 'label': _('Web Service ID'), 19 'type': 'string', 20 'help_text': _('The CCP Web Service ID. 
Leave blank to default to AIMWebService.'), 21 }, 22 { 23 'id': 'app_id', 24 'label': _('Application ID'), 25 'type': 'string', 26 'secret': True, 27 }, 28 { 29 'id': 'client_key', 30 'label': _('Client Key'), 31 'type': 'string', 32 'secret': True, 33 'multiline': True, 34 }, 35 { 36 'id': 'client_cert', 37 'label': _('Client Certificate'), 38 'type': 'string', 39 'secret': True, 40 'multiline': True, 41 }, 42 { 43 'id': 'verify', 44 'label': _('Verify SSL Certificates'), 45 'type': 'boolean', 46 'default': True, 47 }, 48 ], 49 'metadata': [ 50 { 51 'id': 'object_query', 52 'label': _('Object Query'), 53 'type': 'string', 54 'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'), 55 }, 56 {'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']}, 57 { 58 'id': 'reason', 59 'label': _('Reason'), 60 'type': 'string', 61 'help_text': _('Object request reason. This is only needed if it is required by the object\'s policy.'), 62 }, 63 ], 64 'required': ['url', 'app_id', 'object_query'], 65 } 66 67 68 def aim_backend(**kwargs): 69 url = kwargs['url'] 70 client_cert = kwargs.get('client_cert', None) 71 client_key = kwargs.get('client_key', None) 72 verify = kwargs['verify'] 73 webservice_id = kwargs['webservice_id'] 74 app_id = kwargs['app_id'] 75 object_query = kwargs['object_query'] 76 object_query_format = kwargs['object_query_format'] 77 reason = kwargs.get('reason', None) 78 if webservice_id == '': 79 webservice_id = 'AIMWebService' 80 81 query_params = { 82 'AppId': app_id, 83 'Query': object_query, 84 'QueryFormat': object_query_format, 85 } 86 if reason: 87 query_params['reason'] = reason 88 89 request_qs = '?' + urlencode(query_params, quote_via=quote) 90 request_url = urljoin(url, '/'.join([webservice_id, 'api', 'Accounts'])) 91 92 with CertFiles(client_cert, client_key) as cert: 93 res = requests.get( 94 request_url + request_qs, 95 timeout=30, 96 cert=cert, 97 verify=verify, 98 allow_redirects=False, 99 ) 100 raise_for_status(res) 101 return res.json()['Content'] 102 103 104 aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend) 105 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/awx/main/credential_plugins/aim.py b/awx/main/credential_plugins/aim.py --- a/awx/main/credential_plugins/aim.py +++ b/awx/main/credential_plugins/aim.py @@ -70,7 +70,7 @@ client_cert = kwargs.get('client_cert', None) client_key = kwargs.get('client_key', None) verify = kwargs['verify'] - webservice_id = kwargs['webservice_id'] + webservice_id = kwargs.get('webservice_id', '') app_id = kwargs['app_id'] object_query = kwargs['object_query'] object_query_format = kwargs['object_query_format']
{"golden_diff": "diff --git a/awx/main/credential_plugins/aim.py b/awx/main/credential_plugins/aim.py\n--- a/awx/main/credential_plugins/aim.py\n+++ b/awx/main/credential_plugins/aim.py\n@@ -70,7 +70,7 @@\n client_cert = kwargs.get('client_cert', None)\n client_key = kwargs.get('client_key', None)\n verify = kwargs['verify']\n- webservice_id = kwargs['webservice_id']\n+ webservice_id = kwargs.get('webservice_id', '')\n app_id = kwargs['app_id']\n object_query = kwargs['object_query']\n object_query_format = kwargs['object_query_format']\n", "issue": "Unable to use CCP lookup plugin with empty webservice_id\n### Please confirm the following\r\n\r\n- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).\r\n- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.\r\n- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.\r\n\r\n### Bug Summary\r\n\r\nWhen job uses the `CyberArk Central Credential Provider Lookup` credential plugin with an empty web service id, it fails with the exception: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/awx_devel/awx/main/tasks/jobs.py\", line 508, in run\r\n args = self.build_args(self.instance, private_data_dir, passwords)\r\n File \"/awx_devel/awx/main/tasks/jobs.py\", line 941, in build_args\r\n ssh_username = creds.get_input('username', default='')\r\n File \"/awx_devel/awx/main/models/credential/__init__.py\", line 275, in get_input\r\n return self._get_dynamic_input(field_name)\r\n File \"/awx_devel/awx/main/models/credential/__init__.py\", line 309, in _get_dynamic_input\r\n return input_source.get_input_value()\r\n File \"/awx_devel/awx/main/models/credential/__init__.py\", line 1250, in get_input_value\r\n return backend(**backend_kwargs)\r\n File \"/awx_devel/awx/main/credential_plugins/aim.py\", line 73, in aim_backend\r\n webservice_id = kwargs['webservice_id']\r\nKeyError: 'webservice_id'\r\n```\r\n\r\nThe issue is only reproducible if we create a CCP lookup credential using API and we do not provide the `webservice_id` key as the input. If you create CCP lookup with UI - everything works fine. \r\n\r\n### AWX version\r\n\r\ndevel\r\n\r\n### Select the relevant components\r\n\r\n- [ ] UI\r\n- [X] API\r\n- [ ] Docs\r\n- [ ] Collection\r\n- [ ] CLI\r\n- [ ] Other\r\n\r\n### Installation method\r\n\r\ndocker development environment\r\n\r\n### Modifications\r\n\r\nno\r\n\r\n### Ansible version\r\n\r\n_No response_\r\n\r\n### Operating system\r\n\r\n_No response_\r\n\r\n### Web browser\r\n\r\n_No response_\r\n\r\n### Steps to reproduce\r\n\r\n1. Create CyberArk Central Credential Provider Lookup credential. Do not provide the WebService ID value, keep it empty. I used API to create credetnail and the webservice_id was missing in the inputs: \r\n\r\n```\r\ninputs = {\r\n 'url': url,\r\n 'app_id': app_id,\r\n 'client_key': client_key,\r\n 'client_cert': client_cert,\r\n 'verify': verify\r\n}\r\n\r\npayload = factories.credential.payload(\r\n name=fauxfactory.gen_utf8(),\r\n description=fauxfactory.gen_utf8(),\r\n credential_type=cred_type,\r\n inputs=inputs\r\n)\r\n```\r\n\r\n2. Create Machine credential that uses the CCP lookup credential. Set proper Object query. \r\n3. Create Job Template that uses this credential. Run the job. 
\r\n\r\n\r\n\r\n### Expected results\r\n\r\nThe lookup should use default webservice id: `AIMWebService`\r\n\r\n### Actual results\r\n\r\nException occured. See description. \r\n\r\n\r\n\r\n### Additional information\r\n\r\n_No response_\n", "before_files": [{"content": "from .plugin import CredentialPlugin, CertFiles, raise_for_status\n\nfrom urllib.parse import quote, urlencode, urljoin\n\nfrom django.utils.translation import gettext_lazy as _\nimport requests\n\naim_inputs = {\n 'fields': [\n {\n 'id': 'url',\n 'label': _('CyberArk CCP URL'),\n 'type': 'string',\n 'format': 'url',\n },\n {\n 'id': 'webservice_id',\n 'label': _('Web Service ID'),\n 'type': 'string',\n 'help_text': _('The CCP Web Service ID. Leave blank to default to AIMWebService.'),\n },\n {\n 'id': 'app_id',\n 'label': _('Application ID'),\n 'type': 'string',\n 'secret': True,\n },\n {\n 'id': 'client_key',\n 'label': _('Client Key'),\n 'type': 'string',\n 'secret': True,\n 'multiline': True,\n },\n {\n 'id': 'client_cert',\n 'label': _('Client Certificate'),\n 'type': 'string',\n 'secret': True,\n 'multiline': True,\n },\n {\n 'id': 'verify',\n 'label': _('Verify SSL Certificates'),\n 'type': 'boolean',\n 'default': True,\n },\n ],\n 'metadata': [\n {\n 'id': 'object_query',\n 'label': _('Object Query'),\n 'type': 'string',\n 'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'),\n },\n {'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']},\n {\n 'id': 'reason',\n 'label': _('Reason'),\n 'type': 'string',\n 'help_text': _('Object request reason. This is only needed if it is required by the object\\'s policy.'),\n },\n ],\n 'required': ['url', 'app_id', 'object_query'],\n}\n\n\ndef aim_backend(**kwargs):\n url = kwargs['url']\n client_cert = kwargs.get('client_cert', None)\n client_key = kwargs.get('client_key', None)\n verify = kwargs['verify']\n webservice_id = kwargs['webservice_id']\n app_id = kwargs['app_id']\n object_query = kwargs['object_query']\n object_query_format = kwargs['object_query_format']\n reason = kwargs.get('reason', None)\n if webservice_id == '':\n webservice_id = 'AIMWebService'\n\n query_params = {\n 'AppId': app_id,\n 'Query': object_query,\n 'QueryFormat': object_query_format,\n }\n if reason:\n query_params['reason'] = reason\n\n request_qs = '?' + urlencode(query_params, quote_via=quote)\n request_url = urljoin(url, '/'.join([webservice_id, 'api', 'Accounts']))\n\n with CertFiles(client_cert, client_key) as cert:\n res = requests.get(\n request_url + request_qs,\n timeout=30,\n cert=cert,\n verify=verify,\n allow_redirects=False,\n )\n raise_for_status(res)\n return res.json()['Content']\n\n\naim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)\n", "path": "awx/main/credential_plugins/aim.py"}], "after_files": [{"content": "from .plugin import CredentialPlugin, CertFiles, raise_for_status\n\nfrom urllib.parse import quote, urlencode, urljoin\n\nfrom django.utils.translation import gettext_lazy as _\nimport requests\n\naim_inputs = {\n 'fields': [\n {\n 'id': 'url',\n 'label': _('CyberArk CCP URL'),\n 'type': 'string',\n 'format': 'url',\n },\n {\n 'id': 'webservice_id',\n 'label': _('Web Service ID'),\n 'type': 'string',\n 'help_text': _('The CCP Web Service ID. 
Leave blank to default to AIMWebService.'),\n },\n {\n 'id': 'app_id',\n 'label': _('Application ID'),\n 'type': 'string',\n 'secret': True,\n },\n {\n 'id': 'client_key',\n 'label': _('Client Key'),\n 'type': 'string',\n 'secret': True,\n 'multiline': True,\n },\n {\n 'id': 'client_cert',\n 'label': _('Client Certificate'),\n 'type': 'string',\n 'secret': True,\n 'multiline': True,\n },\n {\n 'id': 'verify',\n 'label': _('Verify SSL Certificates'),\n 'type': 'boolean',\n 'default': True,\n },\n ],\n 'metadata': [\n {\n 'id': 'object_query',\n 'label': _('Object Query'),\n 'type': 'string',\n 'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'),\n },\n {'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']},\n {\n 'id': 'reason',\n 'label': _('Reason'),\n 'type': 'string',\n 'help_text': _('Object request reason. This is only needed if it is required by the object\\'s policy.'),\n },\n ],\n 'required': ['url', 'app_id', 'object_query'],\n}\n\n\ndef aim_backend(**kwargs):\n url = kwargs['url']\n client_cert = kwargs.get('client_cert', None)\n client_key = kwargs.get('client_key', None)\n verify = kwargs['verify']\n webservice_id = kwargs.get('webservice_id', '')\n app_id = kwargs['app_id']\n object_query = kwargs['object_query']\n object_query_format = kwargs['object_query_format']\n reason = kwargs.get('reason', None)\n if webservice_id == '':\n webservice_id = 'AIMWebService'\n\n query_params = {\n 'AppId': app_id,\n 'Query': object_query,\n 'QueryFormat': object_query_format,\n }\n if reason:\n query_params['reason'] = reason\n\n request_qs = '?' + urlencode(query_params, quote_via=quote)\n request_url = urljoin(url, '/'.join([webservice_id, 'api', 'Accounts']))\n\n with CertFiles(client_cert, client_key) as cert:\n res = requests.get(\n request_url + request_qs,\n timeout=30,\n cert=cert,\n verify=verify,\n allow_redirects=False,\n )\n raise_for_status(res)\n return res.json()['Content']\n\n\naim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)\n", "path": "awx/main/credential_plugins/aim.py"}]}
1921
150
gh_patches_debug_7696
rasdani/github-patches
git_diff
borgbackup__borg-6129
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- it's 2022 check misc. places in source, docs, readme, copyright, license, ... and update to 2022. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `docs/conf.py` Content: ``` 1 # documentation build configuration file, created by 2 # sphinx-quickstart on Sat Sep 10 18:18:25 2011. 3 # 4 # This file is execfile()d with the current directory set to its containing dir. 5 # 6 # Note that not all possible configuration values are present in this 7 # autogenerated file. 8 # 9 # All configuration values have a default; values that are commented out 10 # serve to show the default. 11 12 # If extensions (or modules to document with autodoc) are in another directory, 13 # add these directories to sys.path here. If the directory is relative to the 14 # documentation root, use os.path.abspath to make it absolute, like shown here. 15 import sys, os 16 sys.path.insert(0, os.path.abspath('../src')) 17 18 from borg import __version__ as sw_version 19 20 # -- General configuration ----------------------------------------------------- 21 22 # If your documentation needs a minimal Sphinx version, state it here. 23 #needs_sphinx = '1.0' 24 25 # Add any Sphinx extension module names here, as strings. They can be extensions 26 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 27 extensions = [] 28 29 # Add any paths that contain templates here, relative to this directory. 30 templates_path = ['_templates'] 31 32 # The suffix of source filenames. 33 source_suffix = '.rst' 34 35 # The encoding of source files. 36 #source_encoding = 'utf-8-sig' 37 38 # The master toctree document. 39 master_doc = 'index' 40 41 # General information about the project. 42 project = 'Borg - Deduplicating Archiver' 43 copyright = u'2010-2014 Jonas Borgström, 2015-2021 The Borg Collective (see AUTHORS file)' 44 45 # The version info for the project you're documenting, acts as replacement for 46 # |version| and |release|, also used in various other places throughout the 47 # built documents. 48 # 49 # The short X.Y version. 50 split_char = '+' if '+' in sw_version else '-' 51 version = sw_version.split(split_char)[0] 52 # The full version, including alpha/beta/rc tags. 53 release = version 54 55 suppress_warnings = ['image.nonlocal_uri'] 56 57 # The language for content autogenerated by Sphinx. Refer to documentation 58 # for a list of supported languages. 59 #language = None 60 61 # There are two options for replacing |today|: either, you set today to some 62 # non-false value, then it is used: 63 #today = '' 64 # Else, today_fmt is used as the format for a strftime call. 65 today_fmt = '%Y-%m-%d' 66 67 # List of patterns, relative to source directory, that match files and 68 # directories to ignore when looking for source files. 69 exclude_patterns = ['_build'] 70 71 # The reST default role (used for this markup: `text`) to use for all documents. 72 #default_role = None 73 74 # The Borg docs contain no or very little Python docs. 75 # Thus, the primary domain is rst. 76 primary_domain = 'rst' 77 78 # If true, '()' will be appended to :func: etc. cross-reference text. 79 #add_function_parentheses = True 80 81 # If true, the current module name will be prepended to all description 82 # unit titles (such as .. function::). 
83 #add_module_names = True 84 85 # If true, sectionauthor and moduleauthor directives will be shown in the 86 # output. They are ignored by default. 87 #show_authors = False 88 89 # The name of the Pygments (syntax highlighting) style to use. 90 pygments_style = 'sphinx' 91 92 # A list of ignored prefixes for module index sorting. 93 #modindex_common_prefix = [] 94 95 96 # -- Options for HTML output --------------------------------------------------- 97 98 # The theme to use for HTML and HTML Help pages. See the documentation for 99 # a list of builtin themes. 100 import guzzle_sphinx_theme 101 102 html_theme_path = guzzle_sphinx_theme.html_theme_path() 103 html_theme = 'guzzle_sphinx_theme' 104 105 106 def set_rst_settings(app): 107 app.env.settings.update({ 108 'field_name_limit': 0, 109 'option_limit': 0, 110 }) 111 112 113 def setup(app): 114 app.add_css_file('css/borg.css') 115 app.connect('builder-inited', set_rst_settings) 116 117 # Theme options are theme-specific and customize the look and feel of a theme 118 # further. For a list of options available for each theme, see the 119 # documentation. 120 html_theme_options = { 121 'project_nav_name': 'Borg %s' % version, 122 } 123 124 # Add any paths that contain custom themes here, relative to this directory. 125 #html_theme_path = ['_themes'] 126 127 # The name for this set of Sphinx documents. If None, it defaults to 128 # "<project> v<release> documentation". 129 #html_title = None 130 131 # A shorter title for the navigation bar. Default is the same as html_title. 132 #html_short_title = None 133 134 # The name of an image file (relative to this directory) to place at the top 135 # of the sidebar. 136 html_logo = '_static/logo.svg' 137 138 # The name of an image file (within the static path) to use as favicon of the 139 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 140 # pixels large. 141 html_favicon = '_static/favicon.ico' 142 143 # Add any paths that contain custom static files (such as style sheets) here, 144 # relative to this directory. They are copied after the builtin static files, 145 # so a file named "default.css" will overwrite the builtin "default.css". 146 html_static_path = ['borg_theme'] 147 148 html_extra_path = ['../src/borg/paperkey.html'] 149 150 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 151 # using the given strftime format. 152 html_last_updated_fmt = '%Y-%m-%d' 153 154 # If true, SmartyPants will be used to convert quotes and dashes to 155 # typographically correct entities. 156 html_use_smartypants = True 157 158 # Custom sidebar templates, maps document names to template names. 159 html_sidebars = { 160 '**': ['logo-text.html', 'searchbox.html', 'globaltoc.html'], 161 } 162 163 # Additional templates that should be rendered to pages, maps page names to 164 # template names. 165 #html_additional_pages = {} 166 167 # If false, no module index is generated. 168 #html_domain_indices = True 169 170 # If false, no index is generated. 171 html_use_index = False 172 173 # If true, the index is split into individual pages for each letter. 174 #html_split_index = False 175 176 # If true, links to the reST sources are added to the pages. 177 html_show_sourcelink = False 178 179 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 180 html_show_sphinx = False 181 182 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 
183 html_show_copyright = False 184 185 # If true, an OpenSearch description file will be output, and all pages will 186 # contain a <link> tag referring to it. The value of this option must be the 187 # base URL from which the finished HTML is served. 188 #html_use_opensearch = '' 189 190 # This is the file name suffix for HTML files (e.g. ".xhtml"). 191 #html_file_suffix = None 192 193 # Output file base name for HTML help builder. 194 htmlhelp_basename = 'borgdoc' 195 196 197 # -- Options for LaTeX output -------------------------------------------------- 198 199 # Grouping the document tree into LaTeX files. List of tuples 200 # (source start file, target name, title, author, documentclass [howto/manual]). 201 latex_documents = [ 202 ('book', 'Borg.tex', 'Borg Documentation', 203 'The Borg Collective', 'manual'), 204 ] 205 206 # The name of an image file (relative to this directory) to place at the top of 207 # the title page. 208 latex_logo = '_static/logo.pdf' 209 210 latex_elements = { 211 'papersize': 'a4paper', 212 'pointsize': '10pt', 213 'figure_align': 'H', 214 } 215 216 # For "manual" documents, if this is true, then toplevel headings are parts, 217 # not chapters. 218 #latex_use_parts = False 219 220 # If true, show page references after internal links. 221 #latex_show_pagerefs = False 222 223 # If true, show URL addresses after external links. 224 latex_show_urls = 'footnote' 225 226 # Additional stuff for the LaTeX preamble. 227 #latex_preamble = '' 228 229 # Documents to append as an appendix to all manuals. 230 latex_appendices = [ 231 'support', 232 'resources', 233 'changes', 234 'authors', 235 ] 236 237 # If false, no module index is generated. 238 #latex_domain_indices = True 239 240 241 # -- Options for manual page output -------------------------------------------- 242 243 # One entry per manual page. List of tuples 244 # (source start file, name, description, authors, manual section). 245 man_pages = [ 246 ('usage', 'borg', 247 'BorgBackup is a deduplicating backup program with optional compression and authenticated encryption.', 248 ['The Borg Collective (see AUTHORS file)'], 249 1), 250 ] 251 252 extensions = [ 253 'sphinx.ext.extlinks', 254 'sphinx.ext.autodoc', 255 'sphinx.ext.todo', 256 'sphinx.ext.coverage', 257 'sphinx.ext.viewcode', 258 ] 259 260 extlinks = { 261 'issue': ('https://github.com/borgbackup/borg/issues/%s', '#'), 262 'targz_url': ('https://pypi.python.org/packages/source/b/borgbackup/%%s-%s.tar.gz' % version, None), 263 } 264 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/docs/conf.py b/docs/conf.py --- a/docs/conf.py +++ b/docs/conf.py @@ -40,7 +40,7 @@ # General information about the project. project = 'Borg - Deduplicating Archiver' -copyright = u'2010-2014 Jonas Borgström, 2015-2021 The Borg Collective (see AUTHORS file)' +copyright = u'2010-2014 Jonas Borgström, 2015-2022 The Borg Collective (see AUTHORS file)' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -40,7 +40,7 @@\n \n # General information about the project.\n project = 'Borg - Deduplicating Archiver'\n-copyright = u'2010-2014 Jonas Borgstr\u00f6m, 2015-2021 The Borg Collective (see AUTHORS file)'\n+copyright = u'2010-2014 Jonas Borgstr\u00f6m, 2015-2022 The Borg Collective (see AUTHORS file)'\n \n # The version info for the project you're documenting, acts as replacement for\n # |version| and |release|, also used in various other places throughout the\n", "issue": "it's 2022\ncheck misc. places in source, docs, readme, copyright, license, ... and update to 2022.\n", "before_files": [{"content": "# documentation build configuration file, created by\n# sphinx-quickstart on Sat Sep 10 18:18:25 2011.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nimport sys, os\nsys.path.insert(0, os.path.abspath('../src'))\n\nfrom borg import __version__ as sw_version\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = []\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Borg - Deduplicating Archiver'\ncopyright = u'2010-2014 Jonas Borgstr\u00f6m, 2015-2021 The Borg Collective (see AUTHORS file)'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nsplit_char = '+' if '+' in sw_version else '-'\nversion = sw_version.split(split_char)[0]\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\nsuppress_warnings = ['image.nonlocal_uri']\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\ntoday_fmt = '%Y-%m-%d'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# The Borg docs contain no or very little Python docs.\n# Thus, the primary domain is rst.\nprimary_domain = 'rst'\n\n# If true, '()' will be appended to :func: etc. 
cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nimport guzzle_sphinx_theme\n\nhtml_theme_path = guzzle_sphinx_theme.html_theme_path()\nhtml_theme = 'guzzle_sphinx_theme'\n\n\ndef set_rst_settings(app):\n app.env.settings.update({\n 'field_name_limit': 0,\n 'option_limit': 0,\n })\n\n\ndef setup(app):\n app.add_css_file('css/borg.css')\n app.connect('builder-inited', set_rst_settings)\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'project_nav_name': 'Borg %s' % version,\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = ['_themes']\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '_static/logo.svg'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['borg_theme']\n\nhtml_extra_path = ['../src/borg/paperkey.html']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%Y-%m-%d'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\nhtml_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n '**': ['logo-text.html', 'searchbox.html', 'globaltoc.html'],\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = False\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\nhtml_show_copyright = False\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. 
The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'borgdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('book', 'Borg.tex', 'Borg Documentation',\n 'The Borg Collective', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\nlatex_logo = '_static/logo.pdf'\n\nlatex_elements = {\n 'papersize': 'a4paper',\n 'pointsize': '10pt',\n 'figure_align': 'H',\n}\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\nlatex_show_urls = 'footnote'\n\n# Additional stuff for the LaTeX preamble.\n#latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\nlatex_appendices = [\n 'support',\n 'resources',\n 'changes',\n 'authors',\n]\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('usage', 'borg',\n 'BorgBackup is a deduplicating backup program with optional compression and authenticated encryption.',\n ['The Borg Collective (see AUTHORS file)'],\n 1),\n]\n\nextensions = [\n 'sphinx.ext.extlinks',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n]\n\nextlinks = {\n 'issue': ('https://github.com/borgbackup/borg/issues/%s', '#'),\n 'targz_url': ('https://pypi.python.org/packages/source/b/borgbackup/%%s-%s.tar.gz' % version, None),\n}\n", "path": "docs/conf.py"}], "after_files": [{"content": "# documentation build configuration file, created by\n# sphinx-quickstart on Sat Sep 10 18:18:25 2011.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nimport sys, os\nsys.path.insert(0, os.path.abspath('../src'))\n\nfrom borg import __version__ as sw_version\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = []\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Borg - Deduplicating Archiver'\ncopyright = u'2010-2014 Jonas Borgstr\u00f6m, 2015-2022 The Borg Collective (see AUTHORS file)'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nsplit_char = '+' if '+' in sw_version else '-'\nversion = sw_version.split(split_char)[0]\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\nsuppress_warnings = ['image.nonlocal_uri']\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\ntoday_fmt = '%Y-%m-%d'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# The Borg docs contain no or very little Python docs.\n# Thus, the primary domain is rst.\nprimary_domain = 'rst'\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nimport guzzle_sphinx_theme\n\nhtml_theme_path = guzzle_sphinx_theme.html_theme_path()\nhtml_theme = 'guzzle_sphinx_theme'\n\n\ndef set_rst_settings(app):\n app.env.settings.update({\n 'field_name_limit': 0,\n 'option_limit': 0,\n })\n\n\ndef setup(app):\n app.add_css_file('css/borg.css')\n app.connect('builder-inited', set_rst_settings)\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'project_nav_name': 'Borg %s' % version,\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = ['_themes']\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. 
Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '_static/logo.svg'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['borg_theme']\n\nhtml_extra_path = ['../src/borg/paperkey.html']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%Y-%m-%d'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\nhtml_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n '**': ['logo-text.html', 'searchbox.html', 'globaltoc.html'],\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = False\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\nhtml_show_copyright = False\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'borgdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('book', 'Borg.tex', 'Borg Documentation',\n 'The Borg Collective', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\nlatex_logo = '_static/logo.pdf'\n\nlatex_elements = {\n 'papersize': 'a4paper',\n 'pointsize': '10pt',\n 'figure_align': 'H',\n}\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\nlatex_show_urls = 'footnote'\n\n# Additional stuff for the LaTeX preamble.\n#latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\nlatex_appendices = [\n 'support',\n 'resources',\n 'changes',\n 'authors',\n]\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('usage', 'borg',\n 'BorgBackup is a deduplicating backup program with optional compression and authenticated encryption.',\n ['The Borg Collective (see AUTHORS file)'],\n 1),\n]\n\nextensions = [\n 'sphinx.ext.extlinks',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n]\n\nextlinks = {\n 'issue': ('https://github.com/borgbackup/borg/issues/%s', '#'),\n 'targz_url': ('https://pypi.python.org/packages/source/b/borgbackup/%%s-%s.tar.gz' % version, None),\n}\n", "path": "docs/conf.py"}]}
3045
165
gh_patches_debug_2021
rasdani/github-patches
git_diff
zigpy__zha-device-handlers-112
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Ikea group support bind method doesn't return status as expected https://github.com/dmulcahey/zha-device-handlers/blob/b5b383939944ff541ee38a94c7f4d6cf3edc611f/zhaquirks/ikea/__init__.py#L25 https://github.com/home-assistant/home-assistant/blob/a30c37017b7782473294d7999e85d7a369a0539a/homeassistant/components/zha/core/helpers.py#L56 reported by @Adminiuga we should return the status in [ ] so the bind helper in HA is happy. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `zhaquirks/ikea/__init__.py` Content: ``` 1 """Ikea module.""" 2 import logging 3 from zigpy.zcl.clusters.lightlink import LightLink 4 from zigpy.quirks import CustomCluster 5 6 _LOGGER = logging.getLogger(__name__) 7 8 9 class LightLinkCluster(CustomCluster, LightLink): 10 """Ikea LightLink cluster.""" 11 12 async def bind(self): 13 """Bind LightLink cluster to coordinator.""" 14 application = self._endpoint.device.application 15 try: 16 coordinator = application.get_device(application.ieee) 17 except KeyError: 18 _LOGGER.warning( 19 "Aborting - unable to locate required coordinator device." 20 ) 21 return 22 group_list = await self.get_group_identifiers(0) 23 group_record = group_list[2] 24 group_id = group_record[0].group_id 25 await coordinator.add_to_group(group_id) 26 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/zhaquirks/ikea/__init__.py b/zhaquirks/ikea/__init__.py --- a/zhaquirks/ikea/__init__.py +++ b/zhaquirks/ikea/__init__.py @@ -22,4 +22,5 @@ group_list = await self.get_group_identifiers(0) group_record = group_list[2] group_id = group_record[0].group_id - await coordinator.add_to_group(group_id) + status = await coordinator.add_to_group(group_id) + return [status]
{"golden_diff": "diff --git a/zhaquirks/ikea/__init__.py b/zhaquirks/ikea/__init__.py\n--- a/zhaquirks/ikea/__init__.py\n+++ b/zhaquirks/ikea/__init__.py\n@@ -22,4 +22,5 @@\n group_list = await self.get_group_identifiers(0)\n group_record = group_list[2]\n group_id = group_record[0].group_id\n- await coordinator.add_to_group(group_id)\n+ status = await coordinator.add_to_group(group_id)\n+ return [status]\n", "issue": "Ikea group support bind method doesn't return status as expected\nhttps://github.com/dmulcahey/zha-device-handlers/blob/b5b383939944ff541ee38a94c7f4d6cf3edc611f/zhaquirks/ikea/__init__.py#L25\r\n\r\nhttps://github.com/home-assistant/home-assistant/blob/a30c37017b7782473294d7999e85d7a369a0539a/homeassistant/components/zha/core/helpers.py#L56\r\n\r\nreported by @Adminiuga \r\n\r\nwe should return the status in [ ] so the bind helper in HA is happy.\n", "before_files": [{"content": "\"\"\"Ikea module.\"\"\"\nimport logging\nfrom zigpy.zcl.clusters.lightlink import LightLink\nfrom zigpy.quirks import CustomCluster\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass LightLinkCluster(CustomCluster, LightLink):\n \"\"\"Ikea LightLink cluster.\"\"\"\n\n async def bind(self):\n \"\"\"Bind LightLink cluster to coordinator.\"\"\"\n application = self._endpoint.device.application\n try:\n coordinator = application.get_device(application.ieee)\n except KeyError:\n _LOGGER.warning(\n \"Aborting - unable to locate required coordinator device.\"\n )\n return\n group_list = await self.get_group_identifiers(0)\n group_record = group_list[2]\n group_id = group_record[0].group_id\n await coordinator.add_to_group(group_id)\n", "path": "zhaquirks/ikea/__init__.py"}], "after_files": [{"content": "\"\"\"Ikea module.\"\"\"\nimport logging\nfrom zigpy.zcl.clusters.lightlink import LightLink\nfrom zigpy.quirks import CustomCluster\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass LightLinkCluster(CustomCluster, LightLink):\n \"\"\"Ikea LightLink cluster.\"\"\"\n\n async def bind(self):\n \"\"\"Bind LightLink cluster to coordinator.\"\"\"\n application = self._endpoint.device.application\n try:\n coordinator = application.get_device(application.ieee)\n except KeyError:\n _LOGGER.warning(\n \"Aborting - unable to locate required coordinator device.\"\n )\n return\n group_list = await self.get_group_identifiers(0)\n group_record = group_list[2]\n group_id = group_record[0].group_id\n status = await coordinator.add_to_group(group_id)\n return [status]\n", "path": "zhaquirks/ikea/__init__.py"}]}
643
130
gh_patches_debug_11706
rasdani/github-patches
git_diff
pypa__pip-11318
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Logging error when checking for new version of pip. ### Description When pip (22.1.2) checked for a new version it failed with an error. It's coming from the following function: https://github.com/pypa/pip/blob/c4606b3572529625762f0586dda134302cf6122c/src/pip/_internal/utils/entrypoints.py#L46-L62 The problem call is to `os.path.samefile` on line 58, where it compares the output of `shutil.which('pip')` to `<sys.prefix>/bin/pip` (in my case `/usr/bin/pip`). However, on my system, `pip` is installed to the user site-packages directory (so the binary is at `/home/domdf/.local/bin/pip`). The solution is to check whether the file exists before calling `samefile`. I have Python 3.7 and 3.9 installed to `/usr` alongside the system's Python 3.8, and the error is present with all three versions. ### Expected behavior Pip checks for a new version without an error. ### pip version 22.1.2 ### Python version 3.9.13 ### OS Ubuntu 20.04 ### How to Reproduce 1. `pip install pip==22.1.2` 2. `pip install pip` <- Any package will do. ### Output ```shell $ pip install pip Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: pip in ./.local/lib/python3.9/site-packages (22.1.2) --- Logging error --- Traceback (most recent call last): File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/utils/logging.py", line 177, in emit self.console.print(renderable, overflow="ignore", crop=False, style=style) File "/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/console.py", line 1752, in print extend(render(renderable, render_options)) File "/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/console.py", line 1390, in render for render_output in iter_render: File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/utils/logging.py", line 134, in __rich_console__ for line in lines: File "/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/segment.py", line 245, in split_lines for segment in segments: File "/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/console.py", line 1368, in render renderable = rich_cast(renderable) File "/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/protocol.py", line 36, in rich_cast renderable = cast_method() File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/self_outdated_check.py", line 130, in __rich__ pip_cmd = get_best_invocation_for_this_pip() File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/utils/entrypoints.py", line 58, in get_best_invocation_for_this_pip if found_executable and os.path.samefile( File "/usr/lib/python3.9/genericpath.py", line 101, in samefile s2 = os.stat(f2) FileNotFoundError: [Errno 2] No such file or directory: '/usr/bin/pip' Call stack: File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/domdf/.local/lib/python3.9/site-packages/pip/__main__.py", line 31, in <module> sys.exit(_main()) File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/cli/main.py", line 70, in main return command.main(cmd_args) File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 101, in main return self._main(args) File 
"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 223, in _main self.handle_pip_version_check(options) File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 148, in handle_pip_version_check pip_self_version_check(session, options) File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/self_outdated_check.py", line 237, in pip_self_version_check logger.info("[present-rich] %s", upgrade_prompt) File "/usr/lib/python3.9/logging/__init__.py", line 1446, in info self._log(INFO, msg, args, **kwargs) File "/usr/lib/python3.9/logging/__init__.py", line 1589, in _log self.handle(record) File "/usr/lib/python3.9/logging/__init__.py", line 1599, in handle self.callHandlers(record) File "/usr/lib/python3.9/logging/__init__.py", line 1661, in callHandlers hdlr.handle(record) File "/usr/lib/python3.9/logging/__init__.py", line 952, in handle self.emit(record) File "/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/utils/logging.py", line 179, in emit self.handleError(record) Message: '[present-rich] %s' Arguments: (UpgradePrompt(old='22.1.2', new='22.2'),) ``` ### Code of Conduct - [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/). --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/pip/_internal/utils/entrypoints.py` Content: ``` 1 import itertools 2 import os 3 import shutil 4 import sys 5 from typing import List, Optional 6 7 from pip._internal.cli.main import main 8 from pip._internal.utils.compat import WINDOWS 9 10 _EXECUTABLE_NAMES = [ 11 "pip", 12 f"pip{sys.version_info.major}", 13 f"pip{sys.version_info.major}.{sys.version_info.minor}", 14 ] 15 if WINDOWS: 16 _allowed_extensions = {"", ".exe"} 17 _EXECUTABLE_NAMES = [ 18 "".join(parts) 19 for parts in itertools.product(_EXECUTABLE_NAMES, _allowed_extensions) 20 ] 21 22 23 def _wrapper(args: Optional[List[str]] = None) -> int: 24 """Central wrapper for all old entrypoints. 25 26 Historically pip has had several entrypoints defined. Because of issues 27 arising from PATH, sys.path, multiple Pythons, their interactions, and most 28 of them having a pip installed, users suffer every time an entrypoint gets 29 moved. 30 31 To alleviate this pain, and provide a mechanism for warning users and 32 directing them to an appropriate place for help, we now define all of 33 our old entrypoints as wrappers for the current one. 34 """ 35 sys.stderr.write( 36 "WARNING: pip is being invoked by an old script wrapper. This will " 37 "fail in a future version of pip.\n" 38 "Please see https://github.com/pypa/pip/issues/5599 for advice on " 39 "fixing the underlying issue.\n" 40 "To avoid this problem you can invoke Python with '-m pip' instead of " 41 "running pip directly.\n" 42 ) 43 return main(args) 44 45 46 def get_best_invocation_for_this_pip() -> str: 47 """Try to figure out the best way to invoke pip in the current environment.""" 48 binary_directory = "Scripts" if WINDOWS else "bin" 49 binary_prefix = os.path.join(sys.prefix, binary_directory) 50 51 # Try to use pip[X[.Y]] names, if those executables for this environment are 52 # the first on PATH with that name. 
53 path_parts = os.path.normcase(os.environ.get("PATH", "")).split(os.pathsep) 54 exe_are_in_PATH = os.path.normcase(binary_prefix) in path_parts 55 if exe_are_in_PATH: 56 for exe_name in _EXECUTABLE_NAMES: 57 found_executable = shutil.which(exe_name) 58 if found_executable and os.path.samefile( 59 found_executable, 60 os.path.join(binary_prefix, exe_name), 61 ): 62 return exe_name 63 64 # Use the `-m` invocation, if there's no "nice" invocation. 65 return f"{get_best_invocation_for_this_python()} -m pip" 66 67 68 def get_best_invocation_for_this_python() -> str: 69 """Try to figure out the best way to invoke the current Python.""" 70 exe = sys.executable 71 exe_name = os.path.basename(exe) 72 73 # Try to use the basename, if it's the first executable. 74 found_executable = shutil.which(exe_name) 75 if found_executable and os.path.samefile(found_executable, exe): 76 return exe_name 77 78 # Use the full executable name, because we couldn't find something simpler. 79 return exe 80 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/pip/_internal/utils/entrypoints.py b/src/pip/_internal/utils/entrypoints.py --- a/src/pip/_internal/utils/entrypoints.py +++ b/src/pip/_internal/utils/entrypoints.py @@ -55,9 +55,14 @@ if exe_are_in_PATH: for exe_name in _EXECUTABLE_NAMES: found_executable = shutil.which(exe_name) - if found_executable and os.path.samefile( - found_executable, - os.path.join(binary_prefix, exe_name), + binary_executable = os.path.join(binary_prefix, exe_name) + if ( + found_executable + and os.path.exists(binary_executable) + and os.path.samefile( + found_executable, + binary_executable, + ) ): return exe_name
{"golden_diff": "diff --git a/src/pip/_internal/utils/entrypoints.py b/src/pip/_internal/utils/entrypoints.py\n--- a/src/pip/_internal/utils/entrypoints.py\n+++ b/src/pip/_internal/utils/entrypoints.py\n@@ -55,9 +55,14 @@\n if exe_are_in_PATH:\n for exe_name in _EXECUTABLE_NAMES:\n found_executable = shutil.which(exe_name)\n- if found_executable and os.path.samefile(\n- found_executable,\n- os.path.join(binary_prefix, exe_name),\n+ binary_executable = os.path.join(binary_prefix, exe_name)\n+ if (\n+ found_executable\n+ and os.path.exists(binary_executable)\n+ and os.path.samefile(\n+ found_executable,\n+ binary_executable,\n+ )\n ):\n return exe_name\n", "issue": "Logging error when checking for new version of pip.\n### Description\n\nWhen pip (22.1.2) checked for a new version it failed with an error. It's coming from the following function:\r\n\r\nhttps://github.com/pypa/pip/blob/c4606b3572529625762f0586dda134302cf6122c/src/pip/_internal/utils/entrypoints.py#L46-L62\r\n\r\nThe problem call is to `os.path.samefile` on line 58, where it compares the output of `shutil.which('pip')` to `<sys.prefix>/bin/pip` (in my case `/usr/bin/pip`). However, on my system, `pip` is installed to the user site-packages directory (so the binary is at `/home/domdf/.local/bin/pip`).\r\n\r\nThe solution is to check whether the file exists before calling `samefile`.\r\n\r\nI have Python 3.7 and 3.9 installed to `/usr` alongside the system's Python 3.8, and the error is present with all three versions.\n\n### Expected behavior\n\nPip checks for a new version without an error.\n\n### pip version\n\n22.1.2\n\n### Python version\n\n3.9.13\n\n### OS\n\nUbuntu 20.04\n\n### How to Reproduce\n\n1. `pip install pip==22.1.2`\r\n2. `pip install pip` <- Any package will do.\r\n\n\n### Output\n\n```shell\n$ pip install pip\r\nDefaulting to user installation because normal site-packages is not writeable\r\nRequirement already satisfied: pip in ./.local/lib/python3.9/site-packages (22.1.2)\r\n--- Logging error ---\r\nTraceback (most recent call last):\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/utils/logging.py\", line 177, in emit\r\n self.console.print(renderable, overflow=\"ignore\", crop=False, style=style)\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/console.py\", line 1752, in print\r\n extend(render(renderable, render_options))\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/console.py\", line 1390, in render\r\n for render_output in iter_render:\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/utils/logging.py\", line 134, in __rich_console__\r\n for line in lines:\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/segment.py\", line 245, in split_lines\r\n for segment in segments:\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/console.py\", line 1368, in render\r\n renderable = rich_cast(renderable)\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_vendor/rich/protocol.py\", line 36, in rich_cast\r\n renderable = cast_method()\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/self_outdated_check.py\", line 130, in __rich__\r\n pip_cmd = get_best_invocation_for_this_pip()\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/utils/entrypoints.py\", line 58, in get_best_invocation_for_this_pip\r\n if found_executable and os.path.samefile(\r\n File \"/usr/lib/python3.9/genericpath.py\", line 101, in samefile\r\n 
s2 = os.stat(f2)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/usr/bin/pip'\r\nCall stack:\r\n File \"/usr/lib/python3.9/runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.9/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/__main__.py\", line 31, in <module>\r\n sys.exit(_main())\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/cli/main.py\", line 70, in main\r\n return command.main(cmd_args)\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py\", line 101, in main\r\n return self._main(args)\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py\", line 223, in _main\r\n self.handle_pip_version_check(options)\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/cli/req_command.py\", line 148, in handle_pip_version_check\r\n pip_self_version_check(session, options)\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/self_outdated_check.py\", line 237, in pip_self_version_check\r\n logger.info(\"[present-rich] %s\", upgrade_prompt)\r\n File \"/usr/lib/python3.9/logging/__init__.py\", line 1446, in info\r\n self._log(INFO, msg, args, **kwargs)\r\n File \"/usr/lib/python3.9/logging/__init__.py\", line 1589, in _log\r\n self.handle(record)\r\n File \"/usr/lib/python3.9/logging/__init__.py\", line 1599, in handle\r\n self.callHandlers(record)\r\n File \"/usr/lib/python3.9/logging/__init__.py\", line 1661, in callHandlers\r\n hdlr.handle(record)\r\n File \"/usr/lib/python3.9/logging/__init__.py\", line 952, in handle\r\n self.emit(record)\r\n File \"/home/domdf/.local/lib/python3.9/site-packages/pip/_internal/utils/logging.py\", line 179, in emit\r\n self.handleError(record)\r\nMessage: '[present-rich] %s'\r\nArguments: (UpgradePrompt(old='22.1.2', new='22.2'),)\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).\n", "before_files": [{"content": "import itertools\nimport os\nimport shutil\nimport sys\nfrom typing import List, Optional\n\nfrom pip._internal.cli.main import main\nfrom pip._internal.utils.compat import WINDOWS\n\n_EXECUTABLE_NAMES = [\n \"pip\",\n f\"pip{sys.version_info.major}\",\n f\"pip{sys.version_info.major}.{sys.version_info.minor}\",\n]\nif WINDOWS:\n _allowed_extensions = {\"\", \".exe\"}\n _EXECUTABLE_NAMES = [\n \"\".join(parts)\n for parts in itertools.product(_EXECUTABLE_NAMES, _allowed_extensions)\n ]\n\n\ndef _wrapper(args: Optional[List[str]] = None) -> int:\n \"\"\"Central wrapper for all old entrypoints.\n\n Historically pip has had several entrypoints defined. Because of issues\n arising from PATH, sys.path, multiple Pythons, their interactions, and most\n of them having a pip installed, users suffer every time an entrypoint gets\n moved.\n\n To alleviate this pain, and provide a mechanism for warning users and\n directing them to an appropriate place for help, we now define all of\n our old entrypoints as wrappers for the current one.\n \"\"\"\n sys.stderr.write(\n \"WARNING: pip is being invoked by an old script wrapper. 
This will \"\n \"fail in a future version of pip.\\n\"\n \"Please see https://github.com/pypa/pip/issues/5599 for advice on \"\n \"fixing the underlying issue.\\n\"\n \"To avoid this problem you can invoke Python with '-m pip' instead of \"\n \"running pip directly.\\n\"\n )\n return main(args)\n\n\ndef get_best_invocation_for_this_pip() -> str:\n \"\"\"Try to figure out the best way to invoke pip in the current environment.\"\"\"\n binary_directory = \"Scripts\" if WINDOWS else \"bin\"\n binary_prefix = os.path.join(sys.prefix, binary_directory)\n\n # Try to use pip[X[.Y]] names, if those executables for this environment are\n # the first on PATH with that name.\n path_parts = os.path.normcase(os.environ.get(\"PATH\", \"\")).split(os.pathsep)\n exe_are_in_PATH = os.path.normcase(binary_prefix) in path_parts\n if exe_are_in_PATH:\n for exe_name in _EXECUTABLE_NAMES:\n found_executable = shutil.which(exe_name)\n if found_executable and os.path.samefile(\n found_executable,\n os.path.join(binary_prefix, exe_name),\n ):\n return exe_name\n\n # Use the `-m` invocation, if there's no \"nice\" invocation.\n return f\"{get_best_invocation_for_this_python()} -m pip\"\n\n\ndef get_best_invocation_for_this_python() -> str:\n \"\"\"Try to figure out the best way to invoke the current Python.\"\"\"\n exe = sys.executable\n exe_name = os.path.basename(exe)\n\n # Try to use the basename, if it's the first executable.\n found_executable = shutil.which(exe_name)\n if found_executable and os.path.samefile(found_executable, exe):\n return exe_name\n\n # Use the full executable name, because we couldn't find something simpler.\n return exe\n", "path": "src/pip/_internal/utils/entrypoints.py"}], "after_files": [{"content": "import itertools\nimport os\nimport shutil\nimport sys\nfrom typing import List, Optional\n\nfrom pip._internal.cli.main import main\nfrom pip._internal.utils.compat import WINDOWS\n\n_EXECUTABLE_NAMES = [\n \"pip\",\n f\"pip{sys.version_info.major}\",\n f\"pip{sys.version_info.major}.{sys.version_info.minor}\",\n]\nif WINDOWS:\n _allowed_extensions = {\"\", \".exe\"}\n _EXECUTABLE_NAMES = [\n \"\".join(parts)\n for parts in itertools.product(_EXECUTABLE_NAMES, _allowed_extensions)\n ]\n\n\ndef _wrapper(args: Optional[List[str]] = None) -> int:\n \"\"\"Central wrapper for all old entrypoints.\n\n Historically pip has had several entrypoints defined. Because of issues\n arising from PATH, sys.path, multiple Pythons, their interactions, and most\n of them having a pip installed, users suffer every time an entrypoint gets\n moved.\n\n To alleviate this pain, and provide a mechanism for warning users and\n directing them to an appropriate place for help, we now define all of\n our old entrypoints as wrappers for the current one.\n \"\"\"\n sys.stderr.write(\n \"WARNING: pip is being invoked by an old script wrapper. 
This will \"\n \"fail in a future version of pip.\\n\"\n \"Please see https://github.com/pypa/pip/issues/5599 for advice on \"\n \"fixing the underlying issue.\\n\"\n \"To avoid this problem you can invoke Python with '-m pip' instead of \"\n \"running pip directly.\\n\"\n )\n return main(args)\n\n\ndef get_best_invocation_for_this_pip() -> str:\n \"\"\"Try to figure out the best way to invoke pip in the current environment.\"\"\"\n binary_directory = \"Scripts\" if WINDOWS else \"bin\"\n binary_prefix = os.path.join(sys.prefix, binary_directory)\n\n # Try to use pip[X[.Y]] names, if those executables for this environment are\n # the first on PATH with that name.\n path_parts = os.path.normcase(os.environ.get(\"PATH\", \"\")).split(os.pathsep)\n exe_are_in_PATH = os.path.normcase(binary_prefix) in path_parts\n if exe_are_in_PATH:\n for exe_name in _EXECUTABLE_NAMES:\n found_executable = shutil.which(exe_name)\n binary_executable = os.path.join(binary_prefix, exe_name)\n if (\n found_executable\n and os.path.exists(binary_executable)\n and os.path.samefile(\n found_executable,\n binary_executable,\n )\n ):\n return exe_name\n\n # Use the `-m` invocation, if there's no \"nice\" invocation.\n return f\"{get_best_invocation_for_this_python()} -m pip\"\n\n\ndef get_best_invocation_for_this_python() -> str:\n \"\"\"Try to figure out the best way to invoke the current Python.\"\"\"\n exe = sys.executable\n exe_name = os.path.basename(exe)\n\n # Try to use the basename, if it's the first executable.\n found_executable = shutil.which(exe_name)\n if found_executable and os.path.samefile(found_executable, exe):\n return exe_name\n\n # Use the full executable name, because we couldn't find something simpler.\n return exe\n", "path": "src/pip/_internal/utils/entrypoints.py"}]}
2,539
188
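Annotation on the pip record above, not part of the dataset row: the golden diff works because `os.path.samefile()` stats both of its arguments, so a stale candidate path such as a removed `/usr/bin/pip` makes it raise `FileNotFoundError` during the version self-check. A minimal sketch of the guard pattern; the paths below are illustrative assumptions, not values taken from the record:

```python
# Guard os.path.samefile() so a missing candidate yields False instead of
# raising FileNotFoundError (samefile stats both paths unconditionally).
import os

def is_same_executable(found: str, candidate: str) -> bool:
    return os.path.exists(candidate) and os.path.samefile(found, candidate)

# Safe even when the candidate path does not exist on this machine.
print(is_same_executable("/usr/bin/env", "/definitely/missing/bin/pip"))  # False
```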
gh_patches_debug_22275
rasdani/github-patches
git_diff
wemake-services__wemake-python-styleguide-39
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Refactor how version is defined Currently we just have a legacy `version.py` file with version inside it. It duplicates the version information from `pyproject.toml`. That's how it should be: https://github.com/sdispater/poetry/issues/273#issuecomment-401983643 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `wemake_python_styleguide/version.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 __version__ = '0.0.2' # noqa 4 # TODO: resolve after https://github.com/sdispater/poetry/issues/273 5 ``` Path: `wemake_python_styleguide/checker.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 from ast import Module 4 from typing import Generator, Tuple 5 6 from wemake_python_styleguide.version import __version__ 7 from wemake_python_styleguide.visitors.high_complexity import ComplexityVisitor 8 from wemake_python_styleguide.visitors.wrong_function_call import ( 9 WrongFunctionCallVisitor, 10 ) 11 from wemake_python_styleguide.visitors.wrong_import import WrongImportVisitor 12 from wemake_python_styleguide.visitors.wrong_keyword import ( 13 WrongKeywordVisitor, 14 WrongRaiseVisitor, 15 ) 16 from wemake_python_styleguide.visitors.wrong_name import ( 17 WrongModuleMetadataVisitor, 18 WrongNameVisitor, 19 ) 20 from wemake_python_styleguide.visitors.wrong_nested import WrongNestedVisitor 21 22 CheckResult = Tuple[int, int, str, type] 23 24 25 class Checker(object): 26 """ 27 Main checker class. 28 29 Runs all possible checks. 30 """ 31 32 name = 'wemake-python-styleguide' 33 version = __version__ 34 35 def __init__(self, tree: Module, filename: str = '-') -> None: 36 """Creates new checker instance.""" 37 self.tree = tree 38 self.filename = filename 39 40 self._visitors = ( 41 WrongRaiseVisitor, 42 WrongFunctionCallVisitor, 43 WrongImportVisitor, 44 WrongKeywordVisitor, 45 WrongNestedVisitor, 46 ComplexityVisitor, 47 WrongNameVisitor, 48 WrongModuleMetadataVisitor, 49 ) 50 51 def run(self) -> Generator[CheckResult, None, None]: 52 """ 53 Runs the checker. 54 55 This method is used by `flake8` API. 56 """ 57 for visitor_class in self._visitors: 58 visiter = visitor_class() 59 visiter.visit(self.tree) 60 61 for error in visiter.errors: 62 lineno, col_offset, message = error.node_items() 63 yield lineno, col_offset, message, type(self) 64 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/wemake_python_styleguide/checker.py b/wemake_python_styleguide/checker.py --- a/wemake_python_styleguide/checker.py +++ b/wemake_python_styleguide/checker.py @@ -3,7 +3,7 @@ from ast import Module from typing import Generator, Tuple -from wemake_python_styleguide.version import __version__ +from wemake_python_styleguide.version import version from wemake_python_styleguide.visitors.high_complexity import ComplexityVisitor from wemake_python_styleguide.visitors.wrong_function_call import ( WrongFunctionCallVisitor, @@ -30,7 +30,7 @@ """ name = 'wemake-python-styleguide' - version = __version__ + version = version def __init__(self, tree: Module, filename: str = '-') -> None: """Creates new checker instance.""" diff --git a/wemake_python_styleguide/version.py b/wemake_python_styleguide/version.py --- a/wemake_python_styleguide/version.py +++ b/wemake_python_styleguide/version.py @@ -1,4 +1,5 @@ # -*- coding: utf-8 -*- -__version__ = '0.0.2' # noqa -# TODO: resolve after https://github.com/sdispater/poetry/issues/273 +import pkg_resources + +version = pkg_resources.get_distribution('wemake-python-styleguide').version
{"golden_diff": "diff --git a/wemake_python_styleguide/checker.py b/wemake_python_styleguide/checker.py\n--- a/wemake_python_styleguide/checker.py\n+++ b/wemake_python_styleguide/checker.py\n@@ -3,7 +3,7 @@\n from ast import Module\n from typing import Generator, Tuple\n \n-from wemake_python_styleguide.version import __version__\n+from wemake_python_styleguide.version import version\n from wemake_python_styleguide.visitors.high_complexity import ComplexityVisitor\n from wemake_python_styleguide.visitors.wrong_function_call import (\n WrongFunctionCallVisitor,\n@@ -30,7 +30,7 @@\n \"\"\"\n \n name = 'wemake-python-styleguide'\n- version = __version__\n+ version = version\n \n def __init__(self, tree: Module, filename: str = '-') -> None:\n \"\"\"Creates new checker instance.\"\"\"\ndiff --git a/wemake_python_styleguide/version.py b/wemake_python_styleguide/version.py\n--- a/wemake_python_styleguide/version.py\n+++ b/wemake_python_styleguide/version.py\n@@ -1,4 +1,5 @@\n # -*- coding: utf-8 -*-\n \n-__version__ = '0.0.2' # noqa\n-# TODO: resolve after https://github.com/sdispater/poetry/issues/273\n+import pkg_resources\n+\n+version = pkg_resources.get_distribution('wemake-python-styleguide').version\n", "issue": "Refactor how version is defined\nCurrently we just have a legacy `version.py` file with version inside it.\r\nIt duplicates the version information from `pyproject.toml`.\r\n\r\nThat's how it should be: https://github.com/sdispater/poetry/issues/273#issuecomment-401983643\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n__version__ = '0.0.2' # noqa\n# TODO: resolve after https://github.com/sdispater/poetry/issues/273\n", "path": "wemake_python_styleguide/version.py"}, {"content": "# -*- coding: utf-8 -*-\n\nfrom ast import Module\nfrom typing import Generator, Tuple\n\nfrom wemake_python_styleguide.version import __version__\nfrom wemake_python_styleguide.visitors.high_complexity import ComplexityVisitor\nfrom wemake_python_styleguide.visitors.wrong_function_call import (\n WrongFunctionCallVisitor,\n)\nfrom wemake_python_styleguide.visitors.wrong_import import WrongImportVisitor\nfrom wemake_python_styleguide.visitors.wrong_keyword import (\n WrongKeywordVisitor,\n WrongRaiseVisitor,\n)\nfrom wemake_python_styleguide.visitors.wrong_name import (\n WrongModuleMetadataVisitor,\n WrongNameVisitor,\n)\nfrom wemake_python_styleguide.visitors.wrong_nested import WrongNestedVisitor\n\nCheckResult = Tuple[int, int, str, type]\n\n\nclass Checker(object):\n \"\"\"\n Main checker class.\n\n Runs all possible checks.\n \"\"\"\n\n name = 'wemake-python-styleguide'\n version = __version__\n\n def __init__(self, tree: Module, filename: str = '-') -> None:\n \"\"\"Creates new checker instance.\"\"\"\n self.tree = tree\n self.filename = filename\n\n self._visitors = (\n WrongRaiseVisitor,\n WrongFunctionCallVisitor,\n WrongImportVisitor,\n WrongKeywordVisitor,\n WrongNestedVisitor,\n ComplexityVisitor,\n WrongNameVisitor,\n WrongModuleMetadataVisitor,\n )\n\n def run(self) -> Generator[CheckResult, None, None]:\n \"\"\"\n Runs the checker.\n\n This method is used by `flake8` API.\n \"\"\"\n for visitor_class in self._visitors:\n visiter = visitor_class()\n visiter.visit(self.tree)\n\n for error in visiter.errors:\n lineno, col_offset, message = error.node_items()\n yield lineno, col_offset, message, type(self)\n", "path": "wemake_python_styleguide/checker.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport pkg_resources\n\nversion = 
pkg_resources.get_distribution('wemake-python-styleguide').version\n", "path": "wemake_python_styleguide/version.py"}, {"content": "# -*- coding: utf-8 -*-\n\nfrom ast import Module\nfrom typing import Generator, Tuple\n\nfrom wemake_python_styleguide.version import version\nfrom wemake_python_styleguide.visitors.high_complexity import ComplexityVisitor\nfrom wemake_python_styleguide.visitors.wrong_function_call import (\n WrongFunctionCallVisitor,\n)\nfrom wemake_python_styleguide.visitors.wrong_import import WrongImportVisitor\nfrom wemake_python_styleguide.visitors.wrong_keyword import (\n WrongKeywordVisitor,\n WrongRaiseVisitor,\n)\nfrom wemake_python_styleguide.visitors.wrong_name import (\n WrongModuleMetadataVisitor,\n WrongNameVisitor,\n)\nfrom wemake_python_styleguide.visitors.wrong_nested import WrongNestedVisitor\n\nCheckResult = Tuple[int, int, str, type]\n\n\nclass Checker(object):\n \"\"\"\n Main checker class.\n\n Runs all possible checks.\n \"\"\"\n\n name = 'wemake-python-styleguide'\n version = version\n\n def __init__(self, tree: Module, filename: str = '-') -> None:\n \"\"\"Creates new checker instance.\"\"\"\n self.tree = tree\n self.filename = filename\n\n self._visitors = (\n WrongRaiseVisitor,\n WrongFunctionCallVisitor,\n WrongImportVisitor,\n WrongKeywordVisitor,\n WrongNestedVisitor,\n ComplexityVisitor,\n WrongNameVisitor,\n WrongModuleMetadataVisitor,\n )\n\n def run(self) -> Generator[CheckResult, None, None]:\n \"\"\"\n Runs the checker.\n\n This method is used by `flake8` API.\n \"\"\"\n for visitor_class in self._visitors:\n visiter = visitor_class()\n visiter.visit(self.tree)\n\n for error in visiter.errors:\n lineno, col_offset, message = error.node_items()\n yield lineno, col_offset, message, type(self)\n", "path": "wemake_python_styleguide/checker.py"}]}
918
320
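Annotation on the wemake-python-styleguide record above: the fix replaces a hard-coded `__version__` string with a lookup against installed distribution metadata, so the version lives only in `pyproject.toml`. A hedged sketch of that pattern; the `try`/`except` fallback is an added assumption, not part of the original diff:

```python
# Resolve the version from installed metadata instead of duplicating it.
import pkg_resources

try:
    version = pkg_resources.get_distribution("wemake-python-styleguide").version
except pkg_resources.DistributionNotFound:
    version = "0+unknown"  # hypothetical fallback when the package is absent

print(version)
```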
gh_patches_debug_4756
rasdani/github-patches
git_diff
keras-team__keras-core-579
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Using torch backend Using PyTorch backend. Epoch 1/3 32/32 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - mean_absolute_error: 0.4083 - loss: 0.2566 Epoch 2/3 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - mean_absolute_error: 0.3805 - loss: 0.2151 Epoch 3/3 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - mean_absolute_error: 0.3704 - loss: 0.2056 Epoch 1/5 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.2699 - mae: 0.4200 Epoch 2/5 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.2409 - mae: 0.3940 Epoch 3/5 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.2271 - mae: 0.3856 Epoch 4/5 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.2174 - mae: 0.3785 Epoch 5/5 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.2120 - mae: 0.3699 Epoch 1/3 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - mean_absolute_error: 0.7020 - loss: 0.3334 Epoch 2/3 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - mean_absolute_error: 0.4075 - loss: 0.1271 Epoch 3/3 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - mean_absolute_error: 0.3776 - loss: 0.1010 32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - mean_absolute_error: 0.8608 - loss: 0.9672 Traceback (most recent call last): File "E:\custom_train_step_in_torch.py", line 483, in <module> gan.fit(dataloader, epochs=1) File "C:\Python_310\lib\site-packages\keras_core\src\utils\traceback_utils.py", line 123, in error_handler raise e.with_traceback(filtered_tb) from None File "C:\Python_310\lib\site-packages\keras_core\src\utils\module_utils.py", line 26, in initialize raise ImportError( ImportError: This requires the tensorflow module. You can install it via `pip install tensorflow` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `keras_core/utils/module_utils.py` Content: ``` 1 import importlib 2 3 4 class LazyModule: 5 def __init__(self, name, pip_name=None): 6 self.name = name 7 pip_name = pip_name or name 8 self.pip_name = pip_name 9 self.module = None 10 self._available = None 11 12 @property 13 def available(self): 14 if self._available is None: 15 try: 16 self.initialize() 17 except ImportError: 18 self._available = False 19 self._available = True 20 return self._available 21 22 def initialize(self): 23 try: 24 self.module = importlib.import_module(self.name) 25 except ImportError: 26 raise ImportError( 27 f"This requires the {self.name} module. " 28 f"You can install it via `pip install {self.pip_name}`" 29 ) 30 31 def __getattr__(self, name): 32 if self.module is None: 33 self.initialize() 34 return getattr(self.module, name) 35 36 37 tensorflow = LazyModule("tensorflow") 38 gfile = LazyModule("tensorflow.io.gfile") 39 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/keras_core/utils/module_utils.py b/keras_core/utils/module_utils.py --- a/keras_core/utils/module_utils.py +++ b/keras_core/utils/module_utils.py @@ -14,9 +14,9 @@ if self._available is None: try: self.initialize() + self._available = True except ImportError: self._available = False - self._available = True return self._available def initialize(self):
{"golden_diff": "diff --git a/keras_core/utils/module_utils.py b/keras_core/utils/module_utils.py\n--- a/keras_core/utils/module_utils.py\n+++ b/keras_core/utils/module_utils.py\n@@ -14,9 +14,9 @@\n if self._available is None:\n try:\n self.initialize()\n+ self._available = True\n except ImportError:\n self._available = False\n- self._available = True\n return self._available\n \n def initialize(self):\n", "issue": "Using torch backend\nUsing PyTorch backend.\r\nEpoch 1/3\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 1s 4ms/step - mean_absolute_error: 0.4083 - loss: 0.2566\r\nEpoch 2/3\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - mean_absolute_error: 0.3805 - loss: 0.2151\r\nEpoch 3/3\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - mean_absolute_error: 0.3704 - loss: 0.2056\r\nEpoch 1/5\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - loss: 0.2699 - mae: 0.4200\r\nEpoch 2/5\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - loss: 0.2409 - mae: 0.3940\r\nEpoch 3/5\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - loss: 0.2271 - mae: 0.3856\r\nEpoch 4/5\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - loss: 0.2174 - mae: 0.3785\r\nEpoch 5/5\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - loss: 0.2120 - mae: 0.3699\r\nEpoch 1/3\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - mean_absolute_error: 0.7020 - loss: 0.3334\r\nEpoch 2/3\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - mean_absolute_error: 0.4075 - loss: 0.1271\r\nEpoch 3/3\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 4ms/step - mean_absolute_error: 0.3776 - loss: 0.1010\r\n32/32 \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0s 2ms/step - mean_absolute_error: 0.8608 - loss: 0.9672\r\nTraceback (most recent call last):\r\n File \"E:\\custom_train_step_in_torch.py\", line 483, in <module>\r\n gan.fit(dataloader, epochs=1)\r\n File \"C:\\Python_310\\lib\\site-packages\\keras_core\\src\\utils\\traceback_utils.py\", line 123, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"C:\\Python_310\\lib\\site-packages\\keras_core\\src\\utils\\module_utils.py\", line 26, in initialize\r\n raise ImportError(\r\nImportError: This requires the tensorflow module. 
You can install it via `pip install tensorflow`\n", "before_files": [{"content": "import importlib\n\n\nclass LazyModule:\n def __init__(self, name, pip_name=None):\n self.name = name\n pip_name = pip_name or name\n self.pip_name = pip_name\n self.module = None\n self._available = None\n\n @property\n def available(self):\n if self._available is None:\n try:\n self.initialize()\n except ImportError:\n self._available = False\n self._available = True\n return self._available\n\n def initialize(self):\n try:\n self.module = importlib.import_module(self.name)\n except ImportError:\n raise ImportError(\n f\"This requires the {self.name} module. \"\n f\"You can install it via `pip install {self.pip_name}`\"\n )\n\n def __getattr__(self, name):\n if self.module is None:\n self.initialize()\n return getattr(self.module, name)\n\n\ntensorflow = LazyModule(\"tensorflow\")\ngfile = LazyModule(\"tensorflow.io.gfile\")\n", "path": "keras_core/utils/module_utils.py"}], "after_files": [{"content": "import importlib\n\n\nclass LazyModule:\n def __init__(self, name, pip_name=None):\n self.name = name\n pip_name = pip_name or name\n self.pip_name = pip_name\n self.module = None\n self._available = None\n\n @property\n def available(self):\n if self._available is None:\n try:\n self.initialize()\n self._available = True\n except ImportError:\n self._available = False\n return self._available\n\n def initialize(self):\n try:\n self.module = importlib.import_module(self.name)\n except ImportError:\n raise ImportError(\n f\"This requires the {self.name} module. \"\n f\"You can install it via `pip install {self.pip_name}`\"\n )\n\n def __getattr__(self, name):\n if self.module is None:\n self.initialize()\n return getattr(self.module, name)\n\n\ntensorflow = LazyModule(\"tensorflow\")\ngfile = LazyModule(\"tensorflow.io.gfile\")\n", "path": "keras_core/utils/module_utils.py"}]}
1,345
109
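Annotation on the keras-core record above: the root cause is control flow. Because `self._available = True` ran after the `try`/`except` block, a failed optional import was still reported as available, and the later `initialize()` call raised the misleading tensorflow `ImportError`. A stripped-down, runnable reproduction with the corrected flag placement:

```python
# Minimal LazyModule showing the fixed placement: the success flag is set
# only inside the try body, so a failed import stays marked unavailable.
import importlib

class LazyModule:
    def __init__(self, name: str) -> None:
        self.name = name
        self._available = None  # tri-state: None means "not probed yet"

    @property
    def available(self) -> bool:
        if self._available is None:
            try:
                importlib.import_module(self.name)
                self._available = True  # only reached when the import worked
            except ImportError:
                self._available = False
        return self._available

print(LazyModule("surely_not_an_installed_module").available)  # False
```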
gh_patches_debug_25289
rasdani/github-patches
git_diff
easybuilders__easybuild-framework-4551
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Environment variable change in module cray-libsci of CPE 23.12 Hi, I report a bug affecting EasyBuild on Cray systems (file [libsci.py](https://github.com/easybuilders/easybuild-framework/blob/develop/easybuild/toolchains/linalg/libsci.py)) with the Cray Programming Environment (CPE) 23.12. The bug should be fixed in CPE 24.03 according to HPE/Cray staff, therefore the impact is limited: - The environment variable name referenced in [line 68](https://github.com/easybuilders/easybuild-framework/blob/e4524c1c70e496e5886de7d4848bb8147eea84bd/easybuild/toolchains/linalg/libsci.py#L68) changed from `CRAY_LIBSCI_PREFIX_DIR` to `CRAY_PE_LIBSCI_PREFIX_DIR` - I have manually fixed [line 69](https://github.com/easybuilders/easybuild-framework/blob/e4524c1c70e496e5886de7d4848bb8147eea84bd/easybuild/toolchains/linalg/libsci.py#L69) using the workaround below: `root = os.getenv('CRAY_LIBSCI_PREFIX_DIR', None) or os.getenv('CRAY_PE_LIBSCI_PREFIX_DIR', None)` The environment variable name should be fixed back to the original one in CPE 24.03 (I did not have the chance to test it yet, though). Since CPE variable names change sometimes, it might be useful to give the option to read the `prefix` of the external module `cray-libsci` from a [metadata file](https://docs.easybuild.io/using-external-modules/?h=metadata#using_external_modules_metadata) instead of having it hard coded. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `easybuild/toolchains/linalg/libsci.py` Content: ``` 1 ## 2 # Copyright 2014-2024 Ghent University 3 # 4 # This file is part of EasyBuild, 5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en), 6 # with support of Ghent University (http://ugent.be/hpc), 7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be), 8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en) 9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en). 10 # 11 # https://github.com/easybuilders/easybuild 12 # 13 # EasyBuild is free software: you can redistribute it and/or modify 14 # it under the terms of the GNU General Public License as published by 15 # the Free Software Foundation v2. 16 # 17 # EasyBuild is distributed in the hope that it will be useful, 18 # but WITHOUT ANY WARRANTY; without even the implied warranty of 19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 # GNU General Public License for more details. 21 # 22 # You should have received a copy of the GNU General Public License 23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>. 24 ## 25 """ 26 Support for Cray's LibSci library, which provides BLAS/LAPACK support. 27 cfr. 
https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/ 28 29 Authors: 30 31 * Petar Forai (IMP/IMBA, Austria) 32 * Kenneth Hoste (Ghent University) 33 """ 34 import os 35 36 from easybuild.tools.build_log import EasyBuildError 37 from easybuild.tools.toolchain.linalg import LinAlg 38 39 40 CRAY_LIBSCI_MODULE_NAME = 'cray-libsci' 41 TC_CONSTANT_CRAY_LIBSCI = 'CrayLibSci' 42 43 44 class LibSci(LinAlg): 45 """Support for Cray's LibSci library, which provides BLAS/LAPACK support.""" 46 # BLAS/LAPACK support 47 # via cray-libsci module, which gets loaded via the PrgEnv module 48 # see https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/ 49 BLAS_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME] 50 51 # no need to specify libraries, compiler driver takes care of linking the right libraries 52 # FIXME: need to revisit this, on numpy we ended up with a serial BLAS through the wrapper. 53 BLAS_LIB = [''] 54 BLAS_LIB_MT = [''] 55 BLAS_FAMILY = TC_CONSTANT_CRAY_LIBSCI 56 57 LAPACK_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME] 58 LAPACK_IS_BLAS = True 59 LAPACK_FAMILY = TC_CONSTANT_CRAY_LIBSCI 60 61 BLACS_MODULE_NAME = [] 62 SCALAPACK_MODULE_NAME = [] 63 64 def _get_software_root(self, name, required=True): 65 """Get install prefix for specified software name; special treatment for Cray modules.""" 66 if name == 'cray-libsci': 67 # Cray-provided LibSci module 68 env_var = 'CRAY_LIBSCI_PREFIX_DIR' 69 root = os.getenv(env_var, None) 70 if root is None: 71 if required: 72 raise EasyBuildError("Failed to determine install prefix for %s via $%s", name, env_var) 73 else: 74 self.log.debug("Obtained install prefix for %s via $%s: %s", name, env_var, root) 75 else: 76 root = super(LibSci, self)._get_software_root(name, required=required) 77 78 return root 79 80 def _set_blacs_variables(self): 81 """Skip setting BLACS related variables""" 82 pass 83 84 def _set_scalapack_variables(self): 85 """Skip setting ScaLAPACK related variables""" 86 pass 87 88 def definition(self): 89 """ 90 Filter BLAS module from toolchain definition. 91 The cray-libsci module is loaded indirectly (and versionless) via the PrgEnv module, 92 and thus is not a direct toolchain component. 93 """ 94 tc_def = super(LibSci, self).definition() 95 tc_def['BLAS'] = [] 96 tc_def['LAPACK'] = [] 97 return tc_def 98 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/easybuild/toolchains/linalg/libsci.py b/easybuild/toolchains/linalg/libsci.py --- a/easybuild/toolchains/linalg/libsci.py +++ b/easybuild/toolchains/linalg/libsci.py @@ -65,13 +65,20 @@ """Get install prefix for specified software name; special treatment for Cray modules.""" if name == 'cray-libsci': # Cray-provided LibSci module - env_var = 'CRAY_LIBSCI_PREFIX_DIR' - root = os.getenv(env_var, None) + root = None + # consider both $CRAY_LIBSCI_PREFIX_DIR and $CRAY_PE_LIBSCI_PREFIX_DIR, + # cfr. https://github.com/easybuilders/easybuild-framework/issues/4536 + env_vars = ('CRAY_LIBSCI_PREFIX_DIR', 'CRAY_PE_LIBSCI_PREFIX_DIR') + for env_var in env_vars: + root = os.getenv(env_var, None) + if root is not None: + self.log.debug("Obtained install prefix for %s via $%s: %s", name, env_var, root) + break + if root is None: if required: - raise EasyBuildError("Failed to determine install prefix for %s via $%s", name, env_var) - else: - self.log.debug("Obtained install prefix for %s via $%s: %s", name, env_var, root) + env_vars_str = ', '.join('$' + e for e in env_vars) + raise EasyBuildError("Failed to determine install prefix for %s via $%s", name, env_vars_str) else: root = super(LibSci, self)._get_software_root(name, required=required)
{"golden_diff": "diff --git a/easybuild/toolchains/linalg/libsci.py b/easybuild/toolchains/linalg/libsci.py\n--- a/easybuild/toolchains/linalg/libsci.py\n+++ b/easybuild/toolchains/linalg/libsci.py\n@@ -65,13 +65,20 @@\n \"\"\"Get install prefix for specified software name; special treatment for Cray modules.\"\"\"\n if name == 'cray-libsci':\n # Cray-provided LibSci module\n- env_var = 'CRAY_LIBSCI_PREFIX_DIR'\n- root = os.getenv(env_var, None)\n+ root = None\n+ # consider both $CRAY_LIBSCI_PREFIX_DIR and $CRAY_PE_LIBSCI_PREFIX_DIR,\n+ # cfr. https://github.com/easybuilders/easybuild-framework/issues/4536\n+ env_vars = ('CRAY_LIBSCI_PREFIX_DIR', 'CRAY_PE_LIBSCI_PREFIX_DIR')\n+ for env_var in env_vars:\n+ root = os.getenv(env_var, None)\n+ if root is not None:\n+ self.log.debug(\"Obtained install prefix for %s via $%s: %s\", name, env_var, root)\n+ break\n+\n if root is None:\n if required:\n- raise EasyBuildError(\"Failed to determine install prefix for %s via $%s\", name, env_var)\n- else:\n- self.log.debug(\"Obtained install prefix for %s via $%s: %s\", name, env_var, root)\n+ env_vars_str = ', '.join('$' + e for e in env_vars)\n+ raise EasyBuildError(\"Failed to determine install prefix for %s via $%s\", name, env_vars_str)\n else:\n root = super(LibSci, self)._get_software_root(name, required=required)\n", "issue": "Environment variable change in module cray-libsci of CPE 23.12\nHi, I report a bug affecting EasyBuild on Cray systems (file [libsci.py](https://github.com/easybuilders/easybuild-framework/blob/develop/easybuild/toolchains/linalg/libsci.py)) with the Cray Programming Environment (CPE) 23.12. The bug should be fixed in CPE 24.03 according to HPE/Cray staff, therefore the impact is limited:\r\n- The environment variable name referenced in [line 68](https://github.com/easybuilders/easybuild-framework/blob/e4524c1c70e496e5886de7d4848bb8147eea84bd/easybuild/toolchains/linalg/libsci.py#L68) changed from `CRAY_LIBSCI_PREFIX_DIR` to `CRAY_PE_LIBSCI_PREFIX_DIR`\r\n- I have manually fixed [line 69](https://github.com/easybuilders/easybuild-framework/blob/e4524c1c70e496e5886de7d4848bb8147eea84bd/easybuild/toolchains/linalg/libsci.py#L69) using the workaround below:\r\n `root = os.getenv('CRAY_LIBSCI_PREFIX_DIR', None) or os.getenv('CRAY_PE_LIBSCI_PREFIX_DIR', None)`\r\n\r\nThe environment variable name should be fixed back to the original one in CPE 24.03 (I did not have the chance to test it yet, though). 
Since CPE variable names change sometimes, it might be useful to give the option to read the `prefix` of the external module `cray-libsci` from a [metadata file](https://docs.easybuild.io/using-external-modules/?h=metadata#using_external_modules_metadata) instead of having it hard coded.\r\n\n", "before_files": [{"content": "##\n# Copyright 2014-2024 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nSupport for Cray's LibSci library, which provides BLAS/LAPACK support.\ncfr. https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/\n\nAuthors:\n\n* Petar Forai (IMP/IMBA, Austria)\n* Kenneth Hoste (Ghent University)\n\"\"\"\nimport os\n\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.toolchain.linalg import LinAlg\n\n\nCRAY_LIBSCI_MODULE_NAME = 'cray-libsci'\nTC_CONSTANT_CRAY_LIBSCI = 'CrayLibSci'\n\n\nclass LibSci(LinAlg):\n \"\"\"Support for Cray's LibSci library, which provides BLAS/LAPACK support.\"\"\"\n # BLAS/LAPACK support\n # via cray-libsci module, which gets loaded via the PrgEnv module\n # see https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/\n BLAS_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]\n\n # no need to specify libraries, compiler driver takes care of linking the right libraries\n # FIXME: need to revisit this, on numpy we ended up with a serial BLAS through the wrapper.\n BLAS_LIB = ['']\n BLAS_LIB_MT = ['']\n BLAS_FAMILY = TC_CONSTANT_CRAY_LIBSCI\n\n LAPACK_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]\n LAPACK_IS_BLAS = True\n LAPACK_FAMILY = TC_CONSTANT_CRAY_LIBSCI\n\n BLACS_MODULE_NAME = []\n SCALAPACK_MODULE_NAME = []\n\n def _get_software_root(self, name, required=True):\n \"\"\"Get install prefix for specified software name; special treatment for Cray modules.\"\"\"\n if name == 'cray-libsci':\n # Cray-provided LibSci module\n env_var = 'CRAY_LIBSCI_PREFIX_DIR'\n root = os.getenv(env_var, None)\n if root is None:\n if required:\n raise EasyBuildError(\"Failed to determine install prefix for %s via $%s\", name, env_var)\n else:\n self.log.debug(\"Obtained install prefix for %s via $%s: %s\", name, env_var, root)\n else:\n root = super(LibSci, self)._get_software_root(name, required=required)\n\n return root\n\n def _set_blacs_variables(self):\n \"\"\"Skip setting BLACS related variables\"\"\"\n pass\n\n def _set_scalapack_variables(self):\n \"\"\"Skip setting ScaLAPACK related variables\"\"\"\n pass\n\n def definition(self):\n \"\"\"\n Filter BLAS module from toolchain definition.\n The cray-libsci module is 
loaded indirectly (and versionless) via the PrgEnv module,\n and thus is not a direct toolchain component.\n \"\"\"\n tc_def = super(LibSci, self).definition()\n tc_def['BLAS'] = []\n tc_def['LAPACK'] = []\n return tc_def\n", "path": "easybuild/toolchains/linalg/libsci.py"}], "after_files": [{"content": "##\n# Copyright 2014-2024 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nSupport for Cray's LibSci library, which provides BLAS/LAPACK support.\ncfr. https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/\n\nAuthors:\n\n* Petar Forai (IMP/IMBA, Austria)\n* Kenneth Hoste (Ghent University)\n\"\"\"\nimport os\n\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.toolchain.linalg import LinAlg\n\n\nCRAY_LIBSCI_MODULE_NAME = 'cray-libsci'\nTC_CONSTANT_CRAY_LIBSCI = 'CrayLibSci'\n\n\nclass LibSci(LinAlg):\n \"\"\"Support for Cray's LibSci library, which provides BLAS/LAPACK support.\"\"\"\n # BLAS/LAPACK support\n # via cray-libsci module, which gets loaded via the PrgEnv module\n # see https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/\n BLAS_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]\n\n # no need to specify libraries, compiler driver takes care of linking the right libraries\n # FIXME: need to revisit this, on numpy we ended up with a serial BLAS through the wrapper.\n BLAS_LIB = ['']\n BLAS_LIB_MT = ['']\n BLAS_FAMILY = TC_CONSTANT_CRAY_LIBSCI\n\n LAPACK_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]\n LAPACK_IS_BLAS = True\n LAPACK_FAMILY = TC_CONSTANT_CRAY_LIBSCI\n\n BLACS_MODULE_NAME = []\n SCALAPACK_MODULE_NAME = []\n\n def _get_software_root(self, name, required=True):\n \"\"\"Get install prefix for specified software name; special treatment for Cray modules.\"\"\"\n if name == 'cray-libsci':\n # Cray-provided LibSci module\n root = None\n # consider both $CRAY_LIBSCI_PREFIX_DIR and $CRAY_PE_LIBSCI_PREFIX_DIR,\n # cfr. 
https://github.com/easybuilders/easybuild-framework/issues/4536\n env_vars = ('CRAY_LIBSCI_PREFIX_DIR', 'CRAY_PE_LIBSCI_PREFIX_DIR')\n for env_var in env_vars:\n root = os.getenv(env_var, None)\n if root is not None:\n self.log.debug(\"Obtained install prefix for %s via $%s: %s\", name, env_var, root)\n break\n\n if root is None:\n if required:\n env_vars_str = ', '.join('$' + e for e in env_vars)\n raise EasyBuildError(\"Failed to determine install prefix for %s via $%s\", name, env_vars_str)\n else:\n root = super(LibSci, self)._get_software_root(name, required=required)\n\n return root\n\n def _set_blacs_variables(self):\n \"\"\"Skip setting BLACS related variables\"\"\"\n pass\n\n def _set_scalapack_variables(self):\n \"\"\"Skip setting ScaLAPACK related variables\"\"\"\n pass\n\n def definition(self):\n \"\"\"\n Filter BLAS module from toolchain definition.\n The cray-libsci module is loaded indirectly (and versionless) via the PrgEnv module,\n and thus is not a direct toolchain component.\n \"\"\"\n tc_def = super(LibSci, self).definition()\n tc_def['BLAS'] = []\n tc_def['LAPACK'] = []\n return tc_def\n", "path": "easybuild/toolchains/linalg/libsci.py"}]}
1,774
399
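Annotation on the easybuild record above: CPE 23.12 renamed `$CRAY_LIBSCI_PREFIX_DIR` to `$CRAY_PE_LIBSCI_PREFIX_DIR`, and the golden diff probes both names in order. A minimal sketch of the fallback loop, assuming it is acceptable to return `None` rather than raise when neither variable is set:

```python
# Probe both the pre- and post-23.12 variable names; first defined one wins.
import os

_LIBSCI_ENV_VARS = ("CRAY_LIBSCI_PREFIX_DIR", "CRAY_PE_LIBSCI_PREFIX_DIR")

def libsci_prefix():
    for env_var in _LIBSCI_ENV_VARS:
        root = os.getenv(env_var)
        if root is not None:
            return root
    return None  # caller decides whether a missing prefix is fatal

print(libsci_prefix())  # None outside a Cray PE session
```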
gh_patches_debug_1251
rasdani/github-patches
git_diff
chainer__chainer-987
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Fix the shape of return value of F.det Currently, return value of `det` is `xp.array` whose shape is `(1, )`, not a scalar. ``` In [16]: a = chainer.Variable(numpy.random.uniform(-1, 1, (3, 3)).astype(numpy.float32)) In [17]: chainer.functions.det(a).data Out[17]: array([-0.80874199], dtype=float32) ``` But the document says the return value should be `chainer.Variable` whose data have the shape `()`. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `chainer/functions/math/det.py` Content: ``` 1 import numpy 2 3 from chainer import cuda 4 from chainer import function 5 from chainer.functions.array import reshape 6 from chainer.functions.math import inv 7 from chainer.functions.math import matmul 8 from chainer import utils 9 from chainer.utils import type_check 10 11 12 def _det_gpu(b): 13 # We do a batched LU decomposition on the GPU to compute 14 # and compute the determinant by multiplying the diagonal. 15 # Change the shape of the array to be size=1 minibatch if necessary. 16 # Also copy the matrix as the elments will be modified in-place. 17 a = matmul._as_batch_mat(b).copy() 18 n = a.shape[1] 19 n_matrices = len(a) 20 # Pivot array 21 p = cuda.cupy.zeros((n_matrices, n), dtype='int32') 22 # Output array 23 # These arrays hold information on the execution success 24 # or if the matrix was singular. 25 info1 = cuda.cupy.zeros(n_matrices, dtype=numpy.intp) 26 ap = matmul._mat_ptrs(a) 27 _, lda = matmul._get_ld(a) 28 cuda.cublas.sgetrfBatched(cuda.Device().cublas_handle, n, ap.data.ptr, lda, 29 p.data.ptr, info1.data.ptr, n_matrices) 30 det = cuda.cupy.prod(a.diagonal(axis1=1, axis2=2), axis=1) 31 # The determinant is equal to the product of the diagonal entries 32 # of `a` where the sign of `a` is flipped depending on whether 33 # the pivot array is equal to its index. 34 rng = cuda.cupy.arange(1, n + 1, dtype='int32') 35 parity = cuda.cupy.sum(p != rng, axis=1) % 2 36 sign = 1. - 2. * parity.astype('float32') 37 success = cuda.cupy.all(info1 == 0) 38 return det * sign, success 39 40 41 class BatchDet(function.Function): 42 43 @property 44 def label(self): 45 return 'det' 46 47 def check_type_forward(self, in_types): 48 type_check.expect(in_types.size() == 1) 49 a_type, = in_types 50 a_type = matmul._convert_type(a_type) 51 type_check.expect(a_type.dtype.kind == 'f') 52 # Only a minibatch of 2D array shapes allowed. 53 type_check.expect(a_type.ndim == 3) 54 # Matrix inversion only allowed for square matrices 55 # so assert the last two dimensions are equal. 56 type_check.expect(a_type.shape[-1] == a_type.shape[-2]) 57 58 def forward_cpu(self, x): 59 self.detx = utils.force_array(numpy.linalg.det(x[0])) 60 return self.detx, 61 62 def forward_gpu(self, x): 63 self.detx, success = _det_gpu(x[0]) 64 if not success: 65 raise ValueError('Singular Matrix') 66 return self.detx, 67 68 def backward_cpu(self, x, gy): 69 x, = x 70 gy, = gy 71 grad = (gy[:, None, None] * self.detx[:, None, None] * 72 numpy.linalg.inv(x.transpose((0, 2, 1)))) 73 return utils.force_array(grad), 74 75 def backward_gpu(self, x, gy): 76 x, = x 77 gy, = gy 78 grad = (gy[:, None, None] * self.detx[:, None, None] * 79 inv._inv_gpu(x.transpose((0, 2, 1)))) 80 return utils.force_array(grad), 81 82 83 def batch_det(a): 84 """Computes the determinant of a batch of square matrices. 
85 86 Args: 87 a (Variable): Input array to compute the determinant for. 88 The first dimension should iterate over each matrix and be 89 of the batchsize. 90 91 Returns: 92 ~chainer.Variable: vector of determinants for every matrix 93 in the batch. 94 95 """ 96 return BatchDet()(a) 97 98 99 def det(a): 100 """Computes the determinant of a single square matrix. 101 102 Args: 103 a (Variable): Input array to compute the determinant for. 104 105 Returns: 106 ~chainer.Variable: Scalar determinant of the matrix a. 107 108 """ 109 shape = (1, len(a.data), a.data.shape[1]) 110 batched_a = reshape.Reshape(shape)(a) 111 batched_det = BatchDet()(batched_a) 112 return reshape.Reshape((1, ))(batched_det) 113 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/chainer/functions/math/det.py b/chainer/functions/math/det.py --- a/chainer/functions/math/det.py +++ b/chainer/functions/math/det.py @@ -109,4 +109,4 @@ shape = (1, len(a.data), a.data.shape[1]) batched_a = reshape.Reshape(shape)(a) batched_det = BatchDet()(batched_a) - return reshape.Reshape((1, ))(batched_det) + return reshape.Reshape(())(batched_det)
{"golden_diff": "diff --git a/chainer/functions/math/det.py b/chainer/functions/math/det.py\n--- a/chainer/functions/math/det.py\n+++ b/chainer/functions/math/det.py\n@@ -109,4 +109,4 @@\n shape = (1, len(a.data), a.data.shape[1])\n batched_a = reshape.Reshape(shape)(a)\n batched_det = BatchDet()(batched_a)\n- return reshape.Reshape((1, ))(batched_det)\n+ return reshape.Reshape(())(batched_det)\n", "issue": "Fix the shape of return value of F.det\nCurrently, return value of `det` is `xp.array` whose shape is `(1, )`, not a scalar.\n\n```\nIn [16]: a = chainer.Variable(numpy.random.uniform(-1, 1, (3, 3)).astype(numpy.float32))\nIn [17]: chainer.functions.det(a).data\nOut[17]: array([-0.80874199], dtype=float32)\n```\n\nBut the document says the return value should be `chainer.Variable` whose data have the shape `()`.\n\n", "before_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.functions.array import reshape\nfrom chainer.functions.math import inv\nfrom chainer.functions.math import matmul\nfrom chainer import utils\nfrom chainer.utils import type_check\n\n\ndef _det_gpu(b):\n # We do a batched LU decomposition on the GPU to compute\n # and compute the determinant by multiplying the diagonal.\n # Change the shape of the array to be size=1 minibatch if necessary.\n # Also copy the matrix as the elments will be modified in-place.\n a = matmul._as_batch_mat(b).copy()\n n = a.shape[1]\n n_matrices = len(a)\n # Pivot array\n p = cuda.cupy.zeros((n_matrices, n), dtype='int32')\n # Output array\n # These arrays hold information on the execution success\n # or if the matrix was singular.\n info1 = cuda.cupy.zeros(n_matrices, dtype=numpy.intp)\n ap = matmul._mat_ptrs(a)\n _, lda = matmul._get_ld(a)\n cuda.cublas.sgetrfBatched(cuda.Device().cublas_handle, n, ap.data.ptr, lda,\n p.data.ptr, info1.data.ptr, n_matrices)\n det = cuda.cupy.prod(a.diagonal(axis1=1, axis2=2), axis=1)\n # The determinant is equal to the product of the diagonal entries\n # of `a` where the sign of `a` is flipped depending on whether\n # the pivot array is equal to its index.\n rng = cuda.cupy.arange(1, n + 1, dtype='int32')\n parity = cuda.cupy.sum(p != rng, axis=1) % 2\n sign = 1. - 2. 
* parity.astype('float32')\n success = cuda.cupy.all(info1 == 0)\n return det * sign, success\n\n\nclass BatchDet(function.Function):\n\n @property\n def label(self):\n return 'det'\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n a_type, = in_types\n a_type = matmul._convert_type(a_type)\n type_check.expect(a_type.dtype.kind == 'f')\n # Only a minibatch of 2D array shapes allowed.\n type_check.expect(a_type.ndim == 3)\n # Matrix inversion only allowed for square matrices\n # so assert the last two dimensions are equal.\n type_check.expect(a_type.shape[-1] == a_type.shape[-2])\n\n def forward_cpu(self, x):\n self.detx = utils.force_array(numpy.linalg.det(x[0]))\n return self.detx,\n\n def forward_gpu(self, x):\n self.detx, success = _det_gpu(x[0])\n if not success:\n raise ValueError('Singular Matrix')\n return self.detx,\n\n def backward_cpu(self, x, gy):\n x, = x\n gy, = gy\n grad = (gy[:, None, None] * self.detx[:, None, None] *\n numpy.linalg.inv(x.transpose((0, 2, 1))))\n return utils.force_array(grad),\n\n def backward_gpu(self, x, gy):\n x, = x\n gy, = gy\n grad = (gy[:, None, None] * self.detx[:, None, None] *\n inv._inv_gpu(x.transpose((0, 2, 1))))\n return utils.force_array(grad),\n\n\ndef batch_det(a):\n \"\"\"Computes the determinant of a batch of square matrices.\n\n Args:\n a (Variable): Input array to compute the determinant for.\n The first dimension should iterate over each matrix and be\n of the batchsize.\n\n Returns:\n ~chainer.Variable: vector of determinants for every matrix\n in the batch.\n\n \"\"\"\n return BatchDet()(a)\n\n\ndef det(a):\n \"\"\"Computes the determinant of a single square matrix.\n\n Args:\n a (Variable): Input array to compute the determinant for.\n\n Returns:\n ~chainer.Variable: Scalar determinant of the matrix a.\n\n \"\"\"\n shape = (1, len(a.data), a.data.shape[1])\n batched_a = reshape.Reshape(shape)(a)\n batched_det = BatchDet()(batched_a)\n return reshape.Reshape((1, ))(batched_det)\n", "path": "chainer/functions/math/det.py"}], "after_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.functions.array import reshape\nfrom chainer.functions.math import inv\nfrom chainer.functions.math import matmul\nfrom chainer import utils\nfrom chainer.utils import type_check\n\n\ndef _det_gpu(b):\n # We do a batched LU decomposition on the GPU to compute\n # and compute the determinant by multiplying the diagonal.\n # Change the shape of the array to be size=1 minibatch if necessary.\n # Also copy the matrix as the elments will be modified in-place.\n a = matmul._as_batch_mat(b).copy()\n n = a.shape[1]\n n_matrices = len(a)\n # Pivot array\n p = cuda.cupy.zeros((n_matrices, n), dtype='int32')\n # Output array\n # These arrays hold information on the execution success\n # or if the matrix was singular.\n info1 = cuda.cupy.zeros(n_matrices, dtype=numpy.intp)\n ap = matmul._mat_ptrs(a)\n _, lda = matmul._get_ld(a)\n cuda.cublas.sgetrfBatched(cuda.Device().cublas_handle, n, ap.data.ptr, lda,\n p.data.ptr, info1.data.ptr, n_matrices)\n det = cuda.cupy.prod(a.diagonal(axis1=1, axis2=2), axis=1)\n # The determinant is equal to the product of the diagonal entries\n # of `a` where the sign of `a` is flipped depending on whether\n # the pivot array is equal to its index.\n rng = cuda.cupy.arange(1, n + 1, dtype='int32')\n parity = cuda.cupy.sum(p != rng, axis=1) % 2\n sign = 1. - 2. 
* parity.astype('float32')\n success = cuda.cupy.all(info1 == 0)\n return det * sign, success\n\n\nclass BatchDet(function.Function):\n\n @property\n def label(self):\n return 'det'\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n a_type, = in_types\n a_type = matmul._convert_type(a_type)\n type_check.expect(a_type.dtype.kind == 'f')\n # Only a minibatch of 2D array shapes allowed.\n type_check.expect(a_type.ndim == 3)\n # Matrix inversion only allowed for square matrices\n # so assert the last two dimensions are equal.\n type_check.expect(a_type.shape[-1] == a_type.shape[-2])\n\n def forward_cpu(self, x):\n self.detx = utils.force_array(numpy.linalg.det(x[0]))\n return self.detx,\n\n def forward_gpu(self, x):\n self.detx, success = _det_gpu(x[0])\n if not success:\n raise ValueError('Singular Matrix')\n return self.detx,\n\n def backward_cpu(self, x, gy):\n x, = x\n gy, = gy\n grad = (gy[:, None, None] * self.detx[:, None, None] *\n numpy.linalg.inv(x.transpose((0, 2, 1))))\n return utils.force_array(grad),\n\n def backward_gpu(self, x, gy):\n x, = x\n gy, = gy\n grad = (gy[:, None, None] * self.detx[:, None, None] *\n inv._inv_gpu(x.transpose((0, 2, 1))))\n return utils.force_array(grad),\n\n\ndef batch_det(a):\n \"\"\"Computes the determinant of a batch of square matrices.\n\n Args:\n a (Variable): Input array to compute the determinant for.\n The first dimension should iterate over each matrix and be\n of the batchsize.\n\n Returns:\n ~chainer.Variable: vector of determinants for every matrix\n in the batch.\n\n \"\"\"\n return BatchDet()(a)\n\n\ndef det(a):\n \"\"\"Computes the determinant of a single square matrix.\n\n Args:\n a (Variable): Input array to compute the determinant for.\n\n Returns:\n ~chainer.Variable: Scalar determinant of the matrix a.\n\n \"\"\"\n shape = (1, len(a.data), a.data.shape[1])\n batched_a = reshape.Reshape(shape)(a)\n batched_det = BatchDet()(batched_a)\n return reshape.Reshape(())(batched_det)\n", "path": "chainer/functions/math/det.py"}]}
1,585
122
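Annotation on the chainer record above: the whole fix is `Reshape((1,))` versus `Reshape(())`, a length-1 vector versus the documented 0-d scalar. The shape difference can be shown with plain NumPy, so the sketch below assumes only NumPy and does not depend on chainer:

```python
# Demonstrate the (1,) vs () shape mismatch behind the one-line fix.
import numpy as np

det = np.asarray(np.linalg.det(np.eye(3, dtype=np.float32)))
print(det.reshape((1,)).shape)  # (1,) : shape produced by the buggy code
print(det.reshape(()).shape)    # ()   : scalar shape promised by the docs
```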
gh_patches_debug_9189
rasdani/github-patches
git_diff
googleapis__google-cloud-python-1505
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Project inference is broken under Python 3.5.1 Project inference takes place in [_determine_default_project()](https://github.com/GoogleCloudPlatform/gcloud-python/blob/91be6938b26ba9198082f457ae37fba81b8f5ea0/gcloud/_helpers.py#L189), which hands off to [_compute_engine_id()](https://github.com/GoogleCloudPlatform/gcloud-python/blob/91be6938b26ba9198082f457ae37fba81b8f5ea0/gcloud/_helpers.py#L151). That returns the correct value -- but as `bytes`. The `Client` class checks if the project value is a `str` (using `six.string_types`) and raises an error because it is not (that code is [here](https://github.com/GoogleCloudPlatform/gcloud-python/blob/91be6938b26ba9198082f457ae37fba81b8f5ea0/gcloud/client.py#L144)). --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `gcloud/client.py` Content: ``` 1 # Copyright 2015 Google Inc. All rights reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """Base classes for client used to interact with Google Cloud APIs.""" 16 17 import six 18 19 from gcloud._helpers import _determine_default_project 20 from gcloud.connection import Connection 21 from gcloud.credentials import get_credentials 22 from gcloud.credentials import get_for_service_account_json 23 from gcloud.credentials import get_for_service_account_p12 24 25 26 class _ClientFactoryMixin(object): 27 """Mixin to allow factories that create credentials. 28 29 .. note:: 30 31 This class is virtual. 32 """ 33 34 @classmethod 35 def from_service_account_json(cls, json_credentials_path, *args, **kwargs): 36 """Factory to retrieve JSON credentials while creating client. 37 38 :type json_credentials_path: string 39 :param json_credentials_path: The path to a private key file (this file 40 was given to you when you created the 41 service account). This file must contain 42 a JSON object with a private key and 43 other credentials information (downloaded 44 from the Google APIs console). 45 46 :type args: tuple 47 :param args: Remaining positional arguments to pass to constructor. 48 49 :type kwargs: dict 50 :param kwargs: Remaining keyword arguments to pass to constructor. 51 52 :rtype: :class:`gcloud.pubsub.client.Client` 53 :returns: The client created with the retrieved JSON credentials. 54 :raises: :class:`TypeError` if there is a conflict with the kwargs 55 and the credentials created by the factory. 56 """ 57 if 'credentials' in kwargs: 58 raise TypeError('credentials must not be in keyword arguments') 59 credentials = get_for_service_account_json(json_credentials_path) 60 kwargs['credentials'] = credentials 61 return cls(*args, **kwargs) 62 63 @classmethod 64 def from_service_account_p12(cls, client_email, private_key_path, 65 *args, **kwargs): 66 """Factory to retrieve P12 credentials while creating client. 67 68 .. 
note:: 69 Unless you have an explicit reason to use a PKCS12 key for your 70 service account, we recommend using a JSON key. 71 72 :type client_email: string 73 :param client_email: The e-mail attached to the service account. 74 75 :type private_key_path: string 76 :param private_key_path: The path to a private key file (this file was 77 given to you when you created the service 78 account). This file must be in P12 format. 79 80 :type args: tuple 81 :param args: Remaining positional arguments to pass to constructor. 82 83 :type kwargs: dict 84 :param kwargs: Remaining keyword arguments to pass to constructor. 85 86 :rtype: :class:`gcloud.client.Client` 87 :returns: The client created with the retrieved P12 credentials. 88 :raises: :class:`TypeError` if there is a conflict with the kwargs 89 and the credentials created by the factory. 90 """ 91 if 'credentials' in kwargs: 92 raise TypeError('credentials must not be in keyword arguments') 93 credentials = get_for_service_account_p12(client_email, 94 private_key_path) 95 kwargs['credentials'] = credentials 96 return cls(*args, **kwargs) 97 98 99 class Client(_ClientFactoryMixin): 100 """Client to bundle configuration needed for API requests. 101 102 Assumes that the associated ``_connection_class`` only accepts 103 ``http`` and ``credentials`` in its constructor. 104 105 :type credentials: :class:`oauth2client.client.OAuth2Credentials` or 106 :class:`NoneType` 107 :param credentials: The OAuth2 Credentials to use for the connection 108 owned by this client. If not passed (and if no ``http`` 109 object is passed), falls back to the default inferred 110 from the environment. 111 112 :type http: :class:`httplib2.Http` or class that defines ``request()``. 113 :param http: An optional HTTP object to make requests. If not passed, an 114 ``http`` object is created that is bound to the 115 ``credentials`` for the current object. 116 """ 117 118 _connection_class = Connection 119 120 def __init__(self, credentials=None, http=None): 121 if credentials is None and http is None: 122 credentials = get_credentials() 123 self.connection = self._connection_class( 124 credentials=credentials, http=http) 125 126 127 class _ClientProjectMixin(object): 128 """Mixin to allow setting the project on the client. 129 130 :type project: string 131 :param project: the project which the client acts on behalf of. If not 132 passed falls back to the default inferred from the 133 environment. 134 135 :raises: :class:`ValueError` if the project is neither passed in nor 136 set in the environment. 137 """ 138 139 def __init__(self, project=None): 140 project = _determine_default_project(project) 141 if project is None: 142 raise ValueError('Project was not passed and could not be ' 143 'determined from the environment.') 144 if not isinstance(project, six.string_types): 145 raise ValueError('Project must be a string.') 146 self.project = project 147 148 149 class JSONClient(Client, _ClientProjectMixin): 150 """Client to for Google JSON-based API. 151 152 Assumes such APIs use the ``project`` and the client needs to store this 153 value. 154 155 :type project: string 156 :param project: the project which the client acts on behalf of. If not 157 passed falls back to the default inferred from the 158 environment. 159 160 :type credentials: :class:`oauth2client.client.OAuth2Credentials` or 161 :class:`NoneType` 162 :param credentials: The OAuth2 Credentials to use for the connection 163 owned by this client. 
If not passed (and if no ``http`` 164 object is passed), falls back to the default inferred 165 from the environment. 166 167 :type http: :class:`httplib2.Http` or class that defines ``request()``. 168 :param http: An optional HTTP object to make requests. If not passed, an 169 ``http`` object is created that is bound to the 170 ``credentials`` for the current object. 171 172 :raises: :class:`ValueError` if the project is neither passed in nor 173 set in the environment. 174 """ 175 176 def __init__(self, project=None, credentials=None, http=None): 177 _ClientProjectMixin.__init__(self, project=project) 178 Client.__init__(self, credentials=credentials, http=http) 179 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/gcloud/client.py b/gcloud/client.py --- a/gcloud/client.py +++ b/gcloud/client.py @@ -141,6 +141,8 @@ if project is None: raise ValueError('Project was not passed and could not be ' 'determined from the environment.') + if isinstance(project, six.binary_type): + project = project.decode('utf-8') if not isinstance(project, six.string_types): raise ValueError('Project must be a string.') self.project = project
{"golden_diff": "diff --git a/gcloud/client.py b/gcloud/client.py\n--- a/gcloud/client.py\n+++ b/gcloud/client.py\n@@ -141,6 +141,8 @@\n if project is None:\n raise ValueError('Project was not passed and could not be '\n 'determined from the environment.')\n+ if isinstance(project, six.binary_type):\n+ project = project.decode('utf-8')\n if not isinstance(project, six.string_types):\n raise ValueError('Project must be a string.')\n self.project = project\n", "issue": "Project inference is broken under Python 3.5.1\nProject inference takes place in [_determine_default_project()](https://github.com/GoogleCloudPlatform/gcloud-python/blob/91be6938b26ba9198082f457ae37fba81b8f5ea0/gcloud/_helpers.py#L189), which hands off to [_compute_engine_id()](https://github.com/GoogleCloudPlatform/gcloud-python/blob/91be6938b26ba9198082f457ae37fba81b8f5ea0/gcloud/_helpers.py#L151). That returns the correct value -- but as `bytes`. The `Client` class checks if the project value is a `str` (using `six.string_types`) and raises an error because it is not (that code is [here](https://github.com/GoogleCloudPlatform/gcloud-python/blob/91be6938b26ba9198082f457ae37fba81b8f5ea0/gcloud/client.py#L144)).\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Base classes for client used to interact with Google Cloud APIs.\"\"\"\n\nimport six\n\nfrom gcloud._helpers import _determine_default_project\nfrom gcloud.connection import Connection\nfrom gcloud.credentials import get_credentials\nfrom gcloud.credentials import get_for_service_account_json\nfrom gcloud.credentials import get_for_service_account_p12\n\n\nclass _ClientFactoryMixin(object):\n \"\"\"Mixin to allow factories that create credentials.\n\n .. note::\n\n This class is virtual.\n \"\"\"\n\n @classmethod\n def from_service_account_json(cls, json_credentials_path, *args, **kwargs):\n \"\"\"Factory to retrieve JSON credentials while creating client.\n\n :type json_credentials_path: string\n :param json_credentials_path: The path to a private key file (this file\n was given to you when you created the\n service account). 
This file must contain\n a JSON object with a private key and\n other credentials information (downloaded\n from the Google APIs console).\n\n :type args: tuple\n :param args: Remaining positional arguments to pass to constructor.\n\n :type kwargs: dict\n :param kwargs: Remaining keyword arguments to pass to constructor.\n\n :rtype: :class:`gcloud.pubsub.client.Client`\n :returns: The client created with the retrieved JSON credentials.\n :raises: :class:`TypeError` if there is a conflict with the kwargs\n and the credentials created by the factory.\n \"\"\"\n if 'credentials' in kwargs:\n raise TypeError('credentials must not be in keyword arguments')\n credentials = get_for_service_account_json(json_credentials_path)\n kwargs['credentials'] = credentials\n return cls(*args, **kwargs)\n\n @classmethod\n def from_service_account_p12(cls, client_email, private_key_path,\n *args, **kwargs):\n \"\"\"Factory to retrieve P12 credentials while creating client.\n\n .. note::\n Unless you have an explicit reason to use a PKCS12 key for your\n service account, we recommend using a JSON key.\n\n :type client_email: string\n :param client_email: The e-mail attached to the service account.\n\n :type private_key_path: string\n :param private_key_path: The path to a private key file (this file was\n given to you when you created the service\n account). This file must be in P12 format.\n\n :type args: tuple\n :param args: Remaining positional arguments to pass to constructor.\n\n :type kwargs: dict\n :param kwargs: Remaining keyword arguments to pass to constructor.\n\n :rtype: :class:`gcloud.client.Client`\n :returns: The client created with the retrieved P12 credentials.\n :raises: :class:`TypeError` if there is a conflict with the kwargs\n and the credentials created by the factory.\n \"\"\"\n if 'credentials' in kwargs:\n raise TypeError('credentials must not be in keyword arguments')\n credentials = get_for_service_account_p12(client_email,\n private_key_path)\n kwargs['credentials'] = credentials\n return cls(*args, **kwargs)\n\n\nclass Client(_ClientFactoryMixin):\n \"\"\"Client to bundle configuration needed for API requests.\n\n Assumes that the associated ``_connection_class`` only accepts\n ``http`` and ``credentials`` in its constructor.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n \"\"\"\n\n _connection_class = Connection\n\n def __init__(self, credentials=None, http=None):\n if credentials is None and http is None:\n credentials = get_credentials()\n self.connection = self._connection_class(\n credentials=credentials, http=http)\n\n\nclass _ClientProjectMixin(object):\n \"\"\"Mixin to allow setting the project on the client.\n\n :type project: string\n :param project: the project which the client acts on behalf of. 
If not\n passed falls back to the default inferred from the\n environment.\n\n :raises: :class:`ValueError` if the project is neither passed in nor\n set in the environment.\n \"\"\"\n\n def __init__(self, project=None):\n project = _determine_default_project(project)\n if project is None:\n raise ValueError('Project was not passed and could not be '\n 'determined from the environment.')\n if not isinstance(project, six.string_types):\n raise ValueError('Project must be a string.')\n self.project = project\n\n\nclass JSONClient(Client, _ClientProjectMixin):\n \"\"\"Client to for Google JSON-based API.\n\n Assumes such APIs use the ``project`` and the client needs to store this\n value.\n\n :type project: string\n :param project: the project which the client acts on behalf of. If not\n passed falls back to the default inferred from the\n environment.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n\n :raises: :class:`ValueError` if the project is neither passed in nor\n set in the environment.\n \"\"\"\n\n def __init__(self, project=None, credentials=None, http=None):\n _ClientProjectMixin.__init__(self, project=project)\n Client.__init__(self, credentials=credentials, http=http)\n", "path": "gcloud/client.py"}], "after_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Base classes for client used to interact with Google Cloud APIs.\"\"\"\n\nimport six\n\nfrom gcloud._helpers import _determine_default_project\nfrom gcloud.connection import Connection\nfrom gcloud.credentials import get_credentials\nfrom gcloud.credentials import get_for_service_account_json\nfrom gcloud.credentials import get_for_service_account_p12\n\n\nclass _ClientFactoryMixin(object):\n \"\"\"Mixin to allow factories that create credentials.\n\n .. note::\n\n This class is virtual.\n \"\"\"\n\n @classmethod\n def from_service_account_json(cls, json_credentials_path, *args, **kwargs):\n \"\"\"Factory to retrieve JSON credentials while creating client.\n\n :type json_credentials_path: string\n :param json_credentials_path: The path to a private key file (this file\n was given to you when you created the\n service account). 
This file must contain\n a JSON object with a private key and\n other credentials information (downloaded\n from the Google APIs console).\n\n :type args: tuple\n :param args: Remaining positional arguments to pass to constructor.\n\n :type kwargs: dict\n :param kwargs: Remaining keyword arguments to pass to constructor.\n\n :rtype: :class:`gcloud.pubsub.client.Client`\n :returns: The client created with the retrieved JSON credentials.\n :raises: :class:`TypeError` if there is a conflict with the kwargs\n and the credentials created by the factory.\n \"\"\"\n if 'credentials' in kwargs:\n raise TypeError('credentials must not be in keyword arguments')\n credentials = get_for_service_account_json(json_credentials_path)\n kwargs['credentials'] = credentials\n return cls(*args, **kwargs)\n\n @classmethod\n def from_service_account_p12(cls, client_email, private_key_path,\n *args, **kwargs):\n \"\"\"Factory to retrieve P12 credentials while creating client.\n\n .. note::\n Unless you have an explicit reason to use a PKCS12 key for your\n service account, we recommend using a JSON key.\n\n :type client_email: string\n :param client_email: The e-mail attached to the service account.\n\n :type private_key_path: string\n :param private_key_path: The path to a private key file (this file was\n given to you when you created the service\n account). This file must be in P12 format.\n\n :type args: tuple\n :param args: Remaining positional arguments to pass to constructor.\n\n :type kwargs: dict\n :param kwargs: Remaining keyword arguments to pass to constructor.\n\n :rtype: :class:`gcloud.client.Client`\n :returns: The client created with the retrieved P12 credentials.\n :raises: :class:`TypeError` if there is a conflict with the kwargs\n and the credentials created by the factory.\n \"\"\"\n if 'credentials' in kwargs:\n raise TypeError('credentials must not be in keyword arguments')\n credentials = get_for_service_account_p12(client_email,\n private_key_path)\n kwargs['credentials'] = credentials\n return cls(*args, **kwargs)\n\n\nclass Client(_ClientFactoryMixin):\n \"\"\"Client to bundle configuration needed for API requests.\n\n Assumes that the associated ``_connection_class`` only accepts\n ``http`` and ``credentials`` in its constructor.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n \"\"\"\n\n _connection_class = Connection\n\n def __init__(self, credentials=None, http=None):\n if credentials is None and http is None:\n credentials = get_credentials()\n self.connection = self._connection_class(\n credentials=credentials, http=http)\n\n\nclass _ClientProjectMixin(object):\n \"\"\"Mixin to allow setting the project on the client.\n\n :type project: string\n :param project: the project which the client acts on behalf of. 
If not\n passed falls back to the default inferred from the\n environment.\n\n :raises: :class:`ValueError` if the project is neither passed in nor\n set in the environment.\n \"\"\"\n\n def __init__(self, project=None):\n project = _determine_default_project(project)\n if project is None:\n raise ValueError('Project was not passed and could not be '\n 'determined from the environment.')\n if isinstance(project, six.binary_type):\n project = project.decode('utf-8')\n if not isinstance(project, six.string_types):\n raise ValueError('Project must be a string.')\n self.project = project\n\n\nclass JSONClient(Client, _ClientProjectMixin):\n \"\"\"Client to for Google JSON-based API.\n\n Assumes such APIs use the ``project`` and the client needs to store this\n value.\n\n :type project: string\n :param project: the project which the client acts on behalf of. If not\n passed falls back to the default inferred from the\n environment.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n\n :raises: :class:`ValueError` if the project is neither passed in nor\n set in the environment.\n \"\"\"\n\n def __init__(self, project=None, credentials=None, http=None):\n _ClientProjectMixin.__init__(self, project=project)\n Client.__init__(self, credentials=credentials, http=http)\n", "path": "gcloud/client.py"}]}
num_tokens: 2,444
num_tokens_diff: 116
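
Note: the golden diff for the row above inserts a bytes-to-str normalization before the existing type check. A minimal standalone sketch of that normalization, assuming `six` is installed (the function name is hypothetical, used only for illustration):

```python
import six


def _normalize_project(project):
    """Decode a project ID that the GCE metadata server returned as bytes.

    Mirrors the check added by the golden diff above: under Python 3 the
    metadata server result arrives as bytes, so decode it before the
    six.string_types check rejects it.
    """
    if isinstance(project, six.binary_type):
        project = project.decode('utf-8')
    if not isinstance(project, six.string_types):
        raise ValueError('Project must be a string.')
    return project


# e.g. _normalize_project(b'my-project') -> 'my-project' on Python 3
```
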
problem_id: gh_patches_debug_1484
source: rasdani/github-patches
task_type: git_diff
in_source_id: PyGithub__PyGithub-1891
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- allow PyJWT 2+ other libraries are moving to PyJWT2+ as requirement, is it possible to update pygithub as well? currently we can't use for example pygithub together with django-social-core --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 #!/usr/bin/env python 2 3 ############################ Copyrights and license ############################ 4 # # 5 # Copyright 2012 Vincent Jacques <[email protected]> # 6 # Copyright 2012 Zearin <[email protected]> # 7 # Copyright 2013 Vincent Jacques <[email protected]> # 8 # Copyright 2014 Tomas Radej <[email protected]> # 9 # Copyright 2014 Vincent Jacques <[email protected]> # 10 # Copyright 2015 Jimmy Zelinskie <[email protected]> # 11 # Copyright 2016 Felix Yan <[email protected]> # 12 # Copyright 2016 Jakub Wilk <[email protected]> # 13 # Copyright 2016 Jannis Gebauer <[email protected]> # 14 # Copyright 2016 Peter Buckley <[email protected]> # 15 # Copyright 2017 Hugo <[email protected]> # 16 # Copyright 2017 Jannis Gebauer <[email protected]> # 17 # Copyright 2017 Jannis Gebauer <[email protected]> # 18 # Copyright 2017 Nhomar Hernandez <[email protected]> # 19 # Copyright 2017 Paul Ortman <[email protected]> # 20 # Copyright 2018 Jason White <[email protected]> # 21 # Copyright 2018 Mike Miller <[email protected]> # 22 # Copyright 2018 Wan Liuyang <[email protected]> # 23 # Copyright 2018 sfdye <[email protected]> # 24 # # 25 # This file is part of PyGithub. # 26 # http://pygithub.readthedocs.io/ # 27 # # 28 # PyGithub is free software: you can redistribute it and/or modify it under # 29 # the terms of the GNU Lesser General Public License as published by the Free # 30 # Software Foundation, either version 3 of the License, or (at your option) # 31 # any later version. # 32 # # 33 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY # 34 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # 35 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more # 36 # details. # 37 # # 38 # You should have received a copy of the GNU Lesser General Public License # 39 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
# 40 # # 41 ################################################################################ 42 43 import textwrap 44 45 import setuptools 46 47 version = "1.54.1" 48 49 50 if __name__ == "__main__": 51 setuptools.setup( 52 name="PyGithub", 53 version=version, 54 description="Use the full Github API v3", 55 author="Vincent Jacques", 56 author_email="[email protected]", 57 url="https://github.com/pygithub/pygithub", 58 project_urls={ 59 "Documentation": "http://pygithub.readthedocs.io/en/latest/", 60 "Source": "https://github.com/pygithub/pygithub", 61 "Tracker": "https://github.com/pygithub/pygithub/issues", 62 }, 63 long_description=textwrap.dedent( 64 """\ 65 (Very short) Tutorial 66 ===================== 67 68 First create a Github instance:: 69 70 from github import Github 71 72 # using username and password 73 g = Github("user", "password") 74 75 # or using an access token 76 g = Github("access_token") 77 78 Then play with your Github objects:: 79 80 for repo in g.get_user().get_repos(): 81 print(repo.name) 82 repo.edit(has_wiki=False) 83 84 Reference documentation 85 ======================= 86 87 See http://pygithub.readthedocs.io/en/latest/""" 88 ), 89 packages=["github"], 90 package_data={"github": ["py.typed", "*.pyi"]}, 91 classifiers=[ 92 "Development Status :: 5 - Production/Stable", 93 "Environment :: Web Environment", 94 "Intended Audience :: Developers", 95 "License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)", 96 "Operating System :: OS Independent", 97 "Programming Language :: Python", 98 "Programming Language :: Python :: 3", 99 "Programming Language :: Python :: 3.6", 100 "Programming Language :: Python :: 3.7", 101 "Programming Language :: Python :: 3.8", 102 "Programming Language :: Python :: 3.9", 103 "Topic :: Software Development", 104 ], 105 python_requires=">=3.6", 106 install_requires=[ 107 "deprecated", 108 "pyjwt<2.0", 109 "pynacl>=1.4.0", 110 "requests>=2.14.0", 111 ], 112 extras_require={"integrations": ["cryptography"]}, 113 tests_require=["cryptography", "httpretty>=1.0.3"], 114 ) 115 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -105,7 +105,7 @@ python_requires=">=3.6", install_requires=[ "deprecated", - "pyjwt<2.0", + "pyjwt>=2.0", "pynacl>=1.4.0", "requests>=2.14.0", ],
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -105,7 +105,7 @@\n python_requires=\">=3.6\",\n install_requires=[\n \"deprecated\",\n- \"pyjwt<2.0\",\n+ \"pyjwt>=2.0\",\n \"pynacl>=1.4.0\",\n \"requests>=2.14.0\",\n ],\n", "issue": "allow PyJWT 2+\nother libraries are moving to PyJWT2+ as requirement, is it possible to update pygithub as well? currently we can't use for example pygithub together with django-social-core\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Tomas Radej <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2015 Jimmy Zelinskie <[email protected]> #\n# Copyright 2016 Felix Yan <[email protected]> #\n# Copyright 2016 Jakub Wilk <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2017 Hugo <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2017 Nhomar Hernandez <[email protected]> #\n# Copyright 2017 Paul Ortman <[email protected]> #\n# Copyright 2018 Jason White <[email protected]> #\n# Copyright 2018 Mike Miller <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\nimport textwrap\n\nimport setuptools\n\nversion = \"1.54.1\"\n\n\nif __name__ == \"__main__\":\n setuptools.setup(\n name=\"PyGithub\",\n version=version,\n description=\"Use the full Github API v3\",\n author=\"Vincent Jacques\",\n author_email=\"[email protected]\",\n url=\"https://github.com/pygithub/pygithub\",\n project_urls={\n \"Documentation\": \"http://pygithub.readthedocs.io/en/latest/\",\n \"Source\": \"https://github.com/pygithub/pygithub\",\n \"Tracker\": \"https://github.com/pygithub/pygithub/issues\",\n },\n long_description=textwrap.dedent(\n \"\"\"\\\n (Very short) Tutorial\n =====================\n\n First create a Github instance::\n\n from github import Github\n\n # using username and password\n g = Github(\"user\", \"password\")\n\n # or using an access token\n g = Github(\"access_token\")\n\n Then play with your Github objects::\n\n for repo in g.get_user().get_repos():\n print(repo.name)\n repo.edit(has_wiki=False)\n\n Reference documentation\n =======================\n\n See http://pygithub.readthedocs.io/en/latest/\"\"\"\n ),\n packages=[\"github\"],\n package_data={\"github\": [\"py.typed\", \"*.pyi\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Software Development\",\n ],\n python_requires=\">=3.6\",\n install_requires=[\n \"deprecated\",\n \"pyjwt<2.0\",\n \"pynacl>=1.4.0\",\n \"requests>=2.14.0\",\n ],\n extras_require={\"integrations\": [\"cryptography\"]},\n tests_require=[\"cryptography\", \"httpretty>=1.0.3\"],\n )\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Tomas Radej <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2015 Jimmy Zelinskie <[email protected]> #\n# Copyright 2016 Felix Yan <[email protected]> #\n# Copyright 2016 Jakub Wilk <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2017 Hugo <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2017 Nhomar Hernandez <[email protected]> #\n# Copyright 2017 Paul Ortman <[email protected]> #\n# Copyright 2018 Jason White <[email protected]> #\n# Copyright 2018 Mike Miller <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. 
#\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n################################################################################\n\nimport textwrap\n\nimport setuptools\n\nversion = \"1.54.1\"\n\n\nif __name__ == \"__main__\":\n setuptools.setup(\n name=\"PyGithub\",\n version=version,\n description=\"Use the full Github API v3\",\n author=\"Vincent Jacques\",\n author_email=\"[email protected]\",\n url=\"https://github.com/pygithub/pygithub\",\n project_urls={\n \"Documentation\": \"http://pygithub.readthedocs.io/en/latest/\",\n \"Source\": \"https://github.com/pygithub/pygithub\",\n \"Tracker\": \"https://github.com/pygithub/pygithub/issues\",\n },\n long_description=textwrap.dedent(\n \"\"\"\\\n (Very short) Tutorial\n =====================\n\n First create a Github instance::\n\n from github import Github\n\n # using username and password\n g = Github(\"user\", \"password\")\n\n # or using an access token\n g = Github(\"access_token\")\n\n Then play with your Github objects::\n\n for repo in g.get_user().get_repos():\n print(repo.name)\n repo.edit(has_wiki=False)\n\n Reference documentation\n =======================\n\n See http://pygithub.readthedocs.io/en/latest/\"\"\"\n ),\n packages=[\"github\"],\n package_data={\"github\": [\"py.typed\", \"*.pyi\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Software Development\",\n ],\n python_requires=\">=3.6\",\n install_requires=[\n \"deprecated\",\n \"pyjwt>=2.0\",\n \"pynacl>=1.4.0\",\n \"requests>=2.14.0\",\n ],\n extras_require={\"integrations\": [\"cryptography\"]},\n tests_require=[\"cryptography\", \"httpretty>=1.0.3\"],\n )\n", "path": "setup.py"}]}
num_tokens: 1,663
num_tokens_diff: 96
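
Note: beyond the one-line pin change in the golden diff above (`pyjwt<2.0` to `pyjwt>=2.0`), the practical difference between the PyJWT majors is that `jwt.encode()` returns `str` in 2.x but `bytes` in 1.x; that fact comes from PyJWT's release notes, not from this row. A hedged compatibility sketch, assuming a symmetric HS256 key:

```python
import jwt  # PyJWT


def encode_token(payload: dict, key: str) -> str:
    # PyJWT 1.x returns bytes from jwt.encode(); PyJWT 2.x returns str.
    # Decoding bytes here keeps callers working on either side of the
    # ">=2.0" requirement that the golden diff above adopts.
    token = jwt.encode(payload, key, algorithm="HS256")
    if isinstance(token, bytes):
        token = token.decode("utf-8")
    return token


# e.g. encode_token({"iss": "me"}, "secret") -> "eyJ..."
```
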
problem_id: gh_patches_debug_17141
source: rasdani/github-patches
task_type: git_diff
in_source_id: ultralytics__ultralytics-3112
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- dvclive.error ### Search before asking - [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### YOLOv8 Component Training ### Bug Hello, I have used several time the YOLOV8 without problem. Today, I tried to retrain my model and I updated the ultralytics packages and when I started the training, i got this error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/model.py", line 371, in train self.trainer.train() File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/trainer.py", line 192, in train self._do_train(world_size) File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/trainer.py", line 275, in _do_train self._setup_train(world_size) File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/trainer.py", line 268, in _setup_train self.run_callbacks('on_pretrain_routine_end') File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/trainer.py", line 165, in run_callbacks callback(self) File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/utils/callbacks/dvc.py", line 76, in on_pretrain_routine_end _log_plots(trainer.plots, 'train') File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/utils/callbacks/dvc.py", line 40, in _log_plots _log_images(name, prefix) File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/utils/callbacks/dvc.py", line 33, in _log_images live.log_image(os.path.join(prefix, image_path.name), image_path) File "/home/jupyter-moussa/.local/lib/python3.9/site-packages/dvclive/live.py", line 249, in log_image raise InvalidDataTypeError(name, type(val)) dvclive.error.InvalidDataTypeError: Data 'train/labels.jpg' has not supported type <class 'pathlib.PosixPath'> what do you think about it? Thank you. ### Environment ultralytics 8.0.114 torch 2.0.1 torchaudio 0.13.1 torchvision 0.15.2 Ubuntu 22.04.1 LTS ### Minimal Reproducible Example _No response_ ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR! --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `ultralytics/yolo/utils/callbacks/dvc.py` Content: ``` 1 # Ultralytics YOLO 🚀, GPL-3.0 license 2 import os 3 4 from ultralytics.yolo.utils import LOGGER, TESTS_RUNNING 5 from ultralytics.yolo.utils.torch_utils import get_flops, get_num_params 6 7 try: 8 from importlib.metadata import version 9 10 import dvclive 11 12 assert not TESTS_RUNNING # do not log pytest 13 assert version('dvclive') 14 except (ImportError, AssertionError): 15 dvclive = None 16 17 # DVCLive logger instance 18 live = None 19 _processed_plots = {} 20 21 # `on_fit_epoch_end` is called on final validation (probably need to be fixed) 22 # for now this is the way we distinguish final evaluation of the best model vs 23 # last epoch validation 24 _training_epoch = False 25 26 27 def _logger_disabled(): 28 return os.getenv('ULTRALYTICS_DVC_DISABLED', 'false').lower() == 'true' 29 30 31 def _log_images(image_path, prefix=''): 32 if live: 33 live.log_image(os.path.join(prefix, image_path.name), image_path) 34 35 36 def _log_plots(plots, prefix=''): 37 for name, params in plots.items(): 38 timestamp = params['timestamp'] 39 if _processed_plots.get(name, None) != timestamp: 40 _log_images(name, prefix) 41 _processed_plots[name] = timestamp 42 43 44 def _log_confusion_matrix(validator): 45 targets = [] 46 preds = [] 47 matrix = validator.confusion_matrix.matrix 48 names = list(validator.names.values()) 49 if validator.confusion_matrix.task == 'detect': 50 names += ['background'] 51 52 for ti, pred in enumerate(matrix.T.astype(int)): 53 for pi, num in enumerate(pred): 54 targets.extend([names[ti]] * num) 55 preds.extend([names[pi]] * num) 56 57 live.log_sklearn_plot('confusion_matrix', targets, preds, name='cf.json', normalized=True) 58 59 60 def on_pretrain_routine_start(trainer): 61 try: 62 global live 63 if not _logger_disabled(): 64 live = dvclive.Live(save_dvc_exp=True) 65 LOGGER.info( 66 'DVCLive is detected and auto logging is enabled (can be disabled with `ULTRALYTICS_DVC_DISABLED=true`).' 67 ) 68 else: 69 LOGGER.debug('DVCLive is detected and auto logging is disabled via `ULTRALYTICS_DVC_DISABLED`.') 70 live = None 71 except Exception as e: 72 LOGGER.warning(f'WARNING ⚠️ DVCLive installed but not initialized correctly, not logging this run. {e}') 73 74 75 def on_pretrain_routine_end(trainer): 76 _log_plots(trainer.plots, 'train') 77 78 79 def on_train_start(trainer): 80 if live: 81 live.log_params(trainer.args) 82 83 84 def on_train_epoch_start(trainer): 85 global _training_epoch 86 _training_epoch = True 87 88 89 def on_fit_epoch_end(trainer): 90 global _training_epoch 91 if live and _training_epoch: 92 all_metrics = {**trainer.label_loss_items(trainer.tloss, prefix='train'), **trainer.metrics, **trainer.lr} 93 for metric, value in all_metrics.items(): 94 live.log_metric(metric, value) 95 96 if trainer.epoch == 0: 97 model_info = { 98 'model/parameters': get_num_params(trainer.model), 99 'model/GFLOPs': round(get_flops(trainer.model), 3), 100 'model/speed(ms)': round(trainer.validator.speed['inference'], 3)} 101 102 for metric, value in model_info.items(): 103 live.log_metric(metric, value, plot=False) 104 105 _log_plots(trainer.plots, 'train') 106 _log_plots(trainer.validator.plots, 'val') 107 108 live.next_step() 109 _training_epoch = False 110 111 112 def on_train_end(trainer): 113 if live: 114 # At the end log the best metrics. It runs validator on the best model internally. 
115 all_metrics = {**trainer.label_loss_items(trainer.tloss, prefix='train'), **trainer.metrics, **trainer.lr} 116 for metric, value in all_metrics.items(): 117 live.log_metric(metric, value, plot=False) 118 119 _log_plots(trainer.plots, 'eval') 120 _log_plots(trainer.validator.plots, 'eval') 121 _log_confusion_matrix(trainer.validator) 122 123 if trainer.best.exists(): 124 live.log_artifact(trainer.best, copy=True) 125 126 live.end() 127 128 129 callbacks = { 130 'on_pretrain_routine_start': on_pretrain_routine_start, 131 'on_pretrain_routine_end': on_pretrain_routine_end, 132 'on_train_start': on_train_start, 133 'on_train_epoch_start': on_train_epoch_start, 134 'on_fit_epoch_end': on_fit_epoch_end, 135 'on_train_end': on_train_end} if dvclive else {} 136 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ultralytics/yolo/utils/callbacks/dvc.py b/ultralytics/yolo/utils/callbacks/dvc.py --- a/ultralytics/yolo/utils/callbacks/dvc.py +++ b/ultralytics/yolo/utils/callbacks/dvc.py @@ -1,6 +1,8 @@ # Ultralytics YOLO 🚀, GPL-3.0 license import os +import pkg_resources as pkg + from ultralytics.yolo.utils import LOGGER, TESTS_RUNNING from ultralytics.yolo.utils.torch_utils import get_flops, get_num_params @@ -10,8 +12,12 @@ import dvclive assert not TESTS_RUNNING # do not log pytest - assert version('dvclive') -except (ImportError, AssertionError): + + ver = version('dvclive') + if pkg.parse_version(ver) < pkg.parse_version('2.11.0'): + LOGGER.debug(f'DVCLive is detected but version {ver} is incompatible (>=2.11 required).') + dvclive = None # noqa: F811 +except (ImportError, AssertionError, TypeError): dvclive = None # DVCLive logger instance
{"golden_diff": "diff --git a/ultralytics/yolo/utils/callbacks/dvc.py b/ultralytics/yolo/utils/callbacks/dvc.py\n--- a/ultralytics/yolo/utils/callbacks/dvc.py\n+++ b/ultralytics/yolo/utils/callbacks/dvc.py\n@@ -1,6 +1,8 @@\n # Ultralytics YOLO \ud83d\ude80, GPL-3.0 license\n import os\n \n+import pkg_resources as pkg\n+\n from ultralytics.yolo.utils import LOGGER, TESTS_RUNNING\n from ultralytics.yolo.utils.torch_utils import get_flops, get_num_params\n \n@@ -10,8 +12,12 @@\n import dvclive\n \n assert not TESTS_RUNNING # do not log pytest\n- assert version('dvclive')\n-except (ImportError, AssertionError):\n+\n+ ver = version('dvclive')\n+ if pkg.parse_version(ver) < pkg.parse_version('2.11.0'):\n+ LOGGER.debug(f'DVCLive is detected but version {ver} is incompatible (>=2.11 required).')\n+ dvclive = None # noqa: F811\n+except (ImportError, AssertionError, TypeError):\n dvclive = None\n \n # DVCLive logger instance\n", "issue": "dvclive.error\n### Search before asking\n\n- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.\n\n\n### YOLOv8 Component\n\nTraining\n\n### Bug\n\nHello,\r\nI have used several time the YOLOV8 without problem.\r\nToday, I tried to retrain my model and I updated the ultralytics packages and when I started the training, i got this error:\r\n\r\n Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/model.py\", line 371, in train\r\n self.trainer.train()\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/trainer.py\", line 192, in train\r\n self._do_train(world_size)\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/trainer.py\", line 275, in _do_train\r\n self._setup_train(world_size)\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/trainer.py\", line 268, in _setup_train\r\n self.run_callbacks('on_pretrain_routine_end')\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/engine/trainer.py\", line 165, in run_callbacks\r\n callback(self)\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/utils/callbacks/dvc.py\", line 76, in on_pretrain_routine_end\r\n _log_plots(trainer.plots, 'train')\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/utils/callbacks/dvc.py\", line 40, in _log_plots\r\n _log_images(name, prefix)\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/ultralytics/yolo/utils/callbacks/dvc.py\", line 33, in _log_images\r\n live.log_image(os.path.join(prefix, image_path.name), image_path)\r\n File \"/home/jupyter-moussa/.local/lib/python3.9/site-packages/dvclive/live.py\", line 249, in log_image\r\n raise InvalidDataTypeError(name, type(val))\r\n dvclive.error.InvalidDataTypeError: Data 'train/labels.jpg' has not supported type <class 'pathlib.PosixPath'>\r\n\r\nwhat do you think about it?\r\n\r\nThank you.\n\n### Environment\n\nultralytics 8.0.114\r\ntorch 2.0.1\r\ntorchaudio 0.13.1\r\ntorchvision 0.15.2\r\nUbuntu 22.04.1 LTS\n\n### Minimal Reproducible Example\n\n_No response_\n\n### Additional\n\n_No response_\n\n### Are you willing to submit a PR?\n\n- [ ] Yes I'd like to help by submitting a PR!\n", "before_files": [{"content": "# Ultralytics YOLO \ud83d\ude80, GPL-3.0 license\nimport os\n\nfrom ultralytics.yolo.utils import LOGGER, 
TESTS_RUNNING\nfrom ultralytics.yolo.utils.torch_utils import get_flops, get_num_params\n\ntry:\n from importlib.metadata import version\n\n import dvclive\n\n assert not TESTS_RUNNING # do not log pytest\n assert version('dvclive')\nexcept (ImportError, AssertionError):\n dvclive = None\n\n# DVCLive logger instance\nlive = None\n_processed_plots = {}\n\n# `on_fit_epoch_end` is called on final validation (probably need to be fixed)\n# for now this is the way we distinguish final evaluation of the best model vs\n# last epoch validation\n_training_epoch = False\n\n\ndef _logger_disabled():\n return os.getenv('ULTRALYTICS_DVC_DISABLED', 'false').lower() == 'true'\n\n\ndef _log_images(image_path, prefix=''):\n if live:\n live.log_image(os.path.join(prefix, image_path.name), image_path)\n\n\ndef _log_plots(plots, prefix=''):\n for name, params in plots.items():\n timestamp = params['timestamp']\n if _processed_plots.get(name, None) != timestamp:\n _log_images(name, prefix)\n _processed_plots[name] = timestamp\n\n\ndef _log_confusion_matrix(validator):\n targets = []\n preds = []\n matrix = validator.confusion_matrix.matrix\n names = list(validator.names.values())\n if validator.confusion_matrix.task == 'detect':\n names += ['background']\n\n for ti, pred in enumerate(matrix.T.astype(int)):\n for pi, num in enumerate(pred):\n targets.extend([names[ti]] * num)\n preds.extend([names[pi]] * num)\n\n live.log_sklearn_plot('confusion_matrix', targets, preds, name='cf.json', normalized=True)\n\n\ndef on_pretrain_routine_start(trainer):\n try:\n global live\n if not _logger_disabled():\n live = dvclive.Live(save_dvc_exp=True)\n LOGGER.info(\n 'DVCLive is detected and auto logging is enabled (can be disabled with `ULTRALYTICS_DVC_DISABLED=true`).'\n )\n else:\n LOGGER.debug('DVCLive is detected and auto logging is disabled via `ULTRALYTICS_DVC_DISABLED`.')\n live = None\n except Exception as e:\n LOGGER.warning(f'WARNING \u26a0\ufe0f DVCLive installed but not initialized correctly, not logging this run. {e}')\n\n\ndef on_pretrain_routine_end(trainer):\n _log_plots(trainer.plots, 'train')\n\n\ndef on_train_start(trainer):\n if live:\n live.log_params(trainer.args)\n\n\ndef on_train_epoch_start(trainer):\n global _training_epoch\n _training_epoch = True\n\n\ndef on_fit_epoch_end(trainer):\n global _training_epoch\n if live and _training_epoch:\n all_metrics = {**trainer.label_loss_items(trainer.tloss, prefix='train'), **trainer.metrics, **trainer.lr}\n for metric, value in all_metrics.items():\n live.log_metric(metric, value)\n\n if trainer.epoch == 0:\n model_info = {\n 'model/parameters': get_num_params(trainer.model),\n 'model/GFLOPs': round(get_flops(trainer.model), 3),\n 'model/speed(ms)': round(trainer.validator.speed['inference'], 3)}\n\n for metric, value in model_info.items():\n live.log_metric(metric, value, plot=False)\n\n _log_plots(trainer.plots, 'train')\n _log_plots(trainer.validator.plots, 'val')\n\n live.next_step()\n _training_epoch = False\n\n\ndef on_train_end(trainer):\n if live:\n # At the end log the best metrics. 
It runs validator on the best model internally.\n all_metrics = {**trainer.label_loss_items(trainer.tloss, prefix='train'), **trainer.metrics, **trainer.lr}\n for metric, value in all_metrics.items():\n live.log_metric(metric, value, plot=False)\n\n _log_plots(trainer.plots, 'eval')\n _log_plots(trainer.validator.plots, 'eval')\n _log_confusion_matrix(trainer.validator)\n\n if trainer.best.exists():\n live.log_artifact(trainer.best, copy=True)\n\n live.end()\n\n\ncallbacks = {\n 'on_pretrain_routine_start': on_pretrain_routine_start,\n 'on_pretrain_routine_end': on_pretrain_routine_end,\n 'on_train_start': on_train_start,\n 'on_train_epoch_start': on_train_epoch_start,\n 'on_fit_epoch_end': on_fit_epoch_end,\n 'on_train_end': on_train_end} if dvclive else {}\n", "path": "ultralytics/yolo/utils/callbacks/dvc.py"}], "after_files": [{"content": "# Ultralytics YOLO \ud83d\ude80, GPL-3.0 license\nimport os\n\nimport pkg_resources as pkg\n\nfrom ultralytics.yolo.utils import LOGGER, TESTS_RUNNING\nfrom ultralytics.yolo.utils.torch_utils import get_flops, get_num_params\n\ntry:\n from importlib.metadata import version\n\n import dvclive\n\n assert not TESTS_RUNNING # do not log pytest\n\n ver = version('dvclive')\n if pkg.parse_version(ver) < pkg.parse_version('2.11.0'):\n LOGGER.debug(f'DVCLive is detected but version {ver} is incompatible (>=2.11 required).')\n dvclive = None # noqa: F811\nexcept (ImportError, AssertionError, TypeError):\n dvclive = None\n\n# DVCLive logger instance\nlive = None\n_processed_plots = {}\n\n# `on_fit_epoch_end` is called on final validation (probably need to be fixed)\n# for now this is the way we distinguish final evaluation of the best model vs\n# last epoch validation\n_training_epoch = False\n\n\ndef _logger_disabled():\n return os.getenv('ULTRALYTICS_DVC_DISABLED', 'false').lower() == 'true'\n\n\ndef _log_images(image_path, prefix=''):\n if live:\n live.log_image(os.path.join(prefix, image_path.name), image_path)\n\n\ndef _log_plots(plots, prefix=''):\n for name, params in plots.items():\n timestamp = params['timestamp']\n if _processed_plots.get(name, None) != timestamp:\n _log_images(name, prefix)\n _processed_plots[name] = timestamp\n\n\ndef _log_confusion_matrix(validator):\n targets = []\n preds = []\n matrix = validator.confusion_matrix.matrix\n names = list(validator.names.values())\n if validator.confusion_matrix.task == 'detect':\n names += ['background']\n\n for ti, pred in enumerate(matrix.T.astype(int)):\n for pi, num in enumerate(pred):\n targets.extend([names[ti]] * num)\n preds.extend([names[pi]] * num)\n\n live.log_sklearn_plot('confusion_matrix', targets, preds, name='cf.json', normalized=True)\n\n\ndef on_pretrain_routine_start(trainer):\n try:\n global live\n if not _logger_disabled():\n live = dvclive.Live(save_dvc_exp=True)\n LOGGER.info(\n 'DVCLive is detected and auto logging is enabled (can be disabled with `ULTRALYTICS_DVC_DISABLED=true`).'\n )\n else:\n LOGGER.debug('DVCLive is detected and auto logging is disabled via `ULTRALYTICS_DVC_DISABLED`.')\n live = None\n except Exception as e:\n LOGGER.warning(f'WARNING \u26a0\ufe0f DVCLive installed but not initialized correctly, not logging this run. 
{e}')\n\n\ndef on_pretrain_routine_end(trainer):\n _log_plots(trainer.plots, 'train')\n\n\ndef on_train_start(trainer):\n if live:\n live.log_params(trainer.args)\n\n\ndef on_train_epoch_start(trainer):\n global _training_epoch\n _training_epoch = True\n\n\ndef on_fit_epoch_end(trainer):\n global _training_epoch\n if live and _training_epoch:\n all_metrics = {**trainer.label_loss_items(trainer.tloss, prefix='train'), **trainer.metrics, **trainer.lr}\n for metric, value in all_metrics.items():\n live.log_metric(metric, value)\n\n if trainer.epoch == 0:\n model_info = {\n 'model/parameters': get_num_params(trainer.model),\n 'model/GFLOPs': round(get_flops(trainer.model), 3),\n 'model/speed(ms)': round(trainer.validator.speed['inference'], 3)}\n\n for metric, value in model_info.items():\n live.log_metric(metric, value, plot=False)\n\n _log_plots(trainer.plots, 'train')\n _log_plots(trainer.validator.plots, 'val')\n\n live.next_step()\n _training_epoch = False\n\n\ndef on_train_end(trainer):\n if live:\n # At the end log the best metrics. It runs validator on the best model internally.\n all_metrics = {**trainer.label_loss_items(trainer.tloss, prefix='train'), **trainer.metrics, **trainer.lr}\n for metric, value in all_metrics.items():\n live.log_metric(metric, value, plot=False)\n\n _log_plots(trainer.plots, 'eval')\n _log_plots(trainer.validator.plots, 'eval')\n _log_confusion_matrix(trainer.validator)\n\n if trainer.best.exists():\n live.log_artifact(trainer.best, copy=True)\n\n live.end()\n\n\ncallbacks = {\n 'on_pretrain_routine_start': on_pretrain_routine_start,\n 'on_pretrain_routine_end': on_pretrain_routine_end,\n 'on_train_start': on_train_start,\n 'on_train_epoch_start': on_train_epoch_start,\n 'on_fit_epoch_end': on_fit_epoch_end,\n 'on_train_end': on_train_end} if dvclive else {}\n", "path": "ultralytics/yolo/utils/callbacks/dvc.py"}]}
num_tokens: 2,312
num_tokens_diff: 276
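
Note: the golden diff for the row above guards the dvclive integration behind a minimum version instead of letting it fail later inside `live.log_image()`. A condensed sketch of that guard, using the same `pkg_resources` comparison as the diff:

```python
import pkg_resources as pkg

try:
    from importlib.metadata import version

    import dvclive

    ver = version('dvclive')
    if pkg.parse_version(ver) < pkg.parse_version('2.11.0'):
        # Older dvclive raises InvalidDataTypeError for pathlib.Path
        # inputs (see the traceback in the issue above), so disable
        # the integration rather than crash mid-training.
        dvclive = None
except (ImportError, AssertionError, TypeError):
    dvclive = None
```
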
problem_id: gh_patches_debug_14714
source: rasdani/github-patches
task_type: git_diff
in_source_id: bokeh__bokeh-8466
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- "CustomJS for Selections" Example in Docs Broken In the latest version of the docs, it appears [this example]( https://bokeh.pydata.org/en/latest/docs/user_guide/interaction/callbacks.html#customjs-for-selections ) is broken. This is also true of the example in the Bokeh 1.0.0 docs. Selecting points in the plot on the left does not result in points being shown in the right plot. Compare this to [the same plot using Bokeh 0.13.0]( https://bokeh.pydata.org/en/0.13.0/docs/user_guide/interaction/callbacks.html#customjs-for-selections ), which seems to work without issues. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py` Content: ``` 1 from random import random 2 3 from bokeh.layouts import row 4 from bokeh.models import CustomJS, ColumnDataSource 5 from bokeh.plotting import figure, output_file, show 6 7 output_file("callback.html") 8 9 x = [random() for x in range(500)] 10 y = [random() for y in range(500)] 11 12 s1 = ColumnDataSource(data=dict(x=x, y=y)) 13 p1 = figure(plot_width=400, plot_height=400, tools="lasso_select", title="Select Here") 14 p1.circle('x', 'y', source=s1, alpha=0.6) 15 16 s2 = ColumnDataSource(data=dict(x=[], y=[])) 17 p2 = figure(plot_width=400, plot_height=400, x_range=(0, 1), y_range=(0, 1), 18 tools="", title="Watch Here") 19 p2.circle('x', 'y', source=s2, alpha=0.6) 20 21 s1.callback = CustomJS(args=dict(s2=s2), code=""" 22 var inds = cb_obj.selected.indices; 23 var d1 = cb_obj.data; 24 var d2 = s2.data; 25 d2['x'] = [] 26 d2['y'] = [] 27 for (var i = 0; i < inds.length; i++) { 28 d2['x'].push(d1['x'][inds[i]]) 29 d2['y'].push(d1['y'][inds[i]]) 30 } 31 s2.change.emit(); 32 """) 33 34 layout = row(p1, p2) 35 36 show(layout) 37 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py b/sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py --- a/sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py +++ b/sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py @@ -18,9 +18,9 @@ tools="", title="Watch Here") p2.circle('x', 'y', source=s2, alpha=0.6) -s1.callback = CustomJS(args=dict(s2=s2), code=""" - var inds = cb_obj.selected.indices; - var d1 = cb_obj.data; +s1.selected.js_on_change('indices', CustomJS(args=dict(s1=s1, s2=s2), code=""" + var inds = cb_obj.indices; + var d1 = s1.data; var d2 = s2.data; d2['x'] = [] d2['y'] = [] @@ -30,6 +30,7 @@ } s2.change.emit(); """) +) layout = row(p1, p2)
{"golden_diff": "diff --git a/sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py b/sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py\n--- a/sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py\n+++ b/sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py\n@@ -18,9 +18,9 @@\n tools=\"\", title=\"Watch Here\")\n p2.circle('x', 'y', source=s2, alpha=0.6)\n \n-s1.callback = CustomJS(args=dict(s2=s2), code=\"\"\"\n- var inds = cb_obj.selected.indices;\n- var d1 = cb_obj.data;\n+s1.selected.js_on_change('indices', CustomJS(args=dict(s1=s1, s2=s2), code=\"\"\"\n+ var inds = cb_obj.indices;\n+ var d1 = s1.data;\n var d2 = s2.data;\n d2['x'] = []\n d2['y'] = []\n@@ -30,6 +30,7 @@\n }\n s2.change.emit();\n \"\"\")\n+)\n \n layout = row(p1, p2)\n", "issue": "\"CustomJS for Selections\" Example in Docs Broken\nIn the latest version of the docs, it appears [this example]( https://bokeh.pydata.org/en/latest/docs/user_guide/interaction/callbacks.html#customjs-for-selections ) is broken. This is also true of the example in the Bokeh 1.0.0 docs. Selecting points in the plot on the left does not result in points being shown in the right plot. Compare this to [the same plot using Bokeh 0.13.0]( https://bokeh.pydata.org/en/0.13.0/docs/user_guide/interaction/callbacks.html#customjs-for-selections ), which seems to work without issues.\n", "before_files": [{"content": "from random import random\n\nfrom bokeh.layouts import row\nfrom bokeh.models import CustomJS, ColumnDataSource\nfrom bokeh.plotting import figure, output_file, show\n\noutput_file(\"callback.html\")\n\nx = [random() for x in range(500)]\ny = [random() for y in range(500)]\n\ns1 = ColumnDataSource(data=dict(x=x, y=y))\np1 = figure(plot_width=400, plot_height=400, tools=\"lasso_select\", title=\"Select Here\")\np1.circle('x', 'y', source=s1, alpha=0.6)\n\ns2 = ColumnDataSource(data=dict(x=[], y=[]))\np2 = figure(plot_width=400, plot_height=400, x_range=(0, 1), y_range=(0, 1),\n tools=\"\", title=\"Watch Here\")\np2.circle('x', 'y', source=s2, alpha=0.6)\n\ns1.callback = CustomJS(args=dict(s2=s2), code=\"\"\"\n var inds = cb_obj.selected.indices;\n var d1 = cb_obj.data;\n var d2 = s2.data;\n d2['x'] = []\n d2['y'] = []\n for (var i = 0; i < inds.length; i++) {\n d2['x'].push(d1['x'][inds[i]])\n d2['y'].push(d1['y'][inds[i]])\n }\n s2.change.emit();\n \"\"\")\n\nlayout = row(p1, p2)\n\nshow(layout)\n", "path": "sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py"}], "after_files": [{"content": "from random import random\n\nfrom bokeh.layouts import row\nfrom bokeh.models import CustomJS, ColumnDataSource\nfrom bokeh.plotting import figure, output_file, show\n\noutput_file(\"callback.html\")\n\nx = [random() for x in range(500)]\ny = [random() for y in range(500)]\n\ns1 = ColumnDataSource(data=dict(x=x, y=y))\np1 = figure(plot_width=400, plot_height=400, tools=\"lasso_select\", title=\"Select Here\")\np1.circle('x', 'y', source=s1, alpha=0.6)\n\ns2 = ColumnDataSource(data=dict(x=[], y=[]))\np2 = figure(plot_width=400, plot_height=400, x_range=(0, 1), y_range=(0, 1),\n tools=\"\", title=\"Watch Here\")\np2.circle('x', 'y', source=s2, alpha=0.6)\n\ns1.selected.js_on_change('indices', CustomJS(args=dict(s1=s1, s2=s2), code=\"\"\"\n var inds = cb_obj.indices;\n var d1 = s1.data;\n var d2 = s2.data;\n d2['x'] = []\n d2['y'] = []\n for (var i = 0; i < inds.length; i++) {\n d2['x'].push(d1['x'][inds[i]])\n 
d2['y'].push(d1['y'][inds[i]])\n }\n s2.change.emit();\n \"\"\")\n)\n\nlayout = row(p1, p2)\n\nshow(layout)\n", "path": "sphinx/source/docs/user_guide/examples/interaction_callbacks_for_selections.py"}]}
822
249
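Editor's note on the record above: the fix replaces the `DataSource.callback` property, which was removed from Bokeh, with a JS callback registered on the data source's `Selection` model. The sketch below is an illustration of that pattern only; the toy data, figure titles, and variable names are placeholders chosen here, not values taken from the record.

```python
# Post-1.0 Bokeh selection callback: attach CustomJS to the Selection model's
# "indices" property instead of the removed DataSource.callback attribute.
from bokeh.layouts import row
from bokeh.models import ColumnDataSource, CustomJS
from bokeh.plotting import figure, show

s1 = ColumnDataSource(data=dict(x=[0.1, 0.5, 0.9], y=[0.2, 0.8, 0.4]))
s2 = ColumnDataSource(data=dict(x=[], y=[]))

p1 = figure(tools="lasso_select", title="Select Here")
p1.circle('x', 'y', source=s1)
p2 = figure(x_range=(0, 1), y_range=(0, 1), tools="", title="Watch Here")
p2.circle('x', 'y', source=s2)

s1.selected.js_on_change('indices', CustomJS(args=dict(s1=s1, s2=s2), code="""
    const inds = cb_obj.indices;  // cb_obj is now the Selection, not the source
    const d1 = s1.data;           // so the watched source must come in via args
    s2.data = {x: inds.map(i => d1['x'][i]),
               y: inds.map(i => d1['y'][i])};
"""))

show(row(p1, p2))
```

Passing `s1` through `args` is the design point the golden diff makes: once the callback hangs off the selection, `cb_obj.data` no longer exists, which is why the old example silently stopped mirroring points.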
gh_patches_debug_2590
rasdani/github-patches
git_diff
cloud-custodian__cloud-custodian-8120
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Wafv2 logging error when using cloudtrail mode ### Describe the bug When my wafv2 logging policy runs after disabling logging I receive an error on a cloudtrail policy when using the DeleteLoggingConfiguration event. ### What did you expect to happen? I expected the policy to match my resource. ### Cloud Provider Amazon Web Services (AWS) ### Cloud Custodian version and dependency information ```shell custodian version --debug Please copy/paste the following info along with any bug reports: Custodian: 0.9.21 Python: 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)] Platform: posix.uname_result(sysname='Darwin', nodename='kristen-MacBook-Pro', release='21.4.0', version='Darwin Kernel Version 21.4.0: Mon Feb 21 20:35:58 PST 2022; root:xnu-8020.101.4~2/RELEASE_ARM64_T6000', machine='x86_64') Using venv: True Docker: False Installed: argcomplete==2.0.0 attrs==22.1.0 boto3==1.26.30 botocore==1.29.30 docutils==0.17.1 importlib-metadata==4.13.0 importlib-resources==5.10.1 jmespath==1.0.1 jsonschema==4.17.3 pkgutil-resolve-name==1.3.10 pyrsistent==0.19.2 python-dateutil==2.8.2 pyyaml==6.0 s3transfer==0.6.0 six==1.16.0 tabulate==0.8.10 urllib3==1.26.13 zipp==3.11.0 ``` ### Policy ```shell Policy example: - name: wafv2-log-testing resource: aws.wafv2 mode: role: arn:aws:iam::testing type: cloudtrail events: - event: DeleteLoggingConfiguration ids: requestParameters.resourceArn source: wafv2.amazonaws.com filters: - not: - type: logging key: ResourceArn value: present ``` ### Relevant log/traceback output ```shell Error when policy runs: [ERROR] 2023-01-06T18:48:11.706Z 163a02a3-69d6-4d43-a307-365ddcb8ead7 error during policy execution Traceback (most recent call last): File "/var/task/c7n/handler.py", line 165, in dispatch_event p.push(event, context) File "/var/task/c7n/policy.py", line 1288, in push return mode.run(event, lambda_ctx) File "/var/task/c7n/policy.py", line 487, in run resources = self.resolve_resources(event) File "/var/task/c7n/policy.py", line 691, in resolve_resources return super().resolve_resources(event) File "/var/task/c7n/policy.py", line 469, in resolve_resources resources = self.policy.resource_manager.get_resources(resource_ids) File "/var/task/c7n/query.py", line 576, in get_resources resources = self.source.get_resources(ids) File "/var/task/c7n/query.py", line 227, in get_resources return self.query.get(self.manager, ids) File "/var/task/c7n/query.py", line 100, in get resources = self.filter(resource_manager, **params) File "/var/task/c7n/query.py", line 79, in filter return self._invoke_client_enum( File "/var/task/c7n/query.py", line 60, in _invoke_client_enum data = op(**params) File "/var/runtime/botocore/client.py", line 391, in _api_call return self._make_api_call(operation_name, kwargs) File "/var/runtime/botocore/client.py", line 691, in _make_api_call request_dict = self._convert_to_request_dict( File "/var/runtime/botocore/client.py", line 739, in _convert_to_request_dict request_dict = self._serializer.serialize_to_request( File "/var/runtime/botocore/validate.py", line 360, in serialize_to_request raise ParamValidationError(report=report.generate_report()) botocore.exceptions.ParamValidationError: Parameter validation failed: Missing required parameter in input: "Scope" ``` ### Extra information or context The pull mode version of this policy works fine. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `c7n/resources/waf.py` Content: ``` 1 # Copyright The Cloud Custodian Authors. 
2 # SPDX-License-Identifier: Apache-2.0 3 from c7n.manager import resources 4 from c7n.query import ConfigSource, QueryResourceManager, TypeInfo, DescribeSource 5 from c7n.tags import universal_augment 6 from c7n.filters import ValueFilter 7 from c7n.utils import type_schema, local_session 8 9 10 class DescribeRegionalWaf(DescribeSource): 11 def augment(self, resources): 12 resources = super().augment(resources) 13 return universal_augment(self.manager, resources) 14 15 16 class DescribeWafV2(DescribeSource): 17 def augment(self, resources): 18 return universal_augment(self.manager, resources) 19 20 # set REGIONAL for Scope as default 21 def get_query_params(self, query): 22 q = super(DescribeWafV2, self).get_query_params(query) 23 if q: 24 if 'Scope' not in q: 25 q['Scope'] = 'REGIONAL' 26 else: 27 q = {'Scope': 'REGIONAL'} 28 return q 29 30 31 @resources.register('waf') 32 class WAF(QueryResourceManager): 33 34 class resource_type(TypeInfo): 35 service = "waf" 36 enum_spec = ("list_web_acls", "WebACLs", None) 37 detail_spec = ("get_web_acl", "WebACLId", "WebACLId", "WebACL") 38 name = "Name" 39 id = "WebACLId" 40 dimension = "WebACL" 41 cfn_type = config_type = "AWS::WAF::WebACL" 42 arn_type = "webacl" 43 # override defaults to casing issues 44 permissions_enum = ('waf:ListWebACLs',) 45 permissions_augment = ('waf:GetWebACL',) 46 47 48 @resources.register('waf-regional') 49 class RegionalWAF(QueryResourceManager): 50 51 class resource_type(TypeInfo): 52 service = "waf-regional" 53 enum_spec = ("list_web_acls", "WebACLs", None) 54 detail_spec = ("get_web_acl", "WebACLId", "WebACLId", "WebACL") 55 name = "Name" 56 id = "WebACLId" 57 dimension = "WebACL" 58 cfn_type = config_type = "AWS::WAFRegional::WebACL" 59 arn_type = "webacl" 60 # override defaults to casing issues 61 permissions_enum = ('waf-regional:ListWebACLs',) 62 permissions_augment = ('waf-regional:GetWebACL',) 63 universal_taggable = object() 64 65 source_mapping = { 66 'describe': DescribeRegionalWaf, 67 'config': ConfigSource 68 } 69 70 71 @resources.register('wafv2') 72 class WAFV2(QueryResourceManager): 73 74 class resource_type(TypeInfo): 75 service = "wafv2" 76 enum_spec = ("list_web_acls", "WebACLs", None) 77 detail_spec = ("get_web_acl", "Id", "Id", "WebACL") 78 name = "Name" 79 id = "Id" 80 dimension = "WebACL" 81 cfn_type = config_type = "AWS::WAFv2::WebACL" 82 arn_type = "webacl" 83 # override defaults to casing issues 84 permissions_enum = ('wafv2:ListWebACLs',) 85 permissions_augment = ('wafv2:GetWebACL',) 86 universal_taggable = object() 87 88 source_mapping = { 89 'describe': DescribeWafV2, 90 'config': ConfigSource 91 } 92 93 94 @WAFV2.filter_registry.register('logging') 95 class WAFV2LoggingFilter(ValueFilter): 96 """ 97 Filter by wafv2 logging configuration 98 99 :example: 100 101 .. 
code-block:: yaml 102 103 policies: 104 - name: wafv2-logging-enabled 105 resource: aws.wafv2 106 filters: 107 - not: 108 - type: logging 109 key: ResourceArn 110 value: present 111 112 - name: check-redacted-fields 113 resource: aws.wafv2 114 filters: 115 - type: logging 116 key: RedactedFields[].SingleHeader.Name 117 value: user-agent 118 op: in 119 value_type: swap 120 """ 121 122 schema = type_schema('logging', rinherit=ValueFilter.schema) 123 permissions = ('wafv2:GetLoggingConfiguration', ) 124 annotation_key = 'c7n:WafV2LoggingConfiguration' 125 126 def process(self, resources, event=None): 127 client = local_session(self.manager.session_factory).client( 128 'wafv2', region_name=self.manager.region) 129 logging_confs = client.list_logging_configurations( 130 Scope='REGIONAL')['LoggingConfigurations'] 131 resource_map = {r['ARN']: r for r in resources} 132 for lc in logging_confs: 133 if lc['ResourceArn'] in resource_map: 134 resource_map[lc['ResourceArn']][self.annotation_key] = lc 135 136 resources = list(resource_map.values()) 137 138 return [ 139 r for r in resources if self.match( 140 r.get(self.annotation_key, {}))] 141 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/c7n/resources/waf.py b/c7n/resources/waf.py --- a/c7n/resources/waf.py +++ b/c7n/resources/waf.py @@ -27,6 +27,10 @@ q = {'Scope': 'REGIONAL'} return q + def get_resources(self, ids): + resources = self.query.filter(self.manager, **self.get_query_params(None)) + return [r for r in resources if r[self.manager.resource_type.id] in ids] + @resources.register('waf') class WAF(QueryResourceManager):
{"golden_diff": "diff --git a/c7n/resources/waf.py b/c7n/resources/waf.py\n--- a/c7n/resources/waf.py\n+++ b/c7n/resources/waf.py\n@@ -27,6 +27,10 @@\n q = {'Scope': 'REGIONAL'}\n return q\n \n+ def get_resources(self, ids):\n+ resources = self.query.filter(self.manager, **self.get_query_params(None))\n+ return [r for r in resources if r[self.manager.resource_type.id] in ids]\n+\n \n @resources.register('waf')\n class WAF(QueryResourceManager):\n", "issue": "Wafv2 logging error when using cloudtrail mode\n### Describe the bug\n\nWhen my wafv2 logging policy runs after disabling logging I receive an error on a cloudtrail policy when using the DeleteLoggingConfiguration event.\n\n### What did you expect to happen?\n\nI expected the policy to match my resource.\n\n### Cloud Provider\n\nAmazon Web Services (AWS)\n\n### Cloud Custodian version and dependency information\n\n```shell\ncustodian version --debug\r\n\r\nPlease copy/paste the following info along with any bug reports:\r\n\r\nCustodian: 0.9.21\r\nPython: 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) \r\n [Clang 6.0 (clang-600.0.57)]\r\nPlatform: posix.uname_result(sysname='Darwin', nodename='kristen-MacBook-Pro', release='21.4.0', version='Darwin Kernel Version 21.4.0: Mon Feb 21 20:35:58 PST 2022; root:xnu-8020.101.4~2/RELEASE_ARM64_T6000', machine='x86_64')\r\nUsing venv: True\r\nDocker: False\r\nInstalled: \r\n\r\nargcomplete==2.0.0\r\nattrs==22.1.0\r\nboto3==1.26.30\r\nbotocore==1.29.30\r\ndocutils==0.17.1\r\nimportlib-metadata==4.13.0\r\nimportlib-resources==5.10.1\r\njmespath==1.0.1\r\njsonschema==4.17.3\r\npkgutil-resolve-name==1.3.10\r\npyrsistent==0.19.2\r\npython-dateutil==2.8.2\r\npyyaml==6.0\r\ns3transfer==0.6.0\r\nsix==1.16.0\r\ntabulate==0.8.10\r\nurllib3==1.26.13\r\nzipp==3.11.0\n```\n\n\n### Policy\n\n```shell\nPolicy example:\r\n\r\n- name: wafv2-log-testing\r\n resource: aws.wafv2\r\n mode:\r\n role: arn:aws:iam::testing\r\n type: cloudtrail\r\n events: \r\n - event: DeleteLoggingConfiguration\r\n ids: requestParameters.resourceArn\r\n source: wafv2.amazonaws.com\r\n filters: \r\n - not:\r\n - type: logging\r\n key: ResourceArn\r\n value: present\n```\n\n\n### Relevant log/traceback output\n\n```shell\nError when policy runs:\r\n\r\n\r\n[ERROR]\t2023-01-06T18:48:11.706Z\t163a02a3-69d6-4d43-a307-365ddcb8ead7\terror during policy executionTraceback (most recent call last): File \"/var/task/c7n/handler.py\", line 165, in dispatch_event p.push(event, context) File \"/var/task/c7n/policy.py\", line 1288, in push return mode.run(event, lambda_ctx) File \"/var/task/c7n/policy.py\", line 487, in run resources = self.resolve_resources(event) File \"/var/task/c7n/policy.py\", line 691, in resolve_resources return super().resolve_resources(event) File \"/var/task/c7n/policy.py\", line 469, in resolve_resources resources = self.policy.resource_manager.get_resources(resource_ids) File \"/var/task/c7n/query.py\", line 576, in get_resources resources = self.source.get_resources(ids) File \"/var/task/c7n/query.py\", line 227, in get_resources return self.query.get(self.manager, ids) File \"/var/task/c7n/query.py\", line 100, in get resources = self.filter(resource_manager, **params) File \"/var/task/c7n/query.py\", line 79, in filter return self._invoke_client_enum( File \"/var/task/c7n/query.py\", line 60, in _invoke_client_enum data = op(**params) File \"/var/runtime/botocore/client.py\", line 391, in _api_call return self._make_api_call(operation_name, kwargs) File \"/var/runtime/botocore/client.py\", line 691, in 
_make_api_call request_dict = self._convert_to_request_dict( File \"/var/runtime/botocore/client.py\", line 739, in _convert_to_request_dict request_dict = self._serializer.serialize_to_request( File \"/var/runtime/botocore/validate.py\", line 360, in serialize_to_request raise ParamValidationError(report=report.generate_report())botocore.exceptions.ParamValidationError: Parameter validation failed:Missing required parameter in input: \"Scope\" | [ERROR] 2023-01-06T18:48:11.706Z 163a02a3-69d6-4d43-a307-365ddcb8ead7 error during policy execution Traceback (most recent call last): File \"/var/task/c7n/handler.py\", line 165, in dispatch_event p.push(event, context) File \"/var/task/c7n/policy.py\", line 1288, in push return mode.run(event, lambda_ctx) File \"/var/task/c7n/policy.py\", line 487, in run resources = self.resolve_resources(event) File \"/var/task/c7n/policy.py\", line 691, in resolve_resources return super().resolve_resources(event) File \"/var/task/c7n/policy.py\", line 469, in resolve_resources resources = self.policy.resource_manager.get_resources(resource_ids) File \"/var/task/c7n/query.py\", line 576, in get_resources resources = self.source.get_resources(ids) File \"/var/task/c7n/query.py\", line 227, in get_resources return self.query.get(self.manager, ids) File \"/var/task/c7n/query.py\", line 100, in get resources = self.filter(resource_manager, **params) File \"/var/task/c7n/query.py\", line 79, in filter return self._invoke_client_enum( File \"/var/task/c7n/query.py\", line 60, in _invoke_client_enum data = op(**params) File \"/var/runtime/botocore/client.py\", line 391, in _api_call return self._make_api_call(operation_name, kwargs) File \"/var/runtime/botocore/client.py\", line 691, in _make_api_call request_dict = self._convert_to_request_dict( File \"/var/runtime/botocore/client.py\", line 739, in _convert_to_request_dict request_dict = self._serializer.serialize_to_request( File \"/var/runtime/botocore/validate.py\", line 360, in serialize_to_request raise ParamValidationError(report=report.generate_report()) botocore.exceptions.ParamValidationError: Parameter validation failed: Missing required parameter in input: \"Scope\"\r\n-- | --\n```\n\n\n### Extra information or context\n\nThe pull mode version of this policy works fine.\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nfrom c7n.manager import resources\nfrom c7n.query import ConfigSource, QueryResourceManager, TypeInfo, DescribeSource\nfrom c7n.tags import universal_augment\nfrom c7n.filters import ValueFilter\nfrom c7n.utils import type_schema, local_session\n\n\nclass DescribeRegionalWaf(DescribeSource):\n def augment(self, resources):\n resources = super().augment(resources)\n return universal_augment(self.manager, resources)\n\n\nclass DescribeWafV2(DescribeSource):\n def augment(self, resources):\n return universal_augment(self.manager, resources)\n\n # set REGIONAL for Scope as default\n def get_query_params(self, query):\n q = super(DescribeWafV2, self).get_query_params(query)\n if q:\n if 'Scope' not in q:\n q['Scope'] = 'REGIONAL'\n else:\n q = {'Scope': 'REGIONAL'}\n return q\n\n\[email protected]('waf')\nclass WAF(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"waf\"\n enum_spec = (\"list_web_acls\", \"WebACLs\", None)\n detail_spec = (\"get_web_acl\", \"WebACLId\", \"WebACLId\", \"WebACL\")\n name = \"Name\"\n id = \"WebACLId\"\n dimension = \"WebACL\"\n cfn_type = config_type = \"AWS::WAF::WebACL\"\n 
arn_type = \"webacl\"\n # override defaults to casing issues\n permissions_enum = ('waf:ListWebACLs',)\n permissions_augment = ('waf:GetWebACL',)\n\n\[email protected]('waf-regional')\nclass RegionalWAF(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"waf-regional\"\n enum_spec = (\"list_web_acls\", \"WebACLs\", None)\n detail_spec = (\"get_web_acl\", \"WebACLId\", \"WebACLId\", \"WebACL\")\n name = \"Name\"\n id = \"WebACLId\"\n dimension = \"WebACL\"\n cfn_type = config_type = \"AWS::WAFRegional::WebACL\"\n arn_type = \"webacl\"\n # override defaults to casing issues\n permissions_enum = ('waf-regional:ListWebACLs',)\n permissions_augment = ('waf-regional:GetWebACL',)\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeRegionalWaf,\n 'config': ConfigSource\n }\n\n\[email protected]('wafv2')\nclass WAFV2(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"wafv2\"\n enum_spec = (\"list_web_acls\", \"WebACLs\", None)\n detail_spec = (\"get_web_acl\", \"Id\", \"Id\", \"WebACL\")\n name = \"Name\"\n id = \"Id\"\n dimension = \"WebACL\"\n cfn_type = config_type = \"AWS::WAFv2::WebACL\"\n arn_type = \"webacl\"\n # override defaults to casing issues\n permissions_enum = ('wafv2:ListWebACLs',)\n permissions_augment = ('wafv2:GetWebACL',)\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeWafV2,\n 'config': ConfigSource\n }\n\n\[email protected]_registry.register('logging')\nclass WAFV2LoggingFilter(ValueFilter):\n \"\"\"\n Filter by wafv2 logging configuration\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: wafv2-logging-enabled\n resource: aws.wafv2\n filters:\n - not:\n - type: logging\n key: ResourceArn\n value: present\n\n - name: check-redacted-fields\n resource: aws.wafv2\n filters:\n - type: logging\n key: RedactedFields[].SingleHeader.Name\n value: user-agent\n op: in\n value_type: swap\n \"\"\"\n\n schema = type_schema('logging', rinherit=ValueFilter.schema)\n permissions = ('wafv2:GetLoggingConfiguration', )\n annotation_key = 'c7n:WafV2LoggingConfiguration'\n\n def process(self, resources, event=None):\n client = local_session(self.manager.session_factory).client(\n 'wafv2', region_name=self.manager.region)\n logging_confs = client.list_logging_configurations(\n Scope='REGIONAL')['LoggingConfigurations']\n resource_map = {r['ARN']: r for r in resources}\n for lc in logging_confs:\n if lc['ResourceArn'] in resource_map:\n resource_map[lc['ResourceArn']][self.annotation_key] = lc\n\n resources = list(resource_map.values())\n\n return [\n r for r in resources if self.match(\n r.get(self.annotation_key, {}))]\n", "path": "c7n/resources/waf.py"}], "after_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nfrom c7n.manager import resources\nfrom c7n.query import ConfigSource, QueryResourceManager, TypeInfo, DescribeSource\nfrom c7n.tags import universal_augment\nfrom c7n.filters import ValueFilter\nfrom c7n.utils import type_schema, local_session\n\n\nclass DescribeRegionalWaf(DescribeSource):\n def augment(self, resources):\n resources = super().augment(resources)\n return universal_augment(self.manager, resources)\n\n\nclass DescribeWafV2(DescribeSource):\n def augment(self, resources):\n return universal_augment(self.manager, resources)\n\n # set REGIONAL for Scope as default\n def get_query_params(self, query):\n q = super(DescribeWafV2, self).get_query_params(query)\n if q:\n if 'Scope' not in q:\n q['Scope'] = 'REGIONAL'\n 
else:\n q = {'Scope': 'REGIONAL'}\n return q\n\n def get_resources(self, ids):\n resources = self.query.filter(self.manager, **self.get_query_params(None))\n return [r for r in resources if r[self.manager.resource_type.id] in ids]\n\n\[email protected]('waf')\nclass WAF(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"waf\"\n enum_spec = (\"list_web_acls\", \"WebACLs\", None)\n detail_spec = (\"get_web_acl\", \"WebACLId\", \"WebACLId\", \"WebACL\")\n name = \"Name\"\n id = \"WebACLId\"\n dimension = \"WebACL\"\n cfn_type = config_type = \"AWS::WAF::WebACL\"\n arn_type = \"webacl\"\n # override defaults to casing issues\n permissions_enum = ('waf:ListWebACLs',)\n permissions_augment = ('waf:GetWebACL',)\n\n\[email protected]('waf-regional')\nclass RegionalWAF(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"waf-regional\"\n enum_spec = (\"list_web_acls\", \"WebACLs\", None)\n detail_spec = (\"get_web_acl\", \"WebACLId\", \"WebACLId\", \"WebACL\")\n name = \"Name\"\n id = \"WebACLId\"\n dimension = \"WebACL\"\n cfn_type = config_type = \"AWS::WAFRegional::WebACL\"\n arn_type = \"webacl\"\n # override defaults to casing issues\n permissions_enum = ('waf-regional:ListWebACLs',)\n permissions_augment = ('waf-regional:GetWebACL',)\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeRegionalWaf,\n 'config': ConfigSource\n }\n\n\[email protected]('wafv2')\nclass WAFV2(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"wafv2\"\n enum_spec = (\"list_web_acls\", \"WebACLs\", None)\n detail_spec = (\"get_web_acl\", \"Id\", \"Id\", \"WebACL\")\n name = \"Name\"\n id = \"Id\"\n dimension = \"WebACL\"\n cfn_type = config_type = \"AWS::WAFv2::WebACL\"\n arn_type = \"webacl\"\n # override defaults to casing issues\n permissions_enum = ('wafv2:ListWebACLs',)\n permissions_augment = ('wafv2:GetWebACL',)\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeWafV2,\n 'config': ConfigSource\n }\n\n\[email protected]_registry.register('logging')\nclass WAFV2LoggingFilter(ValueFilter):\n \"\"\"\n Filter by wafv2 logging configuration\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: wafv2-logging-enabled\n resource: aws.wafv2\n filters:\n - not:\n - type: logging\n key: ResourceArn\n value: present\n\n - name: check-redacted-fields\n resource: aws.wafv2\n filters:\n - type: logging\n key: RedactedFields[].SingleHeader.Name\n value: user-agent\n op: in\n value_type: swap\n \"\"\"\n\n schema = type_schema('logging', rinherit=ValueFilter.schema)\n permissions = ('wafv2:GetLoggingConfiguration', )\n annotation_key = 'c7n:WafV2LoggingConfiguration'\n\n def process(self, resources, event=None):\n client = local_session(self.manager.session_factory).client(\n 'wafv2', region_name=self.manager.region)\n logging_confs = client.list_logging_configurations(\n Scope='REGIONAL')['LoggingConfigurations']\n resource_map = {r['ARN']: r for r in resources}\n for lc in logging_confs:\n if lc['ResourceArn'] in resource_map:\n resource_map[lc['ResourceArn']][self.annotation_key] = lc\n\n resources = list(resource_map.values())\n\n return [\n r for r in resources if self.match(\n r.get(self.annotation_key, {}))]\n", "path": "c7n/resources/waf.py"}]}
3,308
128
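Editor's note on the record above: the added `get_resources` override works because `wafv2.get_web_acl` requires `Name`, `Scope`, and `Id`, while `list_web_acls` needs only `Scope`; resolving CloudTrail-supplied ids therefore has to enumerate and filter client-side. A hedged standalone sketch of that behaviour follows; the region default, the function name, and the omission of `NextMarker` pagination are simplifications made here, not details from the record.

```python
# List-then-filter lookup for WAFv2 web ACLs: list_web_acls only needs Scope,
# so it avoids the 'Missing required parameter in input: "Scope"' failure that
# a per-id get_web_acl call triggers. Pagination (NextMarker) is omitted.
import boto3

def get_wafv2_acls_by_id(ids, region="us-east-1"):
    client = boto3.client("wafv2", region_name=region)
    acls = client.list_web_acls(Scope="REGIONAL")["WebACLs"]
    return [acl for acl in acls if acl["Id"] in ids]
```

Filtering on `Id` mirrors the override's `r[self.manager.resource_type.id] in ids` check, since `resource_type.id` is `"Id"` for this resource.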
gh_patches_debug_12317
rasdani/github-patches
git_diff
ansible-collections__community.aws-283
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- ec2_win_password returns success when it fails to decode the password ### SUMMARY An unsuccessful decode call returns: ``` ok: [localhost] => { "changed": false, "invocation": { "module_args": { [trimmed] } }, "win_password": "" } ``` I would expect it to return a failure state --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `plugins/modules/ec2_win_password.py` Content: ``` 1 #!/usr/bin/python 2 # Copyright: Ansible Project 3 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) 4 5 from __future__ import absolute_import, division, print_function 6 __metaclass__ = type 7 8 9 DOCUMENTATION = ''' 10 --- 11 module: ec2_win_password 12 version_added: 1.0.0 13 short_description: Gets the default administrator password for ec2 windows instances 14 description: 15 - Gets the default administrator password from any EC2 Windows instance. The instance is referenced by its id (e.g. C(i-XXXXXXX)). 16 - This module has a dependency on python-boto. 17 author: "Rick Mendes (@rickmendes)" 18 options: 19 instance_id: 20 description: 21 - The instance id to get the password data from. 22 required: true 23 type: str 24 key_file: 25 description: 26 - Path to the file containing the key pair used on the instance. 27 - Conflicts with I(key_data). 28 required: false 29 type: path 30 key_data: 31 description: 32 - The private key (usually stored in vault). 33 - Conflicts with I(key_file), 34 required: false 35 type: str 36 key_passphrase: 37 description: 38 - The passphrase for the instance key pair. The key must use DES or 3DES encryption for this module to decrypt it. You can use openssl to 39 convert your password protected keys if they do not use DES or 3DES. ex) C(openssl rsa -in current_key -out new_key -des3). 40 type: str 41 wait: 42 description: 43 - Whether or not to wait for the password to be available before returning. 44 type: bool 45 default: false 46 wait_timeout: 47 description: 48 - Number of seconds to wait before giving up. 49 default: 120 50 type: int 51 52 extends_documentation_fragment: 53 - amazon.aws.aws 54 - amazon.aws.ec2 55 56 57 requirements: 58 - cryptography 59 60 notes: 61 - As of Ansible 2.4, this module requires the python cryptography module rather than the 62 older pycrypto module. 
63 ''' 64 65 EXAMPLES = ''' 66 # Example of getting a password 67 - name: get the Administrator password 68 community.aws.ec2_win_password: 69 profile: my-boto-profile 70 instance_id: i-XXXXXX 71 region: us-east-1 72 key_file: "~/aws-creds/my_test_key.pem" 73 74 # Example of getting a password using a variable 75 - name: get the Administrator password 76 community.aws.ec2_win_password: 77 profile: my-boto-profile 78 instance_id: i-XXXXXX 79 region: us-east-1 80 key_data: "{{ ec2_private_key }}" 81 82 # Example of getting a password with a password protected key 83 - name: get the Administrator password 84 community.aws.ec2_win_password: 85 profile: my-boto-profile 86 instance_id: i-XXXXXX 87 region: us-east-1 88 key_file: "~/aws-creds/my_protected_test_key.pem" 89 key_passphrase: "secret" 90 91 # Example of waiting for a password 92 - name: get the Administrator password 93 community.aws.ec2_win_password: 94 profile: my-boto-profile 95 instance_id: i-XXXXXX 96 region: us-east-1 97 key_file: "~/aws-creds/my_test_key.pem" 98 wait: yes 99 wait_timeout: 45 100 ''' 101 102 import datetime 103 import time 104 from base64 import b64decode 105 106 try: 107 from cryptography.hazmat.backends import default_backend 108 from cryptography.hazmat.primitives.asymmetric.padding import PKCS1v15 109 from cryptography.hazmat.primitives.serialization import load_pem_private_key 110 HAS_CRYPTOGRAPHY = True 111 except ImportError: 112 HAS_CRYPTOGRAPHY = False 113 114 from ansible.module_utils._text import to_bytes 115 116 from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule 117 from ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO 118 from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ec2_connect 119 120 121 def setup_module_object(): 122 argument_spec = dict( 123 instance_id=dict(required=True), 124 key_file=dict(required=False, default=None, type='path'), 125 key_passphrase=dict(no_log=True, default=None, required=False), 126 key_data=dict(no_log=True, default=None, required=False), 127 wait=dict(type='bool', default=False, required=False), 128 wait_timeout=dict(default=120, required=False, type='int'), 129 ) 130 module = AnsibleAWSModule(argument_spec=argument_spec) 131 return module 132 133 134 def ec2_win_password(module): 135 instance_id = module.params.get('instance_id') 136 key_file = module.params.get('key_file') 137 if module.params.get('key_passphrase') is None: 138 b_key_passphrase = None 139 else: 140 b_key_passphrase = to_bytes(module.params.get('key_passphrase'), errors='surrogate_or_strict') 141 if module.params.get('key_data') is None: 142 b_key_data = None 143 else: 144 b_key_data = to_bytes(module.params.get('key_data'), errors='surrogate_or_strict') 145 wait = module.params.get('wait') 146 wait_timeout = module.params.get('wait_timeout') 147 148 ec2 = ec2_connect(module) 149 150 if wait: 151 start = datetime.datetime.now() 152 end = start + datetime.timedelta(seconds=wait_timeout) 153 154 while datetime.datetime.now() < end: 155 data = ec2.get_password_data(instance_id) 156 decoded = b64decode(data) 157 if not decoded: 158 time.sleep(5) 159 else: 160 break 161 else: 162 data = ec2.get_password_data(instance_id) 163 decoded = b64decode(data) 164 165 if wait and datetime.datetime.now() >= end: 166 module.fail_json(msg="wait for password timeout after %d seconds" % wait_timeout) 167 168 if key_file is not None and b_key_data is None: 169 try: 170 with open(key_file, 'rb') as f: 171 key = load_pem_private_key(f.read(), 
b_key_passphrase, default_backend()) 172 except IOError as e: 173 # Handle bad files 174 module.fail_json(msg="I/O error (%d) opening key file: %s" % (e.errno, e.strerror)) 175 except (ValueError, TypeError) as e: 176 # Handle issues loading key 177 module.fail_json(msg="unable to parse key file") 178 elif b_key_data is not None and key_file is None: 179 try: 180 key = load_pem_private_key(b_key_data, b_key_passphrase, default_backend()) 181 except (ValueError, TypeError) as e: 182 module.fail_json(msg="unable to parse key data") 183 184 try: 185 decrypted = key.decrypt(decoded, PKCS1v15()) 186 except ValueError as e: 187 decrypted = None 188 189 if decrypted is None: 190 module.exit_json(win_password='', changed=False) 191 else: 192 if wait: 193 elapsed = datetime.datetime.now() - start 194 module.exit_json(win_password=decrypted, changed=True, elapsed=elapsed.seconds) 195 else: 196 module.exit_json(win_password=decrypted, changed=True) 197 198 199 def main(): 200 module = setup_module_object() 201 202 if not HAS_BOTO: 203 module.fail_json(msg='Boto required for this module.') 204 205 if not HAS_CRYPTOGRAPHY: 206 module.fail_json(msg='cryptography package required for this module.') 207 208 ec2_win_password(module) 209 210 211 if __name__ == '__main__': 212 main() 213 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/plugins/modules/ec2_win_password.py b/plugins/modules/ec2_win_password.py --- a/plugins/modules/ec2_win_password.py +++ b/plugins/modules/ec2_win_password.py @@ -187,13 +187,13 @@ decrypted = None if decrypted is None: - module.exit_json(win_password='', changed=False) + module.fail_json(msg="unable to decrypt password", win_password='', changed=False) else: if wait: elapsed = datetime.datetime.now() - start - module.exit_json(win_password=decrypted, changed=True, elapsed=elapsed.seconds) + module.exit_json(win_password=decrypted, changed=False, elapsed=elapsed.seconds) else: - module.exit_json(win_password=decrypted, changed=True) + module.exit_json(win_password=decrypted, changed=False) def main():
{"golden_diff": "diff --git a/plugins/modules/ec2_win_password.py b/plugins/modules/ec2_win_password.py\n--- a/plugins/modules/ec2_win_password.py\n+++ b/plugins/modules/ec2_win_password.py\n@@ -187,13 +187,13 @@\n decrypted = None\n \n if decrypted is None:\n- module.exit_json(win_password='', changed=False)\n+ module.fail_json(msg=\"unable to decrypt password\", win_password='', changed=False)\n else:\n if wait:\n elapsed = datetime.datetime.now() - start\n- module.exit_json(win_password=decrypted, changed=True, elapsed=elapsed.seconds)\n+ module.exit_json(win_password=decrypted, changed=False, elapsed=elapsed.seconds)\n else:\n- module.exit_json(win_password=decrypted, changed=True)\n+ module.exit_json(win_password=decrypted, changed=False)\n \n \n def main():\n", "issue": "ec2_win_password returns success when it fails to decode the password\n### SUMMARY\r\nAn unsuccessful decode call returns:\r\n\r\n```\r\nok: [localhost] => {\r\n \"changed\": false,\r\n \"invocation\": {\r\n \"module_args\": {\r\n [trimmed]\r\n }\r\n },\r\n \"win_password\": \"\"\r\n }\r\n```\r\n\r\nI would expect it to return a failure state\n", "before_files": [{"content": "#!/usr/bin/python\n# Copyright: Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ec2_win_password\nversion_added: 1.0.0\nshort_description: Gets the default administrator password for ec2 windows instances\ndescription:\n - Gets the default administrator password from any EC2 Windows instance. The instance is referenced by its id (e.g. C(i-XXXXXXX)).\n - This module has a dependency on python-boto.\nauthor: \"Rick Mendes (@rickmendes)\"\noptions:\n instance_id:\n description:\n - The instance id to get the password data from.\n required: true\n type: str\n key_file:\n description:\n - Path to the file containing the key pair used on the instance.\n - Conflicts with I(key_data).\n required: false\n type: path\n key_data:\n description:\n - The private key (usually stored in vault).\n - Conflicts with I(key_file),\n required: false\n type: str\n key_passphrase:\n description:\n - The passphrase for the instance key pair. The key must use DES or 3DES encryption for this module to decrypt it. You can use openssl to\n convert your password protected keys if they do not use DES or 3DES. 
ex) C(openssl rsa -in current_key -out new_key -des3).\n type: str\n wait:\n description:\n - Whether or not to wait for the password to be available before returning.\n type: bool\n default: false\n wait_timeout:\n description:\n - Number of seconds to wait before giving up.\n default: 120\n type: int\n\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n\n\nrequirements:\n - cryptography\n\nnotes:\n - As of Ansible 2.4, this module requires the python cryptography module rather than the\n older pycrypto module.\n'''\n\nEXAMPLES = '''\n# Example of getting a password\n- name: get the Administrator password\n community.aws.ec2_win_password:\n profile: my-boto-profile\n instance_id: i-XXXXXX\n region: us-east-1\n key_file: \"~/aws-creds/my_test_key.pem\"\n\n# Example of getting a password using a variable\n- name: get the Administrator password\n community.aws.ec2_win_password:\n profile: my-boto-profile\n instance_id: i-XXXXXX\n region: us-east-1\n key_data: \"{{ ec2_private_key }}\"\n\n# Example of getting a password with a password protected key\n- name: get the Administrator password\n community.aws.ec2_win_password:\n profile: my-boto-profile\n instance_id: i-XXXXXX\n region: us-east-1\n key_file: \"~/aws-creds/my_protected_test_key.pem\"\n key_passphrase: \"secret\"\n\n# Example of waiting for a password\n- name: get the Administrator password\n community.aws.ec2_win_password:\n profile: my-boto-profile\n instance_id: i-XXXXXX\n region: us-east-1\n key_file: \"~/aws-creds/my_test_key.pem\"\n wait: yes\n wait_timeout: 45\n'''\n\nimport datetime\nimport time\nfrom base64 import b64decode\n\ntry:\n from cryptography.hazmat.backends import default_backend\n from cryptography.hazmat.primitives.asymmetric.padding import PKCS1v15\n from cryptography.hazmat.primitives.serialization import load_pem_private_key\n HAS_CRYPTOGRAPHY = True\nexcept ImportError:\n HAS_CRYPTOGRAPHY = False\n\nfrom ansible.module_utils._text import to_bytes\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import ec2_connect\n\n\ndef setup_module_object():\n argument_spec = dict(\n instance_id=dict(required=True),\n key_file=dict(required=False, default=None, type='path'),\n key_passphrase=dict(no_log=True, default=None, required=False),\n key_data=dict(no_log=True, default=None, required=False),\n wait=dict(type='bool', default=False, required=False),\n wait_timeout=dict(default=120, required=False, type='int'),\n )\n module = AnsibleAWSModule(argument_spec=argument_spec)\n return module\n\n\ndef ec2_win_password(module):\n instance_id = module.params.get('instance_id')\n key_file = module.params.get('key_file')\n if module.params.get('key_passphrase') is None:\n b_key_passphrase = None\n else:\n b_key_passphrase = to_bytes(module.params.get('key_passphrase'), errors='surrogate_or_strict')\n if module.params.get('key_data') is None:\n b_key_data = None\n else:\n b_key_data = to_bytes(module.params.get('key_data'), errors='surrogate_or_strict')\n wait = module.params.get('wait')\n wait_timeout = module.params.get('wait_timeout')\n\n ec2 = ec2_connect(module)\n\n if wait:\n start = datetime.datetime.now()\n end = start + datetime.timedelta(seconds=wait_timeout)\n\n while datetime.datetime.now() < end:\n data = ec2.get_password_data(instance_id)\n decoded = b64decode(data)\n if not decoded:\n time.sleep(5)\n else:\n break\n 
else:\n data = ec2.get_password_data(instance_id)\n decoded = b64decode(data)\n\n if wait and datetime.datetime.now() >= end:\n module.fail_json(msg=\"wait for password timeout after %d seconds\" % wait_timeout)\n\n if key_file is not None and b_key_data is None:\n try:\n with open(key_file, 'rb') as f:\n key = load_pem_private_key(f.read(), b_key_passphrase, default_backend())\n except IOError as e:\n # Handle bad files\n module.fail_json(msg=\"I/O error (%d) opening key file: %s\" % (e.errno, e.strerror))\n except (ValueError, TypeError) as e:\n # Handle issues loading key\n module.fail_json(msg=\"unable to parse key file\")\n elif b_key_data is not None and key_file is None:\n try:\n key = load_pem_private_key(b_key_data, b_key_passphrase, default_backend())\n except (ValueError, TypeError) as e:\n module.fail_json(msg=\"unable to parse key data\")\n\n try:\n decrypted = key.decrypt(decoded, PKCS1v15())\n except ValueError as e:\n decrypted = None\n\n if decrypted is None:\n module.exit_json(win_password='', changed=False)\n else:\n if wait:\n elapsed = datetime.datetime.now() - start\n module.exit_json(win_password=decrypted, changed=True, elapsed=elapsed.seconds)\n else:\n module.exit_json(win_password=decrypted, changed=True)\n\n\ndef main():\n module = setup_module_object()\n\n if not HAS_BOTO:\n module.fail_json(msg='Boto required for this module.')\n\n if not HAS_CRYPTOGRAPHY:\n module.fail_json(msg='cryptography package required for this module.')\n\n ec2_win_password(module)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/ec2_win_password.py"}], "after_files": [{"content": "#!/usr/bin/python\n# Copyright: Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ec2_win_password\nversion_added: 1.0.0\nshort_description: Gets the default administrator password for ec2 windows instances\ndescription:\n - Gets the default administrator password from any EC2 Windows instance. The instance is referenced by its id (e.g. C(i-XXXXXXX)).\n - This module has a dependency on python-boto.\nauthor: \"Rick Mendes (@rickmendes)\"\noptions:\n instance_id:\n description:\n - The instance id to get the password data from.\n required: true\n type: str\n key_file:\n description:\n - Path to the file containing the key pair used on the instance.\n - Conflicts with I(key_data).\n required: false\n type: path\n key_data:\n description:\n - The private key (usually stored in vault).\n - Conflicts with I(key_file),\n required: false\n type: str\n key_passphrase:\n description:\n - The passphrase for the instance key pair. The key must use DES or 3DES encryption for this module to decrypt it. You can use openssl to\n convert your password protected keys if they do not use DES or 3DES. 
ex) C(openssl rsa -in current_key -out new_key -des3).\n type: str\n wait:\n description:\n - Whether or not to wait for the password to be available before returning.\n type: bool\n default: false\n wait_timeout:\n description:\n - Number of seconds to wait before giving up.\n default: 120\n type: int\n\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n\n\nrequirements:\n - cryptography\n\nnotes:\n - As of Ansible 2.4, this module requires the python cryptography module rather than the\n older pycrypto module.\n'''\n\nEXAMPLES = '''\n# Example of getting a password\n- name: get the Administrator password\n community.aws.ec2_win_password:\n profile: my-boto-profile\n instance_id: i-XXXXXX\n region: us-east-1\n key_file: \"~/aws-creds/my_test_key.pem\"\n\n# Example of getting a password using a variable\n- name: get the Administrator password\n community.aws.ec2_win_password:\n profile: my-boto-profile\n instance_id: i-XXXXXX\n region: us-east-1\n key_data: \"{{ ec2_private_key }}\"\n\n# Example of getting a password with a password protected key\n- name: get the Administrator password\n community.aws.ec2_win_password:\n profile: my-boto-profile\n instance_id: i-XXXXXX\n region: us-east-1\n key_file: \"~/aws-creds/my_protected_test_key.pem\"\n key_passphrase: \"secret\"\n\n# Example of waiting for a password\n- name: get the Administrator password\n community.aws.ec2_win_password:\n profile: my-boto-profile\n instance_id: i-XXXXXX\n region: us-east-1\n key_file: \"~/aws-creds/my_test_key.pem\"\n wait: yes\n wait_timeout: 45\n'''\n\nimport datetime\nimport time\nfrom base64 import b64decode\n\ntry:\n from cryptography.hazmat.backends import default_backend\n from cryptography.hazmat.primitives.asymmetric.padding import PKCS1v15\n from cryptography.hazmat.primitives.serialization import load_pem_private_key\n HAS_CRYPTOGRAPHY = True\nexcept ImportError:\n HAS_CRYPTOGRAPHY = False\n\nfrom ansible.module_utils._text import to_bytes\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import ec2_connect\n\n\ndef setup_module_object():\n argument_spec = dict(\n instance_id=dict(required=True),\n key_file=dict(required=False, default=None, type='path'),\n key_passphrase=dict(no_log=True, default=None, required=False),\n key_data=dict(no_log=True, default=None, required=False),\n wait=dict(type='bool', default=False, required=False),\n wait_timeout=dict(default=120, required=False, type='int'),\n )\n module = AnsibleAWSModule(argument_spec=argument_spec)\n return module\n\n\ndef ec2_win_password(module):\n instance_id = module.params.get('instance_id')\n key_file = module.params.get('key_file')\n if module.params.get('key_passphrase') is None:\n b_key_passphrase = None\n else:\n b_key_passphrase = to_bytes(module.params.get('key_passphrase'), errors='surrogate_or_strict')\n if module.params.get('key_data') is None:\n b_key_data = None\n else:\n b_key_data = to_bytes(module.params.get('key_data'), errors='surrogate_or_strict')\n wait = module.params.get('wait')\n wait_timeout = module.params.get('wait_timeout')\n\n ec2 = ec2_connect(module)\n\n if wait:\n start = datetime.datetime.now()\n end = start + datetime.timedelta(seconds=wait_timeout)\n\n while datetime.datetime.now() < end:\n data = ec2.get_password_data(instance_id)\n decoded = b64decode(data)\n if not decoded:\n time.sleep(5)\n else:\n break\n 
else:\n data = ec2.get_password_data(instance_id)\n decoded = b64decode(data)\n\n if wait and datetime.datetime.now() >= end:\n module.fail_json(msg=\"wait for password timeout after %d seconds\" % wait_timeout)\n\n if key_file is not None and b_key_data is None:\n try:\n with open(key_file, 'rb') as f:\n key = load_pem_private_key(f.read(), b_key_passphrase, default_backend())\n except IOError as e:\n # Handle bad files\n module.fail_json(msg=\"I/O error (%d) opening key file: %s\" % (e.errno, e.strerror))\n except (ValueError, TypeError) as e:\n # Handle issues loading key\n module.fail_json(msg=\"unable to parse key file\")\n elif b_key_data is not None and key_file is None:\n try:\n key = load_pem_private_key(b_key_data, b_key_passphrase, default_backend())\n except (ValueError, TypeError) as e:\n module.fail_json(msg=\"unable to parse key data\")\n\n try:\n decrypted = key.decrypt(decoded, PKCS1v15())\n except ValueError as e:\n decrypted = None\n\n if decrypted is None:\n module.fail_json(msg=\"unable to decrypt password\", win_password='', changed=False)\n else:\n if wait:\n elapsed = datetime.datetime.now() - start\n module.exit_json(win_password=decrypted, changed=False, elapsed=elapsed.seconds)\n else:\n module.exit_json(win_password=decrypted, changed=False)\n\n\ndef main():\n module = setup_module_object()\n\n if not HAS_BOTO:\n module.fail_json(msg='Boto required for this module.')\n\n if not HAS_CRYPTOGRAPHY:\n module.fail_json(msg='cryptography package required for this module.')\n\n ec2_win_password(module)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/ec2_win_password.py"}]}
2,506
186
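Editor's note on the record above: the fix turns an undecryptable password into a hard `fail_json` instead of an empty-string success, and flips `changed` to `False` because fetching a password mutates nothing. A minimal sketch of the decrypt-or-fail core follows; the function and exception names are illustrative, and the passphrase, if given, is assumed to be bytes.

```python
# Decrypt EC2 Windows password data (base64 + RSA PKCS#1 v1.5) or fail loudly:
# a ValueError from decrypt() means the wrong key was supplied, so raise
# instead of returning an empty string that callers could mistake for success.
from base64 import b64decode

from cryptography.hazmat.primitives.asymmetric.padding import PKCS1v15
from cryptography.hazmat.primitives.serialization import load_pem_private_key

def decrypt_password_data(password_data, pem_bytes, passphrase=None):
    key = load_pem_private_key(pem_bytes, passphrase)
    try:
        return key.decrypt(b64decode(password_data), PKCS1v15())
    except ValueError as exc:
        raise RuntimeError("unable to decrypt password") from exc
```

In the module itself the equivalent of the `raise` is `module.fail_json(msg="unable to decrypt password", ...)`, which is exactly the line the golden diff adds.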
gh_patches_debug_27286
rasdani/github-patches
git_diff
psychopy__psychopy-739
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- gui import from psychopy not working Hi all, I'm trying to run a psychopy script from terminal but I get this error: Traceback (most recent call last): File "nf_test_lastrun.py", line 11, in <module> from psychopy import visual, core, data, event, logging, sound, gui File "/Library/Python/2.7/site-packages/PsychoPy-1.81.00-py2.7.egg/psychopy/gui.py", line 11, in <module> from psychopy.app import localization File "/Library/Python/2.7/site-packages/PsychoPy-1.81.00-py2.7.egg/psychopy/app/localization/__init__.py", line 89, in <module> languageID, lang = getID() File "/Library/Python/2.7/site-packages/PsychoPy-1.81.00-py2.7.egg/psychopy/app/localization/__init__.py", line 78, in getID val = codeFromWxId[wx.LANGUAGE_DEFAULT] KeyError: 0 when I open python and try to import from python, all work but gui. any suggestions? thanks, clemens --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `psychopy/app/localization/__init__.py` Content: ``` 1 #!/usr/bin/env python2 2 # -*- coding: utf-8 -*- 3 4 """Language localization for PsychoPy. 5 6 Sets the locale value as a wx languageID (int) and initializes gettext translation _translate(): 7 from psychopy.app import localization 8 """ 9 10 # Part of the PsychoPy library 11 # Copyright (C) 2014 Jonathan Peirce 12 # Distributed under the terms of the GNU General Public License (GPL). 13 14 # Author: Jeremy Gray, July 2014 15 16 17 import gettext 18 import os, sys, glob, codecs 19 from psychopy import logging, prefs 20 21 import wx 22 23 # need a wx App for wx.Locale: 24 try: 25 wx.Dialog(None, -1) 26 except wx._core.PyNoAppError: 27 if wx.version() < '2.9': 28 tmpApp = wx.PySimpleApp() 29 else: 30 tmpApp = wx.App(False) 31 32 # Get a dict of locale aliases from wx.Locale() -- same cross-platform (Win 7, Mac 10.9) 33 locale = wx.Locale() 34 aliases = {} 35 wxIdFromCode = {} # int: 0, 2-229 36 codeFromWxId = {} # used in directory names e.g. ja_JP; never JPN ala Windows 37 winmap = {} # get windows 3-letter code (=val) from canonical form (=key); use only for setting locale (non-wx) 38 locname = {} # descriptive name, if available; 5-letter code if not 39 reverseMap = {} 40 41 for i in range(230): 42 info = locale.GetLanguageInfo(i) 43 if info: 44 aliases[info.Description] = info.CanonicalName # mix of forms: ja or ja_JP 45 wxIdFromCode[info.CanonicalName] = i 46 codeFromWxId[i] = info.CanonicalName 47 48 mappings = os.path.join(os.path.dirname(__file__), 'mappings.txt') 49 for line in codecs.open(mappings, 'rU', 'utf8').readlines(): 50 try: 51 can, win, name = line.strip().split(' ', 2) # canonical, windows, name-with-spaces 52 except ValueError: 53 can, win = line.strip().split(' ', 1) 54 name = can 55 winmap[can] = win 56 locname[can] = name 57 reverseMap[name] = can 58 59 # what are the available translations? available languages on the OS? 60 expr = os.path.join(os.path.dirname(__file__), '..', 'locale', '*') 61 available = sorted(map(os.path.basename, glob.glob(expr))) 62 sysAvail = [str(l) for l in codeFromWxId.values() # installed language packs 63 if l and locale.IsAvailable(wxIdFromCode[l])] 64 65 def getID(lang=None): 66 """Get wx ID of language to use for translations: `lang`, pref, or system default. 
67 68 `lang` is a 5 char `language_REGION`, eg ja_JP 69 """ 70 if lang: 71 val = lang 72 else: 73 try: 74 val = prefs.app['locale'] 75 except KeyError: 76 val = locale.GetLocale() # wx.Locale, no encoding 77 if not val: 78 val = codeFromWxId[wx.LANGUAGE_DEFAULT] 79 try: 80 # out-dated: [can't set wx.Locale here because no app yet] now there is an app 81 # here just determine the value to be used when it can be set 82 language = wxIdFromCode[val] 83 except KeyError: 84 logging.error('locale %s not known to wx.Locale, using default' % val) 85 language = wx.LANGUAGE_DEFAULT 86 87 return language, val 88 89 languageID, lang = getID() 90 #use lang like this: 91 #import locale -- the non-wx version of locale 92 # 93 #if sys.platform.startswith('win'): 94 # v = winmap[val] 95 #else: v=val 96 #locale.setlocale(locale.LC_ALL, (v, 'UTF-8')) 97 98 # set locale before splash screen: 99 if locale.IsAvailable(languageID): 100 wxlocale = wx.Locale(languageID) 101 else: 102 wxlocale = wx.Locale(wx.LANGUAGE_DEFAULT) 103 104 # ideally rewrite the following using wxlocale only: 105 path = os.path.join(os.path.dirname(__file__), '..', 'locale', lang, 'LC_MESSAGE') + os.sep 106 mofile = os.path.join(path, 'messages.mo') 107 try: 108 logging.debug("Opening message catalog %s for locale %s" % (mofile, lang)) 109 trans = gettext.GNUTranslations(open(mofile, "rb")) 110 except IOError: 111 logging.debug("Locale for '%s' not found. Using default." % lang) 112 trans = gettext.NullTranslations() 113 trans.install(unicode=True) 114 115 # to avoid a crash, PsychoPy app uses a nonstandard name _translate instead of _ 116 # seems like a var in a dependency is named _, clobbering _ as global translation: 117 __builtins__['_translate'] = _ 118 del(__builtins__['_']) # idea: force psychopy code to use _translate 119 120 121 #__builtins__['_'] = wx.GetTranslation 122 # this seems to have no effect, needs more investigation: 123 #path = os.path.join(os.path.dirname(__file__), '..', 'locale', lang, 'LC_MESSAGE') + os.sep 124 #wxlocale.AddCatalogLookupPathPrefix(path) 125 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/psychopy/app/localization/__init__.py b/psychopy/app/localization/__init__.py --- a/psychopy/app/localization/__init__.py +++ b/psychopy/app/localization/__init__.py @@ -31,13 +31,10 @@ # Get a dict of locale aliases from wx.Locale() -- same cross-platform (Win 7, Mac 10.9) locale = wx.Locale() -aliases = {} -wxIdFromCode = {} # int: 0, 2-229 -codeFromWxId = {} # used in directory names e.g. ja_JP; never JPN ala Windows -winmap = {} # get windows 3-letter code (=val) from canonical form (=key); use only for setting locale (non-wx) -locname = {} # descriptive name, if available; 5-letter code if not -reverseMap = {} - +aliases = {u'English (U.S.)': 'en_US'} +# set defaults because locale.GetLanguageInfo(0) can return None on some systems: +wxIdFromCode = {'en_US': wx.LANGUAGE_DEFAULT} # int: 0 default, 2-229 +codeFromWxId = {wx.LANGUAGE_DEFAULT: 'en_US'} # used in directory names e.g. ja_JP; never JPN ala Windows for i in range(230): info = locale.GetLanguageInfo(i) if info: @@ -45,6 +42,10 @@ wxIdFromCode[info.CanonicalName] = i codeFromWxId[i] = info.CanonicalName +# read all known mappings cross-platform from a file: +winmap = {'en_US': 'ENU'} # get windows 3-letter code (=val) from canonical form (=key); use only for setting locale (non-wx) +locname = {'en_US': u'English (U.S.)'} # descriptive name, if available; 5-letter code if not +reverseMap = {u'English (U.S.)': 'en_US'} mappings = os.path.join(os.path.dirname(__file__), 'mappings.txt') for line in codecs.open(mappings, 'rU', 'utf8').readlines(): try:
{"golden_diff": "diff --git a/psychopy/app/localization/__init__.py b/psychopy/app/localization/__init__.py\n--- a/psychopy/app/localization/__init__.py\n+++ b/psychopy/app/localization/__init__.py\n@@ -31,13 +31,10 @@\n \n # Get a dict of locale aliases from wx.Locale() -- same cross-platform (Win 7, Mac 10.9)\n locale = wx.Locale()\n-aliases = {}\n-wxIdFromCode = {} # int: 0, 2-229\n-codeFromWxId = {} # used in directory names e.g. ja_JP; never JPN ala Windows\n-winmap = {} # get windows 3-letter code (=val) from canonical form (=key); use only for setting locale (non-wx)\n-locname = {} # descriptive name, if available; 5-letter code if not\n-reverseMap = {}\n-\n+aliases = {u'English (U.S.)': 'en_US'}\n+# set defaults because locale.GetLanguageInfo(0) can return None on some systems:\n+wxIdFromCode = {'en_US': wx.LANGUAGE_DEFAULT} # int: 0 default, 2-229\n+codeFromWxId = {wx.LANGUAGE_DEFAULT: 'en_US'} # used in directory names e.g. ja_JP; never JPN ala Windows\n for i in range(230):\n info = locale.GetLanguageInfo(i)\n if info:\n@@ -45,6 +42,10 @@\n wxIdFromCode[info.CanonicalName] = i\n codeFromWxId[i] = info.CanonicalName\n \n+# read all known mappings cross-platform from a file:\n+winmap = {'en_US': 'ENU'} # get windows 3-letter code (=val) from canonical form (=key); use only for setting locale (non-wx)\n+locname = {'en_US': u'English (U.S.)'} # descriptive name, if available; 5-letter code if not\n+reverseMap = {u'English (U.S.)': 'en_US'}\n mappings = os.path.join(os.path.dirname(__file__), 'mappings.txt')\n for line in codecs.open(mappings, 'rU', 'utf8').readlines():\n try:\n", "issue": "gui import from psychopy not woriking\nHi all,\n\nim trying to run a psychopy script from terminal but I get this error:\n\nTraceback (most recent call last):\n File \"nf_test_lastrun.py\", line 11, in <module>\n from psychopy import visual, core, data, event, logging, sound, gui\n File \"/Library/Python/2.7/site-packages/PsychoPy-1.81.00-py2.7.egg/psychopy/gui.py\", line 11, in <module>\n from psychopy.app import localization\n File \"/Library/Python/2.7/site-packages/PsychoPy-1.81.00-py2.7.egg/psychopy/app/localization/**init**.py\", line 89, in <module>\n languageID, lang = getID()\n File \"/Library/Python/2.7/site-packages/PsychoPy-1.81.00-py2.7.egg/psychopy/app/localization/**init**.py\", line 78, in getID\n val = codeFromWxId[wx.LANGUAGE_DEFAULT]\nKeyError: 0\n\nwhen I open python and try to import from python, all work but gui.\n\nany suggestions\nthanks\nclemens\n\n", "before_files": [{"content": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\n\"\"\"Language localization for PsychoPy.\n\nSets the locale value as a wx languageID (int) and initializes gettext translation _translate():\n from psychopy.app import localization\n\"\"\"\n\n# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n# Author: Jeremy Gray, July 2014\n\n\nimport gettext\nimport os, sys, glob, codecs\nfrom psychopy import logging, prefs\n\nimport wx\n\n# need a wx App for wx.Locale:\ntry:\n wx.Dialog(None, -1)\nexcept wx._core.PyNoAppError:\n if wx.version() < '2.9':\n tmpApp = wx.PySimpleApp()\n else:\n tmpApp = wx.App(False)\n\n# Get a dict of locale aliases from wx.Locale() -- same cross-platform (Win 7, Mac 10.9)\nlocale = wx.Locale()\naliases = {}\nwxIdFromCode = {} # int: 0, 2-229\ncodeFromWxId = {} # used in directory names e.g. 
ja_JP; never JPN ala Windows\nwinmap = {} # get windows 3-letter code (=val) from canonical form (=key); use only for setting locale (non-wx)\nlocname = {} # descriptive name, if available; 5-letter code if not\nreverseMap = {}\n\nfor i in range(230):\n info = locale.GetLanguageInfo(i)\n if info:\n aliases[info.Description] = info.CanonicalName # mix of forms: ja or ja_JP\n wxIdFromCode[info.CanonicalName] = i\n codeFromWxId[i] = info.CanonicalName\n\nmappings = os.path.join(os.path.dirname(__file__), 'mappings.txt')\nfor line in codecs.open(mappings, 'rU', 'utf8').readlines():\n try:\n can, win, name = line.strip().split(' ', 2) # canonical, windows, name-with-spaces\n except ValueError:\n can, win = line.strip().split(' ', 1)\n name = can\n winmap[can] = win\n locname[can] = name\n reverseMap[name] = can\n\n# what are the available translations? available languages on the OS?\nexpr = os.path.join(os.path.dirname(__file__), '..', 'locale', '*')\navailable = sorted(map(os.path.basename, glob.glob(expr)))\nsysAvail = [str(l) for l in codeFromWxId.values() # installed language packs\n if l and locale.IsAvailable(wxIdFromCode[l])]\n\ndef getID(lang=None):\n \"\"\"Get wx ID of language to use for translations: `lang`, pref, or system default.\n\n `lang` is a 5 char `language_REGION`, eg ja_JP\n \"\"\"\n if lang:\n val = lang\n else:\n try:\n val = prefs.app['locale']\n except KeyError:\n val = locale.GetLocale() # wx.Locale, no encoding\n if not val:\n val = codeFromWxId[wx.LANGUAGE_DEFAULT]\n try:\n # out-dated: [can't set wx.Locale here because no app yet] now there is an app\n # here just determine the value to be used when it can be set\n language = wxIdFromCode[val]\n except KeyError:\n logging.error('locale %s not known to wx.Locale, using default' % val)\n language = wx.LANGUAGE_DEFAULT\n\n return language, val\n\nlanguageID, lang = getID()\n#use lang like this:\n#import locale -- the non-wx version of locale\n#\n#if sys.platform.startswith('win'):\n# v = winmap[val]\n#else: v=val\n#locale.setlocale(locale.LC_ALL, (v, 'UTF-8'))\n\n# set locale before splash screen:\nif locale.IsAvailable(languageID):\n wxlocale = wx.Locale(languageID)\nelse:\n wxlocale = wx.Locale(wx.LANGUAGE_DEFAULT)\n\n# ideally rewrite the following using wxlocale only:\npath = os.path.join(os.path.dirname(__file__), '..', 'locale', lang, 'LC_MESSAGE') + os.sep\nmofile = os.path.join(path, 'messages.mo')\ntry:\n logging.debug(\"Opening message catalog %s for locale %s\" % (mofile, lang))\n trans = gettext.GNUTranslations(open(mofile, \"rb\"))\nexcept IOError:\n logging.debug(\"Locale for '%s' not found. 
Using default.\" % lang)\n trans = gettext.NullTranslations()\ntrans.install(unicode=True)\n\n# to avoid a crash, PsychoPy app uses a nonstandard name _translate instead of _\n# seems like a var in a dependency is named _, clobbering _ as global translation:\n__builtins__['_translate'] = _\ndel(__builtins__['_']) # idea: force psychopy code to use _translate\n\n\n#__builtins__['_'] = wx.GetTranslation\n# this seems to have no effect, needs more investigation:\n#path = os.path.join(os.path.dirname(__file__), '..', 'locale', lang, 'LC_MESSAGE') + os.sep\n#wxlocale.AddCatalogLookupPathPrefix(path)\n", "path": "psychopy/app/localization/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\n\"\"\"Language localization for PsychoPy.\n\nSets the locale value as a wx languageID (int) and initializes gettext translation _translate():\n from psychopy.app import localization\n\"\"\"\n\n# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n# Author: Jeremy Gray, July 2014\n\n\nimport gettext\nimport os, sys, glob, codecs\nfrom psychopy import logging, prefs\n\nimport wx\n\n# need a wx App for wx.Locale:\ntry:\n wx.Dialog(None, -1)\nexcept wx._core.PyNoAppError:\n if wx.version() < '2.9':\n tmpApp = wx.PySimpleApp()\n else:\n tmpApp = wx.App(False)\n\n# Get a dict of locale aliases from wx.Locale() -- same cross-platform (Win 7, Mac 10.9)\nlocale = wx.Locale()\naliases = {u'English (U.S.)': 'en_US'}\n# set defaults because locale.GetLanguageInfo(0) can return None on some systems:\nwxIdFromCode = {'en_US': wx.LANGUAGE_DEFAULT} # int: 0 default, 2-229\ncodeFromWxId = {wx.LANGUAGE_DEFAULT: 'en_US'} # used in directory names e.g. ja_JP; never JPN ala Windows\nfor i in range(230):\n info = locale.GetLanguageInfo(i)\n if info:\n aliases[info.Description] = info.CanonicalName # mix of forms: ja or ja_JP\n wxIdFromCode[info.CanonicalName] = i\n codeFromWxId[i] = info.CanonicalName\n\n# read all known mappings cross-platform from a file:\nwinmap = {'en_US': 'ENU'} # get windows 3-letter code (=val) from canonical form (=key); use only for setting locale (non-wx)\nlocname = {'en_US': u'English (U.S.)'} # descriptive name, if available; 5-letter code if not\nreverseMap = {u'English (U.S.)': 'en_US'}\nmappings = os.path.join(os.path.dirname(__file__), 'mappings.txt')\nfor line in codecs.open(mappings, 'rU', 'utf8').readlines():\n try:\n can, win, name = line.strip().split(' ', 2) # canonical, windows, name-with-spaces\n except ValueError:\n can, win = line.strip().split(' ', 1)\n name = can\n winmap[can] = win\n locname[can] = name\n reverseMap[name] = can\n\n# what are the available translations? 
available languages on the OS?\nexpr = os.path.join(os.path.dirname(__file__), '..', 'locale', '*')\navailable = sorted(map(os.path.basename, glob.glob(expr)))\nsysAvail = [str(l) for l in codeFromWxId.values() # installed language packs\n if l and locale.IsAvailable(wxIdFromCode[l])]\n\ndef getID(lang=None):\n \"\"\"Get wx ID of language to use for translations: `lang`, pref, or system default.\n\n `lang` is a 5 char `language_REGION`, eg ja_JP\n \"\"\"\n if lang:\n val = lang\n else:\n try:\n val = prefs.app['locale']\n except KeyError:\n val = locale.GetLocale() # wx.Locale, no encoding\n if not val:\n val = codeFromWxId[wx.LANGUAGE_DEFAULT]\n try:\n # out-dated: [can't set wx.Locale here because no app yet] now there is an app\n # here just determine the value to be used when it can be set\n language = wxIdFromCode[val]\n except KeyError:\n logging.error('locale %s not known to wx.Locale, using default' % val)\n language = wx.LANGUAGE_DEFAULT\n\n return language, val\n\nlanguageID, lang = getID()\n#use lang like this:\n#import locale -- the non-wx version of locale\n#\n#if sys.platform.startswith('win'):\n# v = winmap[val]\n#else: v=val\n#locale.setlocale(locale.LC_ALL, (v, 'UTF-8'))\n\n# set locale before splash screen:\nif locale.IsAvailable(languageID):\n wxlocale = wx.Locale(languageID)\nelse:\n wxlocale = wx.Locale(wx.LANGUAGE_DEFAULT)\n\n# ideally rewrite the following using wxlocale only:\npath = os.path.join(os.path.dirname(__file__), '..', 'locale', lang, 'LC_MESSAGE') + os.sep\nmofile = os.path.join(path, 'messages.mo')\ntry:\n logging.debug(\"Opening message catalog %s for locale %s\" % (mofile, lang))\n trans = gettext.GNUTranslations(open(mofile, \"rb\"))\nexcept IOError:\n logging.debug(\"Locale for '%s' not found. Using default.\" % lang)\n trans = gettext.NullTranslations()\ntrans.install(unicode=True)\n\n# to avoid a crash, PsychoPy app uses a nonstandard name _translate instead of _\n# seems like a var in a dependency is named _, clobbering _ as global translation:\n__builtins__['_translate'] = _\ndel(__builtins__['_']) # idea: force psychopy code to use _translate\n\n\n#__builtins__['_'] = wx.GetTranslation\n# this seems to have no effect, needs more investigation:\n#path = os.path.join(os.path.dirname(__file__), '..', 'locale', lang, 'LC_MESSAGE') + os.sep\n#wxlocale.AddCatalogLookupPathPrefix(path)\n", "path": "psychopy/app/localization/__init__.py"}]}
1968
507
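Annotation on the record above: the golden_diff works because `wx.Locale.GetLanguageInfo(0)` can return None on some systems, leaving `codeFromWxId` without an entry for `wx.LANGUAGE_DEFAULT` and triggering the reported `KeyError: 0`. A minimal plain-Python sketch of that pre-seeding pattern follows; `get_language_info` and the ids inside it are illustrative stand-ins, not the real wx API:

```python
LANGUAGE_DEFAULT = 0  # stand-in for wx.LANGUAGE_DEFAULT

def get_language_info(i):
    # On some systems the real wx call returns None for id 0, which is
    # exactly what left codeFromWxId without a key for LANGUAGE_DEFAULT.
    known = {45: "en_US", 110: "ja_JP"}  # hypothetical ids
    return known.get(i)

# Seed the maps with a safe default so a later lookup of the default id
# can never raise KeyError, even when the platform reports nothing for it.
wx_id_from_code = {"en_US": LANGUAGE_DEFAULT}
code_from_wx_id = {LANGUAGE_DEFAULT: "en_US"}

for i in range(230):
    code = get_language_info(i)
    if code:
        wx_id_from_code[code] = i
        code_from_wx_id[i] = code

assert code_from_wx_id[LANGUAGE_DEFAULT] == "en_US"  # no KeyError: 0
```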
gh_patches_debug_4447
rasdani/github-patches
git_diff
Mailu__Mailu-1910
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Rate limiting changes for 1.8 Copied from #1582. For 1.8 we will for now increase rate limiting value and disable rate limiting for the subnet. - Rate limiting - Document rate limiting - Currently the subnet is included in the rate limiting. This means that a user who repeatly fails to login the webmail, blocks the webmail for ALL users. - For 1.8 and master - in mailu.env set the rate limit to a high value. - in mailu.env disable the rate limiter for the subnet. - And document this of course and change this in the documentation - Set status blocked on lubs pull request and request to further discuss this for mailu 1.9. - Make authentication fast #1745 is a draft pr from nextgens which contains a solution for this problem. - We need a new issue and PR for making these changes. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `core/admin/mailu/configuration.py` Content: ``` 1 import os 2 3 from datetime import timedelta 4 from socrate import system 5 6 DEFAULT_CONFIG = { 7 # Specific to the admin UI 8 'DOCKER_SOCKET': 'unix:///var/run/docker.sock', 9 'BABEL_DEFAULT_LOCALE': 'en', 10 'BABEL_DEFAULT_TIMEZONE': 'UTC', 11 'BOOTSTRAP_SERVE_LOCAL': True, 12 'RATELIMIT_STORAGE_URL': '', 13 'QUOTA_STORAGE_URL': '', 14 'DEBUG': False, 15 'DOMAIN_REGISTRATION': False, 16 'TEMPLATES_AUTO_RELOAD': True, 17 'MEMORY_SESSIONS': False, 18 # Database settings 19 'DB_FLAVOR': None, 20 'DB_USER': 'mailu', 21 'DB_PW': None, 22 'DB_HOST': 'database', 23 'DB_NAME': 'mailu', 24 'SQLITE_DATABASE_FILE':'data/main.db', 25 'SQLALCHEMY_DATABASE_URI': 'sqlite:////data/main.db', 26 'SQLALCHEMY_TRACK_MODIFICATIONS': False, 27 # Statistics management 28 'INSTANCE_ID_PATH': '/data/instance', 29 'STATS_ENDPOINT': '18.{}.stats.mailu.io', 30 # Common configuration variables 31 'SECRET_KEY': 'changeMe', 32 'DOMAIN': 'mailu.io', 33 'HOSTNAMES': 'mail.mailu.io,alternative.mailu.io,yetanother.mailu.io', 34 'POSTMASTER': 'postmaster', 35 'TLS_FLAVOR': 'cert', 36 'INBOUND_TLS_ENFORCE': False, 37 'AUTH_RATELIMIT': '10/minute;1000/hour', 38 'AUTH_RATELIMIT_SUBNET': True, 39 'DISABLE_STATISTICS': False, 40 # Mail settings 41 'DMARC_RUA': None, 42 'DMARC_RUF': None, 43 'WELCOME': False, 44 'WELCOME_SUBJECT': 'Dummy welcome topic', 45 'WELCOME_BODY': 'Dummy welcome body', 46 'DKIM_SELECTOR': 'dkim', 47 'DKIM_PATH': '/dkim/{domain}.{selector}.key', 48 'DEFAULT_QUOTA': 1000000000, 49 # Web settings 50 'SITENAME': 'Mailu', 51 'WEBSITE': 'https://mailu.io', 52 'WEB_ADMIN': '/admin', 53 'WEB_WEBMAIL': '/webmail', 54 'WEBMAIL': 'none', 55 'RECAPTCHA_PUBLIC_KEY': '', 56 'RECAPTCHA_PRIVATE_KEY': '', 57 # Advanced settings 58 'LOG_LEVEL': 'WARNING', 59 'SESSION_KEY_BITS': 128, 60 'SESSION_LIFETIME': 24, 61 'SESSION_COOKIE_SECURE': True, 62 'CREDENTIAL_ROUNDS': 12, 63 # Host settings 64 'HOST_IMAP': 'imap', 65 'HOST_LMTP': 'imap:2525', 66 'HOST_POP3': 'imap', 67 'HOST_SMTP': 'smtp', 68 'HOST_AUTHSMTP': 'smtp', 69 'HOST_ADMIN': 'admin', 70 'HOST_WEBMAIL': 'webmail', 71 'HOST_WEBDAV': 'webdav:5232', 72 'HOST_REDIS': 'redis', 73 'HOST_FRONT': 'front', 74 'SUBNET': '192.168.203.0/24', 75 'SUBNET6': None, 76 'POD_ADDRESS_RANGE': None 77 } 78 79 class ConfigManager(dict): 80 """ Naive configuration manager that uses environment only 81 """ 82 83 DB_TEMPLATES = { 84 'sqlite': 'sqlite:////{SQLITE_DATABASE_FILE}', 85 'postgresql': 
'postgresql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}', 86 'mysql': 'mysql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}' 87 } 88 89 def __init__(self): 90 self.config = dict() 91 92 def get_host_address(self, name): 93 # if MYSERVICE_ADDRESS is defined, use this 94 if '{}_ADDRESS'.format(name) in os.environ: 95 return os.environ.get('{}_ADDRESS'.format(name)) 96 # otherwise use the host name and resolve it 97 return system.resolve_address(self.config['HOST_{}'.format(name)]) 98 99 def resolve_hosts(self): 100 self.config["IMAP_ADDRESS"] = self.get_host_address("IMAP") 101 self.config["POP3_ADDRESS"] = self.get_host_address("POP3") 102 self.config["AUTHSMTP_ADDRESS"] = self.get_host_address("AUTHSMTP") 103 self.config["SMTP_ADDRESS"] = self.get_host_address("SMTP") 104 self.config["REDIS_ADDRESS"] = self.get_host_address("REDIS") 105 if self.config["WEBMAIL"] != "none": 106 self.config["WEBMAIL_ADDRESS"] = self.get_host_address("WEBMAIL") 107 108 def __get_env(self, key, value): 109 key_file = key + "_FILE" 110 if key_file in os.environ: 111 with open(os.environ.get(key_file)) as file: 112 value_from_file = file.read() 113 return value_from_file.strip() 114 else: 115 return os.environ.get(key, value) 116 117 def __coerce_value(self, value): 118 if isinstance(value, str) and value.lower() in ('true','yes'): 119 return True 120 elif isinstance(value, str) and value.lower() in ('false', 'no'): 121 return False 122 return value 123 124 def init_app(self, app): 125 self.config.update(app.config) 126 # get environment variables 127 self.config.update({ 128 key: self.__coerce_value(self.__get_env(key, value)) 129 for key, value in DEFAULT_CONFIG.items() 130 }) 131 self.resolve_hosts() 132 133 # automatically set the sqlalchemy string 134 if self.config['DB_FLAVOR']: 135 template = self.DB_TEMPLATES[self.config['DB_FLAVOR']] 136 self.config['SQLALCHEMY_DATABASE_URI'] = template.format(**self.config) 137 138 self.config['RATELIMIT_STORAGE_URL'] = 'redis://{0}/2'.format(self.config['REDIS_ADDRESS']) 139 self.config['QUOTA_STORAGE_URL'] = 'redis://{0}/1'.format(self.config['REDIS_ADDRESS']) 140 self.config['SESSION_STORAGE_URL'] = 'redis://{0}/3'.format(self.config['REDIS_ADDRESS']) 141 self.config['SESSION_COOKIE_SAMESITE'] = 'Strict' 142 self.config['SESSION_COOKIE_HTTPONLY'] = True 143 self.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=int(self.config['SESSION_LIFETIME'])) 144 # update the app config itself 145 app.config = self 146 147 def setdefault(self, key, value): 148 if key not in self.config: 149 self.config[key] = value 150 return self.config[key] 151 152 def get(self, *args): 153 return self.config.get(*args) 154 155 def keys(self): 156 return self.config.keys() 157 158 def __getitem__(self, key): 159 return self.config.get(key) 160 161 def __setitem__(self, key, value): 162 self.config[key] = value 163 164 def __contains__(self, key): 165 return key in self.config 166 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/core/admin/mailu/configuration.py b/core/admin/mailu/configuration.py --- a/core/admin/mailu/configuration.py +++ b/core/admin/mailu/configuration.py @@ -34,8 +34,8 @@ 'POSTMASTER': 'postmaster', 'TLS_FLAVOR': 'cert', 'INBOUND_TLS_ENFORCE': False, - 'AUTH_RATELIMIT': '10/minute;1000/hour', - 'AUTH_RATELIMIT_SUBNET': True, + 'AUTH_RATELIMIT': '1000/minute;10000/hour', + 'AUTH_RATELIMIT_SUBNET': False, 'DISABLE_STATISTICS': False, # Mail settings 'DMARC_RUA': None,
{"golden_diff": "diff --git a/core/admin/mailu/configuration.py b/core/admin/mailu/configuration.py\n--- a/core/admin/mailu/configuration.py\n+++ b/core/admin/mailu/configuration.py\n@@ -34,8 +34,8 @@\n 'POSTMASTER': 'postmaster',\n 'TLS_FLAVOR': 'cert',\n 'INBOUND_TLS_ENFORCE': False,\n- 'AUTH_RATELIMIT': '10/minute;1000/hour',\n- 'AUTH_RATELIMIT_SUBNET': True,\n+ 'AUTH_RATELIMIT': '1000/minute;10000/hour',\n+ 'AUTH_RATELIMIT_SUBNET': False,\n 'DISABLE_STATISTICS': False,\n # Mail settings\n 'DMARC_RUA': None,\n", "issue": "Rate limiting changes for 1.8\nCopied from #1582. For 1.8 we will for now increase rate limiting value and disable rate limiting for the subnet. \r\n\r\n- Rate limiting\r\n - Document rate limiting\r\n - Currently the subnet is included in the rate limiting. This means that a user who repeatly fails to login the webmail, blocks the webmail for ALL users.\r\n\r\n- For 1.8 and master\r\n - in mailu.env set the rate limit to a high value.\r\n - in mailu.env disable the rate limiter for the subnet.\r\n - And document this of course and change this in the documentation\r\n - Set status blocked on lubs pull request and request to further discuss this for mailu 1.9.\r\n - Make authentication fast #1745 is a draft pr from nextgens which contains a solution for this problem.\r\n - We need a new issue and PR for making these changes.\n", "before_files": [{"content": "import os\n\nfrom datetime import timedelta\nfrom socrate import system\n\nDEFAULT_CONFIG = {\n # Specific to the admin UI\n 'DOCKER_SOCKET': 'unix:///var/run/docker.sock',\n 'BABEL_DEFAULT_LOCALE': 'en',\n 'BABEL_DEFAULT_TIMEZONE': 'UTC',\n 'BOOTSTRAP_SERVE_LOCAL': True,\n 'RATELIMIT_STORAGE_URL': '',\n 'QUOTA_STORAGE_URL': '',\n 'DEBUG': False,\n 'DOMAIN_REGISTRATION': False,\n 'TEMPLATES_AUTO_RELOAD': True,\n 'MEMORY_SESSIONS': False,\n # Database settings\n 'DB_FLAVOR': None,\n 'DB_USER': 'mailu',\n 'DB_PW': None,\n 'DB_HOST': 'database',\n 'DB_NAME': 'mailu',\n 'SQLITE_DATABASE_FILE':'data/main.db',\n 'SQLALCHEMY_DATABASE_URI': 'sqlite:////data/main.db',\n 'SQLALCHEMY_TRACK_MODIFICATIONS': False,\n # Statistics management\n 'INSTANCE_ID_PATH': '/data/instance',\n 'STATS_ENDPOINT': '18.{}.stats.mailu.io',\n # Common configuration variables\n 'SECRET_KEY': 'changeMe',\n 'DOMAIN': 'mailu.io',\n 'HOSTNAMES': 'mail.mailu.io,alternative.mailu.io,yetanother.mailu.io',\n 'POSTMASTER': 'postmaster',\n 'TLS_FLAVOR': 'cert',\n 'INBOUND_TLS_ENFORCE': False,\n 'AUTH_RATELIMIT': '10/minute;1000/hour',\n 'AUTH_RATELIMIT_SUBNET': True,\n 'DISABLE_STATISTICS': False,\n # Mail settings\n 'DMARC_RUA': None,\n 'DMARC_RUF': None,\n 'WELCOME': False,\n 'WELCOME_SUBJECT': 'Dummy welcome topic',\n 'WELCOME_BODY': 'Dummy welcome body',\n 'DKIM_SELECTOR': 'dkim',\n 'DKIM_PATH': '/dkim/{domain}.{selector}.key',\n 'DEFAULT_QUOTA': 1000000000,\n # Web settings\n 'SITENAME': 'Mailu',\n 'WEBSITE': 'https://mailu.io',\n 'WEB_ADMIN': '/admin',\n 'WEB_WEBMAIL': '/webmail',\n 'WEBMAIL': 'none',\n 'RECAPTCHA_PUBLIC_KEY': '',\n 'RECAPTCHA_PRIVATE_KEY': '',\n # Advanced settings\n 'LOG_LEVEL': 'WARNING',\n 'SESSION_KEY_BITS': 128,\n 'SESSION_LIFETIME': 24,\n 'SESSION_COOKIE_SECURE': True,\n 'CREDENTIAL_ROUNDS': 12,\n # Host settings\n 'HOST_IMAP': 'imap',\n 'HOST_LMTP': 'imap:2525',\n 'HOST_POP3': 'imap',\n 'HOST_SMTP': 'smtp',\n 'HOST_AUTHSMTP': 'smtp',\n 'HOST_ADMIN': 'admin',\n 'HOST_WEBMAIL': 'webmail',\n 'HOST_WEBDAV': 'webdav:5232',\n 'HOST_REDIS': 'redis',\n 'HOST_FRONT': 'front',\n 'SUBNET': '192.168.203.0/24',\n 'SUBNET6': None,\n 
'POD_ADDRESS_RANGE': None\n}\n\nclass ConfigManager(dict):\n \"\"\" Naive configuration manager that uses environment only\n \"\"\"\n\n DB_TEMPLATES = {\n 'sqlite': 'sqlite:////{SQLITE_DATABASE_FILE}',\n 'postgresql': 'postgresql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}',\n 'mysql': 'mysql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}'\n }\n\n def __init__(self):\n self.config = dict()\n\n def get_host_address(self, name):\n # if MYSERVICE_ADDRESS is defined, use this\n if '{}_ADDRESS'.format(name) in os.environ:\n return os.environ.get('{}_ADDRESS'.format(name))\n # otherwise use the host name and resolve it\n return system.resolve_address(self.config['HOST_{}'.format(name)])\n\n def resolve_hosts(self):\n self.config[\"IMAP_ADDRESS\"] = self.get_host_address(\"IMAP\")\n self.config[\"POP3_ADDRESS\"] = self.get_host_address(\"POP3\")\n self.config[\"AUTHSMTP_ADDRESS\"] = self.get_host_address(\"AUTHSMTP\")\n self.config[\"SMTP_ADDRESS\"] = self.get_host_address(\"SMTP\")\n self.config[\"REDIS_ADDRESS\"] = self.get_host_address(\"REDIS\")\n if self.config[\"WEBMAIL\"] != \"none\":\n self.config[\"WEBMAIL_ADDRESS\"] = self.get_host_address(\"WEBMAIL\")\n\n def __get_env(self, key, value):\n key_file = key + \"_FILE\"\n if key_file in os.environ:\n with open(os.environ.get(key_file)) as file:\n value_from_file = file.read()\n return value_from_file.strip()\n else:\n return os.environ.get(key, value)\n\n def __coerce_value(self, value):\n if isinstance(value, str) and value.lower() in ('true','yes'):\n return True\n elif isinstance(value, str) and value.lower() in ('false', 'no'):\n return False\n return value\n\n def init_app(self, app):\n self.config.update(app.config)\n # get environment variables\n self.config.update({\n key: self.__coerce_value(self.__get_env(key, value))\n for key, value in DEFAULT_CONFIG.items()\n })\n self.resolve_hosts()\n\n # automatically set the sqlalchemy string\n if self.config['DB_FLAVOR']:\n template = self.DB_TEMPLATES[self.config['DB_FLAVOR']]\n self.config['SQLALCHEMY_DATABASE_URI'] = template.format(**self.config)\n\n self.config['RATELIMIT_STORAGE_URL'] = 'redis://{0}/2'.format(self.config['REDIS_ADDRESS'])\n self.config['QUOTA_STORAGE_URL'] = 'redis://{0}/1'.format(self.config['REDIS_ADDRESS'])\n self.config['SESSION_STORAGE_URL'] = 'redis://{0}/3'.format(self.config['REDIS_ADDRESS'])\n self.config['SESSION_COOKIE_SAMESITE'] = 'Strict'\n self.config['SESSION_COOKIE_HTTPONLY'] = True\n self.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=int(self.config['SESSION_LIFETIME']))\n # update the app config itself\n app.config = self\n\n def setdefault(self, key, value):\n if key not in self.config:\n self.config[key] = value\n return self.config[key]\n\n def get(self, *args):\n return self.config.get(*args)\n\n def keys(self):\n return self.config.keys()\n\n def __getitem__(self, key):\n return self.config.get(key)\n\n def __setitem__(self, key, value):\n self.config[key] = value\n\n def __contains__(self, key):\n return key in self.config\n", "path": "core/admin/mailu/configuration.py"}], "after_files": [{"content": "import os\n\nfrom datetime import timedelta\nfrom socrate import system\n\nDEFAULT_CONFIG = {\n # Specific to the admin UI\n 'DOCKER_SOCKET': 'unix:///var/run/docker.sock',\n 'BABEL_DEFAULT_LOCALE': 'en',\n 'BABEL_DEFAULT_TIMEZONE': 'UTC',\n 'BOOTSTRAP_SERVE_LOCAL': True,\n 'RATELIMIT_STORAGE_URL': '',\n 'QUOTA_STORAGE_URL': '',\n 'DEBUG': False,\n 'DOMAIN_REGISTRATION': False,\n 'TEMPLATES_AUTO_RELOAD': True,\n 'MEMORY_SESSIONS': False,\n # 
Database settings\n 'DB_FLAVOR': None,\n 'DB_USER': 'mailu',\n 'DB_PW': None,\n 'DB_HOST': 'database',\n 'DB_NAME': 'mailu',\n 'SQLITE_DATABASE_FILE':'data/main.db',\n 'SQLALCHEMY_DATABASE_URI': 'sqlite:////data/main.db',\n 'SQLALCHEMY_TRACK_MODIFICATIONS': False,\n # Statistics management\n 'INSTANCE_ID_PATH': '/data/instance',\n 'STATS_ENDPOINT': '18.{}.stats.mailu.io',\n # Common configuration variables\n 'SECRET_KEY': 'changeMe',\n 'DOMAIN': 'mailu.io',\n 'HOSTNAMES': 'mail.mailu.io,alternative.mailu.io,yetanother.mailu.io',\n 'POSTMASTER': 'postmaster',\n 'TLS_FLAVOR': 'cert',\n 'INBOUND_TLS_ENFORCE': False,\n 'AUTH_RATELIMIT': '1000/minute;10000/hour',\n 'AUTH_RATELIMIT_SUBNET': False,\n 'DISABLE_STATISTICS': False,\n # Mail settings\n 'DMARC_RUA': None,\n 'DMARC_RUF': None,\n 'WELCOME': False,\n 'WELCOME_SUBJECT': 'Dummy welcome topic',\n 'WELCOME_BODY': 'Dummy welcome body',\n 'DKIM_SELECTOR': 'dkim',\n 'DKIM_PATH': '/dkim/{domain}.{selector}.key',\n 'DEFAULT_QUOTA': 1000000000,\n # Web settings\n 'SITENAME': 'Mailu',\n 'WEBSITE': 'https://mailu.io',\n 'WEB_ADMIN': '/admin',\n 'WEB_WEBMAIL': '/webmail',\n 'WEBMAIL': 'none',\n 'RECAPTCHA_PUBLIC_KEY': '',\n 'RECAPTCHA_PRIVATE_KEY': '',\n # Advanced settings\n 'LOG_LEVEL': 'WARNING',\n 'SESSION_KEY_BITS': 128,\n 'SESSION_LIFETIME': 24,\n 'SESSION_COOKIE_SECURE': True,\n 'CREDENTIAL_ROUNDS': 12,\n # Host settings\n 'HOST_IMAP': 'imap',\n 'HOST_LMTP': 'imap:2525',\n 'HOST_POP3': 'imap',\n 'HOST_SMTP': 'smtp',\n 'HOST_AUTHSMTP': 'smtp',\n 'HOST_ADMIN': 'admin',\n 'HOST_WEBMAIL': 'webmail',\n 'HOST_WEBDAV': 'webdav:5232',\n 'HOST_REDIS': 'redis',\n 'HOST_FRONT': 'front',\n 'SUBNET': '192.168.203.0/24',\n 'SUBNET6': None,\n 'POD_ADDRESS_RANGE': None\n}\n\nclass ConfigManager(dict):\n \"\"\" Naive configuration manager that uses environment only\n \"\"\"\n\n DB_TEMPLATES = {\n 'sqlite': 'sqlite:////{SQLITE_DATABASE_FILE}',\n 'postgresql': 'postgresql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}',\n 'mysql': 'mysql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}'\n }\n\n def __init__(self):\n self.config = dict()\n\n def get_host_address(self, name):\n # if MYSERVICE_ADDRESS is defined, use this\n if '{}_ADDRESS'.format(name) in os.environ:\n return os.environ.get('{}_ADDRESS'.format(name))\n # otherwise use the host name and resolve it\n return system.resolve_address(self.config['HOST_{}'.format(name)])\n\n def resolve_hosts(self):\n self.config[\"IMAP_ADDRESS\"] = self.get_host_address(\"IMAP\")\n self.config[\"POP3_ADDRESS\"] = self.get_host_address(\"POP3\")\n self.config[\"AUTHSMTP_ADDRESS\"] = self.get_host_address(\"AUTHSMTP\")\n self.config[\"SMTP_ADDRESS\"] = self.get_host_address(\"SMTP\")\n self.config[\"REDIS_ADDRESS\"] = self.get_host_address(\"REDIS\")\n if self.config[\"WEBMAIL\"] != \"none\":\n self.config[\"WEBMAIL_ADDRESS\"] = self.get_host_address(\"WEBMAIL\")\n\n def __get_env(self, key, value):\n key_file = key + \"_FILE\"\n if key_file in os.environ:\n with open(os.environ.get(key_file)) as file:\n value_from_file = file.read()\n return value_from_file.strip()\n else:\n return os.environ.get(key, value)\n\n def __coerce_value(self, value):\n if isinstance(value, str) and value.lower() in ('true','yes'):\n return True\n elif isinstance(value, str) and value.lower() in ('false', 'no'):\n return False\n return value\n\n def init_app(self, app):\n self.config.update(app.config)\n # get environment variables\n self.config.update({\n key: self.__coerce_value(self.__get_env(key, value))\n for key, value in DEFAULT_CONFIG.items()\n })\n 
self.resolve_hosts()\n\n # automatically set the sqlalchemy string\n if self.config['DB_FLAVOR']:\n template = self.DB_TEMPLATES[self.config['DB_FLAVOR']]\n self.config['SQLALCHEMY_DATABASE_URI'] = template.format(**self.config)\n\n self.config['RATELIMIT_STORAGE_URL'] = 'redis://{0}/2'.format(self.config['REDIS_ADDRESS'])\n self.config['QUOTA_STORAGE_URL'] = 'redis://{0}/1'.format(self.config['REDIS_ADDRESS'])\n self.config['SESSION_STORAGE_URL'] = 'redis://{0}/3'.format(self.config['REDIS_ADDRESS'])\n self.config['SESSION_COOKIE_SAMESITE'] = 'Strict'\n self.config['SESSION_COOKIE_HTTPONLY'] = True\n self.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=int(self.config['SESSION_LIFETIME']))\n # update the app config itself\n app.config = self\n\n def setdefault(self, key, value):\n if key not in self.config:\n self.config[key] = value\n return self.config[key]\n\n def get(self, *args):\n return self.config.get(*args)\n\n def keys(self):\n return self.config.keys()\n\n def __getitem__(self, key):\n return self.config.get(key)\n\n def __setitem__(self, key, value):\n self.config[key] = value\n\n def __contains__(self, key):\n return key in self.config\n", "path": "core/admin/mailu/configuration.py"}]}
2349
163
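The Mailu record above only changes two entries in `DEFAULT_CONFIG`, but the effective values still come from the environment-override convention of `ConfigManager` shown in the prompt. A short runnable sketch of that interaction, assuming the same string-coercion rules; `DEFAULTS` and `setting()` are illustrative names, not Mailu's actual module layout:

```python
import os

DEFAULTS = {
    "AUTH_RATELIMIT": "1000/minute;10000/hour",  # raised from 10/minute;1000/hour
    "AUTH_RATELIMIT_SUBNET": False,              # subnet no longer rate limited
}

def coerce(value):
    # mirrors the coercion in ConfigManager above: env vars arrive as strings
    if isinstance(value, str) and value.lower() in ("true", "yes"):
        return True
    if isinstance(value, str) and value.lower() in ("false", "no"):
        return False
    return value

def setting(key):
    return coerce(os.environ.get(key, DEFAULTS[key]))

os.environ["AUTH_RATELIMIT_SUBNET"] = "yes"     # operator opt-in via mailu.env
assert setting("AUTH_RATELIMIT_SUBNET") is True
del os.environ["AUTH_RATELIMIT_SUBNET"]
assert setting("AUTH_RATELIMIT_SUBNET") is False  # new default applies
```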
gh_patches_debug_33740
rasdani/github-patches
git_diff
hpcaitech__ColossalAI-1437
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [BUG]: FreqCacheEmbeddingBag._weight.ProcessGroup is initialized before _weight initialized ### 🐛 Describe the bug When I init a DLRM with ParallelFreqAwareEmbeddingBag, the bug is reported as the following: ![image](https://user-images.githubusercontent.com/34452939/184055340-c99579bf-c9df-43ed-8552-7c9ae7aadffb.png) I believe that is because the [ParallelFreqAwareEmbeddingBag](https://github.com/hpcaitech/ColossalAI/blob/039b7ed3bc33173e36c5c4decd41f8d7b1ec0f45/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py#L60) init its `_weight.ProcessGroup` before the `_weight` is initialized. After I swap above line with its next line, the traceback shows another error: ![image](https://user-images.githubusercontent.com/34452939/184055612-de402e67-ac19-4860-ac12-9622f88a6aca.png) It looks like some api update issue. ### Environment _No response_ --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py` Content: ``` 1 import torch 2 import torch.nn.functional as F 3 from typing import List, Optional, Iterator, Tuple 4 5 from .base_embedding import BaseEmbeddingBag 6 from .cache_mgr import CachedParamMgr 7 from torch.nn.parameter import Parameter 8 from .._utils import dual_all_to_all 9 10 from colossalai.tensor import ColoParameter, ShardSpec, ComputeSpec, ComputePattern, ProcessGroup 11 12 13 def get_partition(embedding_dim, rank, world_size) -> Tuple[int, int, bool]: 14 if world_size == 1: 15 return 0, embedding_dim, True 16 17 assert embedding_dim >= world_size, \ 18 f"Embedding dimension {embedding_dim} must be larger than the world size " \ 19 f"{world_size} of the process group" 20 chunk_size = embedding_dim // world_size 21 threshold = embedding_dim % world_size 22 # if embedding dim is divisible by world size 23 if threshold == 0: 24 return rank * chunk_size, (rank + 1) * chunk_size, True 25 26 # align with the split strategy of torch.tensor_split 27 size_list = [chunk_size + 1 if i < threshold else chunk_size for i in range(world_size)] 28 offset = sum(size_list[:rank]) 29 return offset, offset + size_list[rank], False 30 31 32 class ParallelFreqAwareEmbeddingBag(BaseEmbeddingBag): 33 34 def __init__(self, 35 num_embeddings, 36 embedding_dim, 37 padding_idx=None, 38 max_norm=None, 39 norm_type=2., 40 scale_grad_by_freq=False, 41 sparse=False, 42 _weight=None, 43 mode='mean', 44 include_last_offset=False, 45 dtype=None, 46 debug=True): 47 super(ParallelFreqAwareEmbeddingBag, 48 self).__init__(num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq, 49 sparse, mode, include_last_offset) 50 51 self.rank = torch.distributed.get_rank() 52 self.world_size = torch.distributed.get_world_size() 53 self.debug = debug 54 55 self.partition_start_index, self.partition_end_index, divisible = get_partition( 56 embedding_dim, self.rank, self.world_size) 57 self.embedding_dim_per_partition = self.partition_end_index - self.partition_start_index 58 59 if _weight is None: 60 self._weight.process_group = ProcessGroup(tp_degree=self.world_size) 61 self._weight = ColoParameter.from_torch_tensor(torch.empty(self.num_embeddings, 62 self.embedding_dim_per_partition, 63 device='cpu', 64 dtype=dtype), 65 requires_grad=True, 66 spec=ShardSpec(dims=[-1], num_partitions=[self.world_size])) 67 
self.init_parameters() 68 else: 69 assert isinstance(_weight, ColoParameter), "initialized weight must in type of ColoParameter" 70 self._weight = _weight 71 72 @property 73 def weight(self): 74 return self.cache_weight_mgr.cpu_weight 75 76 def named_parameters(self, prefix: str = '', recurse: bool = True) -> Iterator[Tuple[str, Parameter]]: 77 yield 'weight', self.cache_weight_mgr.cuda_cached_weight 78 79 def parameters(self, recurse: bool = True) -> Iterator[Parameter]: 80 yield self.cache_weight_mgr.cuda_cached_weight 81 82 @torch.no_grad() 83 def init_parameters(self): 84 self._weight.data.uniform_(-1 / self.num_embeddings, 1 / self.num_embeddings) 85 if self.padding_idx is not None: 86 self._weight[self.padding_idx].fill_(0) 87 88 def preprocess(self, 89 cuda_row_num: int, 90 ids_freq_mapping: Optional[List[int]] = None, 91 warmup_ratio: float = 0.7, 92 buffer_size: int = 50_000): 93 self.cache_weight_mgr = CachedParamMgr(self._weight, cuda_row_num, buffer_size=buffer_size) 94 self.cache_weight_mgr.reorder(ids_freq_mapping, warmup_ratio) 95 96 def forward(self, indices, offsets=None, per_sample_weights=None, shape_hook=None, scatter_dim=0, gather_dim=-1): 97 with torch.no_grad(): 98 reorder_ids = self.cache_weight_mgr.prepare_ids(indices) 99 100 output_shard = F.embedding_bag(reorder_ids, self.cache_weight_mgr.cuda_cached_weight, offsets, self.max_norm, 101 self.norm_type, self.scale_grad_by_freq, self.mode, self.sparse, 102 per_sample_weights, self.include_last_offset, self.padding_idx) 103 104 if shape_hook is not None: 105 output_shard = shape_hook(output_shard) 106 107 output_full = dual_all_to_all(output_shard, 108 self._weight.get_process_group(), 109 scatter_dim=scatter_dim, 110 gather_dim=gather_dim) 111 return output_full 112 113 @classmethod 114 def from_pretrained(cls, 115 embedding: torch.Tensor, 116 freeze: bool = True, 117 padding_idx: Optional[int] = None, 118 max_norm: Optional[float] = None, 119 norm_type: float = 2., 120 scale_grad_by_freq: bool = False, 121 sparse: bool = False, 122 mode: str = 'mean', 123 include_last_offset: bool = False, 124 debug: bool = True, 125 cuda_row_num: int = 100_000, 126 ids_freq_mapping: Optional[List[int]] = None, 127 warmup_ratio: float = 0.7) -> 'ParallelFreqAwareEmbeddingBag': 128 rows, cols = embedding.shape 129 embedding_bag = cls(rows, cols, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, embedding, mode, 130 include_last_offset, debug) 131 embedding_bag.preprocess(cuda_row_num, ids_freq_mapping, warmup_ratio) 132 embedding_bag.cache_weight_mgr.cuda_cached_weight.requires_grad_ = not freeze 133 return embedding_bag 134 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py b/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py --- a/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py +++ b/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py @@ -7,7 +7,7 @@ from torch.nn.parameter import Parameter from .._utils import dual_all_to_all -from colossalai.tensor import ColoParameter, ShardSpec, ComputeSpec, ComputePattern, ProcessGroup +from colossalai.tensor import ColoParameter, ShardSpec, ComputeSpec, ComputePattern, ProcessGroup, ColoTensorSpec def get_partition(embedding_dim, rank, world_size) -> Tuple[int, int, bool]: @@ -57,13 +57,15 @@ self.embedding_dim_per_partition = self.partition_end_index - self.partition_start_index if _weight is None: - self._weight.process_group = ProcessGroup(tp_degree=self.world_size) + colo_tensor_spec = ColoTensorSpec(pg=ProcessGroup(tp_degree=self.world_size), + dist_attr=ShardSpec(dims=[-1], num_partitions=[self.world_size]), + compute_attr=ComputePattern.TP1D) self._weight = ColoParameter.from_torch_tensor(torch.empty(self.num_embeddings, self.embedding_dim_per_partition, device='cpu', dtype=dtype), requires_grad=True, - spec=ShardSpec(dims=[-1], num_partitions=[self.world_size])) + spec=colo_tensor_spec) self.init_parameters() else: assert isinstance(_weight, ColoParameter), "initialized weight must in type of ColoParameter"
{"golden_diff": "diff --git a/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py b/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py\n--- a/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py\n+++ b/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py\n@@ -7,7 +7,7 @@\n from torch.nn.parameter import Parameter\n from .._utils import dual_all_to_all\n \n-from colossalai.tensor import ColoParameter, ShardSpec, ComputeSpec, ComputePattern, ProcessGroup\n+from colossalai.tensor import ColoParameter, ShardSpec, ComputeSpec, ComputePattern, ProcessGroup, ColoTensorSpec\n \n \n def get_partition(embedding_dim, rank, world_size) -> Tuple[int, int, bool]:\n@@ -57,13 +57,15 @@\n self.embedding_dim_per_partition = self.partition_end_index - self.partition_start_index\n \n if _weight is None:\n- self._weight.process_group = ProcessGroup(tp_degree=self.world_size)\n+ colo_tensor_spec = ColoTensorSpec(pg=ProcessGroup(tp_degree=self.world_size),\n+ dist_attr=ShardSpec(dims=[-1], num_partitions=[self.world_size]),\n+ compute_attr=ComputePattern.TP1D)\n self._weight = ColoParameter.from_torch_tensor(torch.empty(self.num_embeddings,\n self.embedding_dim_per_partition,\n device='cpu',\n dtype=dtype),\n requires_grad=True,\n- spec=ShardSpec(dims=[-1], num_partitions=[self.world_size]))\n+ spec=colo_tensor_spec)\n self.init_parameters()\n else:\n assert isinstance(_weight, ColoParameter), \"initialized weight must in type of ColoParameter\"\n", "issue": "[BUG]: FreqCacheEmbeddingBag._weight.ProcessGroup is initialized before _weight initialized\n### \ud83d\udc1b Describe the bug\n\nWhen I init a DLRM with ParallelFreqAwareEmbeddingBag, the bug is reported as the following:\r\n![image](https://user-images.githubusercontent.com/34452939/184055340-c99579bf-c9df-43ed-8552-7c9ae7aadffb.png)\r\n\r\n\r\nI believe that is because the [ParallelFreqAwareEmbeddingBag](https://github.com/hpcaitech/ColossalAI/blob/039b7ed3bc33173e36c5c4decd41f8d7b1ec0f45/colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py#L60) init its `_weight.ProcessGroup` before the `_weight` is initialized.\r\n\r\nAfter I swap above line with its next line, the traceback shows another error:\r\n![image](https://user-images.githubusercontent.com/34452939/184055612-de402e67-ac19-4860-ac12-9622f88a6aca.png)\r\n\r\nIt looks like some api update issue.\n\n### Environment\n\n_No response_\n", "before_files": [{"content": "import torch\nimport torch.nn.functional as F\nfrom typing import List, Optional, Iterator, Tuple\n\nfrom .base_embedding import BaseEmbeddingBag\nfrom .cache_mgr import CachedParamMgr\nfrom torch.nn.parameter import Parameter\nfrom .._utils import dual_all_to_all\n\nfrom colossalai.tensor import ColoParameter, ShardSpec, ComputeSpec, ComputePattern, ProcessGroup\n\n\ndef get_partition(embedding_dim, rank, world_size) -> Tuple[int, int, bool]:\n if world_size == 1:\n return 0, embedding_dim, True\n\n assert embedding_dim >= world_size, \\\n f\"Embedding dimension {embedding_dim} must be larger than the world size \" \\\n f\"{world_size} of the process group\"\n chunk_size = embedding_dim // world_size\n threshold = embedding_dim % world_size\n # if embedding dim is divisible by world size\n if threshold == 0:\n return rank * chunk_size, (rank + 1) * chunk_size, True\n\n # align with the split strategy of torch.tensor_split\n size_list = [chunk_size + 1 if i < threshold else chunk_size for i in range(world_size)]\n offset = sum(size_list[:rank])\n return 
offset, offset + size_list[rank], False\n\n\nclass ParallelFreqAwareEmbeddingBag(BaseEmbeddingBag):\n\n def __init__(self,\n num_embeddings,\n embedding_dim,\n padding_idx=None,\n max_norm=None,\n norm_type=2.,\n scale_grad_by_freq=False,\n sparse=False,\n _weight=None,\n mode='mean',\n include_last_offset=False,\n dtype=None,\n debug=True):\n super(ParallelFreqAwareEmbeddingBag,\n self).__init__(num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq,\n sparse, mode, include_last_offset)\n\n self.rank = torch.distributed.get_rank()\n self.world_size = torch.distributed.get_world_size()\n self.debug = debug\n\n self.partition_start_index, self.partition_end_index, divisible = get_partition(\n embedding_dim, self.rank, self.world_size)\n self.embedding_dim_per_partition = self.partition_end_index - self.partition_start_index\n\n if _weight is None:\n self._weight.process_group = ProcessGroup(tp_degree=self.world_size)\n self._weight = ColoParameter.from_torch_tensor(torch.empty(self.num_embeddings,\n self.embedding_dim_per_partition,\n device='cpu',\n dtype=dtype),\n requires_grad=True,\n spec=ShardSpec(dims=[-1], num_partitions=[self.world_size]))\n self.init_parameters()\n else:\n assert isinstance(_weight, ColoParameter), \"initialized weight must in type of ColoParameter\"\n self._weight = _weight\n\n @property\n def weight(self):\n return self.cache_weight_mgr.cpu_weight\n\n def named_parameters(self, prefix: str = '', recurse: bool = True) -> Iterator[Tuple[str, Parameter]]:\n yield 'weight', self.cache_weight_mgr.cuda_cached_weight\n\n def parameters(self, recurse: bool = True) -> Iterator[Parameter]:\n yield self.cache_weight_mgr.cuda_cached_weight\n\n @torch.no_grad()\n def init_parameters(self):\n self._weight.data.uniform_(-1 / self.num_embeddings, 1 / self.num_embeddings)\n if self.padding_idx is not None:\n self._weight[self.padding_idx].fill_(0)\n\n def preprocess(self,\n cuda_row_num: int,\n ids_freq_mapping: Optional[List[int]] = None,\n warmup_ratio: float = 0.7,\n buffer_size: int = 50_000):\n self.cache_weight_mgr = CachedParamMgr(self._weight, cuda_row_num, buffer_size=buffer_size)\n self.cache_weight_mgr.reorder(ids_freq_mapping, warmup_ratio)\n\n def forward(self, indices, offsets=None, per_sample_weights=None, shape_hook=None, scatter_dim=0, gather_dim=-1):\n with torch.no_grad():\n reorder_ids = self.cache_weight_mgr.prepare_ids(indices)\n\n output_shard = F.embedding_bag(reorder_ids, self.cache_weight_mgr.cuda_cached_weight, offsets, self.max_norm,\n self.norm_type, self.scale_grad_by_freq, self.mode, self.sparse,\n per_sample_weights, self.include_last_offset, self.padding_idx)\n\n if shape_hook is not None:\n output_shard = shape_hook(output_shard)\n\n output_full = dual_all_to_all(output_shard,\n self._weight.get_process_group(),\n scatter_dim=scatter_dim,\n gather_dim=gather_dim)\n return output_full\n\n @classmethod\n def from_pretrained(cls,\n embedding: torch.Tensor,\n freeze: bool = True,\n padding_idx: Optional[int] = None,\n max_norm: Optional[float] = None,\n norm_type: float = 2.,\n scale_grad_by_freq: bool = False,\n sparse: bool = False,\n mode: str = 'mean',\n include_last_offset: bool = False,\n debug: bool = True,\n cuda_row_num: int = 100_000,\n ids_freq_mapping: Optional[List[int]] = None,\n warmup_ratio: float = 0.7) -> 'ParallelFreqAwareEmbeddingBag':\n rows, cols = embedding.shape\n embedding_bag = cls(rows, cols, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, embedding, mode,\n include_last_offset, 
debug)\n embedding_bag.preprocess(cuda_row_num, ids_freq_mapping, warmup_ratio)\n embedding_bag.cache_weight_mgr.cuda_cached_weight.requires_grad_ = not freeze\n return embedding_bag\n", "path": "colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py"}], "after_files": [{"content": "import torch\nimport torch.nn.functional as F\nfrom typing import List, Optional, Iterator, Tuple\n\nfrom .base_embedding import BaseEmbeddingBag\nfrom .cache_mgr import CachedParamMgr\nfrom torch.nn.parameter import Parameter\nfrom .._utils import dual_all_to_all\n\nfrom colossalai.tensor import ColoParameter, ShardSpec, ComputeSpec, ComputePattern, ProcessGroup, ColoTensorSpec\n\n\ndef get_partition(embedding_dim, rank, world_size) -> Tuple[int, int, bool]:\n if world_size == 1:\n return 0, embedding_dim, True\n\n assert embedding_dim >= world_size, \\\n f\"Embedding dimension {embedding_dim} must be larger than the world size \" \\\n f\"{world_size} of the process group\"\n chunk_size = embedding_dim // world_size\n threshold = embedding_dim % world_size\n # if embedding dim is divisible by world size\n if threshold == 0:\n return rank * chunk_size, (rank + 1) * chunk_size, True\n\n # align with the split strategy of torch.tensor_split\n size_list = [chunk_size + 1 if i < threshold else chunk_size for i in range(world_size)]\n offset = sum(size_list[:rank])\n return offset, offset + size_list[rank], False\n\n\nclass ParallelFreqAwareEmbeddingBag(BaseEmbeddingBag):\n\n def __init__(self,\n num_embeddings,\n embedding_dim,\n padding_idx=None,\n max_norm=None,\n norm_type=2.,\n scale_grad_by_freq=False,\n sparse=False,\n _weight=None,\n mode='mean',\n include_last_offset=False,\n dtype=None,\n debug=True):\n super(ParallelFreqAwareEmbeddingBag,\n self).__init__(num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq,\n sparse, mode, include_last_offset)\n\n self.rank = torch.distributed.get_rank()\n self.world_size = torch.distributed.get_world_size()\n self.debug = debug\n\n self.partition_start_index, self.partition_end_index, divisible = get_partition(\n embedding_dim, self.rank, self.world_size)\n self.embedding_dim_per_partition = self.partition_end_index - self.partition_start_index\n\n if _weight is None:\n colo_tensor_spec = ColoTensorSpec(pg=ProcessGroup(tp_degree=self.world_size),\n dist_attr=ShardSpec(dims=[-1], num_partitions=[self.world_size]),\n compute_attr=ComputePattern.TP1D)\n self._weight = ColoParameter.from_torch_tensor(torch.empty(self.num_embeddings,\n self.embedding_dim_per_partition,\n device='cpu',\n dtype=dtype),\n requires_grad=True,\n spec=colo_tensor_spec)\n self.init_parameters()\n else:\n assert isinstance(_weight, ColoParameter), \"initialized weight must in type of ColoParameter\"\n self._weight = _weight\n\n @property\n def weight(self):\n return self.cache_weight_mgr.cpu_weight\n\n def named_parameters(self, prefix: str = '', recurse: bool = True) -> Iterator[Tuple[str, Parameter]]:\n yield 'weight', self.cache_weight_mgr.cuda_cached_weight\n\n def parameters(self, recurse: bool = True) -> Iterator[Parameter]:\n yield self.cache_weight_mgr.cuda_cached_weight\n\n @torch.no_grad()\n def init_parameters(self):\n self._weight.data.uniform_(-1 / self.num_embeddings, 1 / self.num_embeddings)\n if self.padding_idx is not None:\n self._weight[self.padding_idx].fill_(0)\n\n def preprocess(self,\n cuda_row_num: int,\n ids_freq_mapping: Optional[List[int]] = None,\n warmup_ratio: float = 0.7,\n buffer_size: int = 50_000):\n self.cache_weight_mgr 
= CachedParamMgr(self._weight, cuda_row_num, buffer_size=buffer_size)\n self.cache_weight_mgr.reorder(ids_freq_mapping, warmup_ratio)\n\n def forward(self, indices, offsets=None, per_sample_weights=None, shape_hook=None, scatter_dim=0, gather_dim=-1):\n with torch.no_grad():\n reorder_ids = self.cache_weight_mgr.prepare_ids(indices)\n\n output_shard = F.embedding_bag(reorder_ids, self.cache_weight_mgr.cuda_cached_weight, offsets, self.max_norm,\n self.norm_type, self.scale_grad_by_freq, self.mode, self.sparse,\n per_sample_weights, self.include_last_offset, self.padding_idx)\n\n if shape_hook is not None:\n output_shard = shape_hook(output_shard)\n\n output_full = dual_all_to_all(output_shard,\n self._weight.get_process_group(),\n scatter_dim=scatter_dim,\n gather_dim=gather_dim)\n return output_full\n\n @classmethod\n def from_pretrained(cls,\n embedding: torch.Tensor,\n freeze: bool = True,\n padding_idx: Optional[int] = None,\n max_norm: Optional[float] = None,\n norm_type: float = 2.,\n scale_grad_by_freq: bool = False,\n sparse: bool = False,\n mode: str = 'mean',\n include_last_offset: bool = False,\n debug: bool = True,\n cuda_row_num: int = 100_000,\n ids_freq_mapping: Optional[List[int]] = None,\n warmup_ratio: float = 0.7) -> 'ParallelFreqAwareEmbeddingBag':\n rows, cols = embedding.shape\n embedding_bag = cls(rows, cols, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, embedding, mode,\n include_last_offset, debug)\n embedding_bag.preprocess(cuda_row_num, ids_freq_mapping, warmup_ratio)\n embedding_bag.cache_weight_mgr.cuda_cached_weight.requires_grad_ = not freeze\n return embedding_bag\n", "path": "colossalai/nn/_ops/cache_embedding/parallel_freq_aware_embedding.py"}]}
2071
374
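The ColossalAI record above is an ordering bug: the original code assigned `self._weight.process_group` before `self._weight` existed, so the fix builds a complete `ColoTensorSpec` first and passes it to `ColoParameter.from_torch_tensor`. A dependency-free sketch of the same ordering fix; `Spec` and `Parameter` below are stand-ins, not the actual ColossalAI classes:

```python
class Spec:
    def __init__(self, pg, dist_attr):
        self.pg, self.dist_attr = pg, dist_attr

class Parameter:
    def __init__(self, data, spec):
        self.data, self.spec = data, spec

class Buggy:
    def __init__(self):
        # touches self._weight before it exists -> AttributeError
        self._weight.process_group = "tp"
        self._weight = Parameter([0.0], None)

class Fixed:
    def __init__(self):
        spec = Spec(pg="tp", dist_attr="shard[-1]")  # build the spec first
        self._weight = Parameter([0.0], spec)        # then create the weight

Fixed()  # fine
try:
    Buggy()
except AttributeError as exc:
    print("reproduced:", exc)
```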
gh_patches_debug_11207
rasdani/github-patches
git_diff
iterative__dvc-5205
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- diff: unexpected error when diffing workspace after dvc remove # Bug Report ## Description `dvc diff` will raise unexpected error with no other output if both the `.dvc` file and the original data file are removed from the workspace (i.e. after running `dvc remove`). ``` $ git status ⏎ On branch master Changes not staged for commit: deleted: .gitignore deleted: foo.txt.dvc no changes added to commit $ dvc diff -v 2020-12-28 15:43:46,270 DEBUG: Check for update is enabled. 2020-12-28 15:43:46,584 ERROR: unexpected error ------------------------------------------------------------ Traceback (most recent call last): File "/Users/pmrowla/git/dvc/dvc/main.py", line 90, in main ret = cmd.run() File "/Users/pmrowla/git/dvc/dvc/command/diff.py", line 131, in run diff = self.repo.diff( File "/Users/pmrowla/git/dvc/dvc/repo/__init__.py", line 53, in wrapper return f(repo, *args, **kwargs) File "/Users/pmrowla/git/dvc/dvc/repo/diff.py", line 60, in diff missing = sorted(_filter_missing(self, deleted_or_missing)) File "/Users/pmrowla/git/dvc/dvc/repo/diff.py", line 151, in _filter_missing metadata = repo_tree.metadata(path) File "/Users/pmrowla/git/dvc/dvc/tree/repo.py", line 446, in metadata raise FileNotFoundError FileNotFoundError ------------------------------------------------------------ ``` ### Reproduce ```bash #!/bin/bash set -e set -x REPO="test_repo" rm -rf $REPO mkdir $REPO pushd $REPO git init dvc init echo "foo" > foo.txt dvc add foo.txt git add . git commit -m "init" dvc remove foo.txt.dvc rm foo.txt dvc diff -v popd ``` This issue only affects workspace diff. If the changes after remove are `git commit`ed and then the two commits are `dvc diff`ed, the diff will work as expected. Issue can also be reproduced by doing `git rm <file>.dvc; rm <file>` instead of using `dvc remove`. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `dvc/repo/diff.py` Content: ``` 1 import logging 2 import os 3 4 from dvc.exceptions import PathMissingError 5 from dvc.repo import locked 6 from dvc.tree.local import LocalTree 7 from dvc.tree.repo import RepoTree 8 9 logger = logging.getLogger(__name__) 10 11 12 @locked 13 def diff(self, a_rev="HEAD", b_rev=None, targets=None): 14 """ 15 By default, it compares the workspace with the last commit's tree. 16 17 This implementation differs from `git diff` since DVC doesn't have 18 the concept of `index`, but it keeps the same interface, thus, 19 `dvc diff` would be the same as `dvc diff HEAD`. 
20 """ 21 22 if self.scm.no_commits: 23 return {} 24 25 b_rev = b_rev if b_rev else "workspace" 26 results = {} 27 missing_targets = {} 28 for rev in self.brancher(revs=[a_rev, b_rev]): 29 if rev == "workspace" and rev != b_rev: 30 # brancher always returns workspace, but we only need to compute 31 # workspace paths/checksums if b_rev was None 32 continue 33 34 targets_path_infos = None 35 if targets is not None: 36 # convert targets to path_infos, and capture any missing targets 37 targets_path_infos, missing_targets[rev] = _targets_to_path_infos( 38 self, targets 39 ) 40 41 results[rev] = _paths_checksums(self, targets_path_infos) 42 43 if targets is not None: 44 # check for overlapping missing targets between a_rev and b_rev 45 for target in set(missing_targets[a_rev]) & set( 46 missing_targets[b_rev] 47 ): 48 raise PathMissingError(target, self) 49 50 old = results[a_rev] 51 new = results[b_rev] 52 53 # Compare paths between the old and new tree. 54 # set() efficiently converts dict keys to a set 55 added = sorted(set(new) - set(old)) 56 deleted_or_missing = set(old) - set(new) 57 if b_rev == "workspace": 58 # missing status is only applicable when diffing local workspace 59 # against a commit 60 missing = sorted(_filter_missing(self, deleted_or_missing)) 61 else: 62 missing = [] 63 deleted = sorted(deleted_or_missing - set(missing)) 64 modified = sorted(set(old) & set(new)) 65 66 ret = { 67 "added": [{"path": path, "hash": new[path]} for path in added], 68 "deleted": [{"path": path, "hash": old[path]} for path in deleted], 69 "modified": [ 70 {"path": path, "hash": {"old": old[path], "new": new[path]}} 71 for path in modified 72 if old[path] != new[path] 73 ], 74 "not in cache": [ 75 {"path": path, "hash": old[path]} for path in missing 76 ], 77 } 78 79 return ret if any(ret.values()) else {} 80 81 82 def _paths_checksums(repo, targets): 83 """ 84 A dictionary of checksums addressed by relpaths collected from 85 the current tree outputs. 
86 87 To help distinguish between a directory and a file output, 88 the former one will come with a trailing slash in the path: 89 90 directory: "data/" 91 file: "data" 92 """ 93 94 return dict(_output_paths(repo, targets)) 95 96 97 def _output_paths(repo, targets): 98 repo_tree = RepoTree(repo, stream=True) 99 on_working_tree = isinstance(repo.tree, LocalTree) 100 101 def _exists(output): 102 if on_working_tree: 103 return output.exists 104 return True 105 106 def _to_path(output): 107 return ( 108 str(output) 109 if not output.is_dir_checksum 110 else os.path.join(str(output), "") 111 ) 112 113 def _to_checksum(output): 114 if on_working_tree: 115 return repo.cache.local.tree.get_hash(output.path_info).value 116 return output.hash_info.value 117 118 for stage in repo.stages: 119 for output in stage.outs: 120 if _exists(output): 121 yield_output = targets is None or any( 122 output.path_info.isin_or_eq(target) for target in targets 123 ) 124 125 if yield_output: 126 yield _to_path(output), _to_checksum(output) 127 128 if output.is_dir_checksum and ( 129 yield_output 130 or any(target.isin(output.path_info) for target in targets) 131 ): 132 yield from _dir_output_paths(repo_tree, output, targets) 133 134 135 def _dir_output_paths(repo_tree, output, targets=None): 136 from dvc.config import NoRemoteError 137 138 try: 139 for fname in repo_tree.walk_files(output.path_info): 140 if targets is None or any( 141 fname.isin_or_eq(target) for target in targets 142 ): 143 yield str(fname), repo_tree.get_file_hash(fname).value 144 except NoRemoteError: 145 logger.warning("dir cache entry for '%s' is missing", output) 146 147 148 def _filter_missing(repo, paths): 149 repo_tree = RepoTree(repo, stream=True) 150 for path in paths: 151 metadata = repo_tree.metadata(path) 152 if metadata.is_dvc: 153 out = metadata.outs[0] 154 if out.status().get(str(out)) == "not in cache": 155 yield path 156 157 158 def _targets_to_path_infos(repo, targets): 159 path_infos = [] 160 missing = [] 161 162 repo_tree = RepoTree(repo, stream=True) 163 164 for target in targets: 165 if repo_tree.exists(target): 166 path_infos.append(repo_tree.metadata(target).path_info) 167 else: 168 missing.append(target) 169 170 return path_infos, missing 171 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/dvc/repo/diff.py b/dvc/repo/diff.py --- a/dvc/repo/diff.py +++ b/dvc/repo/diff.py @@ -148,11 +148,14 @@ def _filter_missing(repo, paths): repo_tree = RepoTree(repo, stream=True) for path in paths: - metadata = repo_tree.metadata(path) - if metadata.is_dvc: - out = metadata.outs[0] - if out.status().get(str(out)) == "not in cache": - yield path + try: + metadata = repo_tree.metadata(path) + if metadata.is_dvc: + out = metadata.outs[0] + if out.status().get(str(out)) == "not in cache": + yield path + except FileNotFoundError: + pass def _targets_to_path_infos(repo, targets):
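The golden_diff above guards `RepoTree.metadata` with try/except `FileNotFoundError`, so workspace diff skips paths removed from both git tracking and disk instead of crashing. A minimal sketch of that guard; `fake_metadata` is a stand-in for `RepoTree.metadata`, and the real code also checks the "not in cache" status before yielding:

```python
def fake_metadata(path):
    tracked = {"data/present.txt"}
    if path not in tracked:
        # mirrors RepoTree.metadata for a path gone from workspace and .dvc
        raise FileNotFoundError(path)
    return path

def filter_missing(paths):
    for path in paths:
        try:
            fake_metadata(path)
            yield path   # real code additionally checks "not in cache" here
        except FileNotFoundError:
            pass         # removed via `dvc remove` + rm: not reported missing

print(list(filter_missing(["data/present.txt", "foo.txt"])))
# -> ['data/present.txt']
```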
{"golden_diff": "diff --git a/dvc/repo/diff.py b/dvc/repo/diff.py\n--- a/dvc/repo/diff.py\n+++ b/dvc/repo/diff.py\n@@ -148,11 +148,14 @@\n def _filter_missing(repo, paths):\n repo_tree = RepoTree(repo, stream=True)\n for path in paths:\n- metadata = repo_tree.metadata(path)\n- if metadata.is_dvc:\n- out = metadata.outs[0]\n- if out.status().get(str(out)) == \"not in cache\":\n- yield path\n+ try:\n+ metadata = repo_tree.metadata(path)\n+ if metadata.is_dvc:\n+ out = metadata.outs[0]\n+ if out.status().get(str(out)) == \"not in cache\":\n+ yield path\n+ except FileNotFoundError:\n+ pass\n \n \n def _targets_to_path_infos(repo, targets):\n", "issue": "diff: unexpected error when diffing workspace after dvc remove\n# Bug Report\r\n\r\n## Description\r\n\r\n`dvc diff` will raise unexpected error with no other output if both the `.dvc` file and the original data file are removed from the workspace (i.e. after running `dvc remove`).\r\n\r\n```\r\n$ git status \u23ce\r\nOn branch master\r\nChanges not staged for commit:\r\n deleted: .gitignore\r\n deleted: foo.txt.dvc\r\n\r\nno changes added to commit\r\n\r\n$ dvc diff -v\r\n2020-12-28 15:43:46,270 DEBUG: Check for update is enabled.\r\n2020-12-28 15:43:46,584 ERROR: unexpected error\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/Users/pmrowla/git/dvc/dvc/main.py\", line 90, in main\r\n ret = cmd.run()\r\n File \"/Users/pmrowla/git/dvc/dvc/command/diff.py\", line 131, in run\r\n diff = self.repo.diff(\r\n File \"/Users/pmrowla/git/dvc/dvc/repo/__init__.py\", line 53, in wrapper\r\n return f(repo, *args, **kwargs)\r\n File \"/Users/pmrowla/git/dvc/dvc/repo/diff.py\", line 60, in diff\r\n missing = sorted(_filter_missing(self, deleted_or_missing))\r\n File \"/Users/pmrowla/git/dvc/dvc/repo/diff.py\", line 151, in _filter_missing\r\n metadata = repo_tree.metadata(path)\r\n File \"/Users/pmrowla/git/dvc/dvc/tree/repo.py\", line 446, in metadata\r\n raise FileNotFoundError\r\nFileNotFoundError\r\n------------------------------------------------------------\r\n```\r\n\r\n### Reproduce\r\n\r\n```bash\r\n#!/bin/bash\r\n\r\nset -e\r\nset -x\r\n\r\nREPO=\"test_repo\"\r\n\r\nrm -rf $REPO\r\nmkdir $REPO\r\npushd $REPO\r\n\r\ngit init\r\ndvc init\r\necho \"foo\" > foo.txt\r\ndvc add foo.txt\r\ngit add .\r\ngit commit -m \"init\"\r\n\r\ndvc remove foo.txt.dvc\r\nrm foo.txt\r\ndvc diff -v\r\n\r\npopd\r\n```\r\n\r\nThis issue only affects workspace diff. If the changes after remove are `git commit`ed and then the two commits are `dvc diff`ed, the diff will work as expected. 
Issue can also be reproduced by doing `git rm <file>.dvc; rm <file>` instead of using `dvc remove`.\n", "before_files": [{"content": "import logging\nimport os\n\nfrom dvc.exceptions import PathMissingError\nfrom dvc.repo import locked\nfrom dvc.tree.local import LocalTree\nfrom dvc.tree.repo import RepoTree\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\ndef diff(self, a_rev=\"HEAD\", b_rev=None, targets=None):\n \"\"\"\n By default, it compares the workspace with the last commit's tree.\n\n This implementation differs from `git diff` since DVC doesn't have\n the concept of `index`, but it keeps the same interface, thus,\n `dvc diff` would be the same as `dvc diff HEAD`.\n \"\"\"\n\n if self.scm.no_commits:\n return {}\n\n b_rev = b_rev if b_rev else \"workspace\"\n results = {}\n missing_targets = {}\n for rev in self.brancher(revs=[a_rev, b_rev]):\n if rev == \"workspace\" and rev != b_rev:\n # brancher always returns workspace, but we only need to compute\n # workspace paths/checksums if b_rev was None\n continue\n\n targets_path_infos = None\n if targets is not None:\n # convert targets to path_infos, and capture any missing targets\n targets_path_infos, missing_targets[rev] = _targets_to_path_infos(\n self, targets\n )\n\n results[rev] = _paths_checksums(self, targets_path_infos)\n\n if targets is not None:\n # check for overlapping missing targets between a_rev and b_rev\n for target in set(missing_targets[a_rev]) & set(\n missing_targets[b_rev]\n ):\n raise PathMissingError(target, self)\n\n old = results[a_rev]\n new = results[b_rev]\n\n # Compare paths between the old and new tree.\n # set() efficiently converts dict keys to a set\n added = sorted(set(new) - set(old))\n deleted_or_missing = set(old) - set(new)\n if b_rev == \"workspace\":\n # missing status is only applicable when diffing local workspace\n # against a commit\n missing = sorted(_filter_missing(self, deleted_or_missing))\n else:\n missing = []\n deleted = sorted(deleted_or_missing - set(missing))\n modified = sorted(set(old) & set(new))\n\n ret = {\n \"added\": [{\"path\": path, \"hash\": new[path]} for path in added],\n \"deleted\": [{\"path\": path, \"hash\": old[path]} for path in deleted],\n \"modified\": [\n {\"path\": path, \"hash\": {\"old\": old[path], \"new\": new[path]}}\n for path in modified\n if old[path] != new[path]\n ],\n \"not in cache\": [\n {\"path\": path, \"hash\": old[path]} for path in missing\n ],\n }\n\n return ret if any(ret.values()) else {}\n\n\ndef _paths_checksums(repo, targets):\n \"\"\"\n A dictionary of checksums addressed by relpaths collected from\n the current tree outputs.\n\n To help distinguish between a directory and a file output,\n the former one will come with a trailing slash in the path:\n\n directory: \"data/\"\n file: \"data\"\n \"\"\"\n\n return dict(_output_paths(repo, targets))\n\n\ndef _output_paths(repo, targets):\n repo_tree = RepoTree(repo, stream=True)\n on_working_tree = isinstance(repo.tree, LocalTree)\n\n def _exists(output):\n if on_working_tree:\n return output.exists\n return True\n\n def _to_path(output):\n return (\n str(output)\n if not output.is_dir_checksum\n else os.path.join(str(output), \"\")\n )\n\n def _to_checksum(output):\n if on_working_tree:\n return repo.cache.local.tree.get_hash(output.path_info).value\n return output.hash_info.value\n\n for stage in repo.stages:\n for output in stage.outs:\n if _exists(output):\n yield_output = targets is None or any(\n output.path_info.isin_or_eq(target) for target in targets\n )\n\n if 
yield_output:\n yield _to_path(output), _to_checksum(output)\n\n if output.is_dir_checksum and (\n yield_output\n or any(target.isin(output.path_info) for target in targets)\n ):\n yield from _dir_output_paths(repo_tree, output, targets)\n\n\ndef _dir_output_paths(repo_tree, output, targets=None):\n from dvc.config import NoRemoteError\n\n try:\n for fname in repo_tree.walk_files(output.path_info):\n if targets is None or any(\n fname.isin_or_eq(target) for target in targets\n ):\n yield str(fname), repo_tree.get_file_hash(fname).value\n except NoRemoteError:\n logger.warning(\"dir cache entry for '%s' is missing\", output)\n\n\ndef _filter_missing(repo, paths):\n repo_tree = RepoTree(repo, stream=True)\n for path in paths:\n metadata = repo_tree.metadata(path)\n if metadata.is_dvc:\n out = metadata.outs[0]\n if out.status().get(str(out)) == \"not in cache\":\n yield path\n\n\ndef _targets_to_path_infos(repo, targets):\n path_infos = []\n missing = []\n\n repo_tree = RepoTree(repo, stream=True)\n\n for target in targets:\n if repo_tree.exists(target):\n path_infos.append(repo_tree.metadata(target).path_info)\n else:\n missing.append(target)\n\n return path_infos, missing\n", "path": "dvc/repo/diff.py"}], "after_files": [{"content": "import logging\nimport os\n\nfrom dvc.exceptions import PathMissingError\nfrom dvc.repo import locked\nfrom dvc.tree.local import LocalTree\nfrom dvc.tree.repo import RepoTree\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\ndef diff(self, a_rev=\"HEAD\", b_rev=None, targets=None):\n \"\"\"\n By default, it compares the workspace with the last commit's tree.\n\n This implementation differs from `git diff` since DVC doesn't have\n the concept of `index`, but it keeps the same interface, thus,\n `dvc diff` would be the same as `dvc diff HEAD`.\n \"\"\"\n\n if self.scm.no_commits:\n return {}\n\n b_rev = b_rev if b_rev else \"workspace\"\n results = {}\n missing_targets = {}\n for rev in self.brancher(revs=[a_rev, b_rev]):\n if rev == \"workspace\" and rev != b_rev:\n # brancher always returns workspace, but we only need to compute\n # workspace paths/checksums if b_rev was None\n continue\n\n targets_path_infos = None\n if targets is not None:\n # convert targets to path_infos, and capture any missing targets\n targets_path_infos, missing_targets[rev] = _targets_to_path_infos(\n self, targets\n )\n\n results[rev] = _paths_checksums(self, targets_path_infos)\n\n if targets is not None:\n # check for overlapping missing targets between a_rev and b_rev\n for target in set(missing_targets[a_rev]) & set(\n missing_targets[b_rev]\n ):\n raise PathMissingError(target, self)\n\n old = results[a_rev]\n new = results[b_rev]\n\n # Compare paths between the old and new tree.\n # set() efficiently converts dict keys to a set\n added = sorted(set(new) - set(old))\n deleted_or_missing = set(old) - set(new)\n if b_rev == \"workspace\":\n # missing status is only applicable when diffing local workspace\n # against a commit\n missing = sorted(_filter_missing(self, deleted_or_missing))\n else:\n missing = []\n deleted = sorted(deleted_or_missing - set(missing))\n modified = sorted(set(old) & set(new))\n\n ret = {\n \"added\": [{\"path\": path, \"hash\": new[path]} for path in added],\n \"deleted\": [{\"path\": path, \"hash\": old[path]} for path in deleted],\n \"modified\": [\n {\"path\": path, \"hash\": {\"old\": old[path], \"new\": new[path]}}\n for path in modified\n if old[path] != new[path]\n ],\n \"not in cache\": [\n {\"path\": path, \"hash\": old[path]} for path in 
missing\n ],\n }\n\n return ret if any(ret.values()) else {}\n\n\ndef _paths_checksums(repo, targets):\n \"\"\"\n A dictionary of checksums addressed by relpaths collected from\n the current tree outputs.\n\n To help distinguish between a directory and a file output,\n the former one will come with a trailing slash in the path:\n\n directory: \"data/\"\n file: \"data\"\n \"\"\"\n\n return dict(_output_paths(repo, targets))\n\n\ndef _output_paths(repo, targets):\n repo_tree = RepoTree(repo, stream=True)\n on_working_tree = isinstance(repo.tree, LocalTree)\n\n def _exists(output):\n if on_working_tree:\n return output.exists\n return True\n\n def _to_path(output):\n return (\n str(output)\n if not output.is_dir_checksum\n else os.path.join(str(output), \"\")\n )\n\n def _to_checksum(output):\n if on_working_tree:\n return repo.cache.local.tree.get_hash(output.path_info).value\n return output.hash_info.value\n\n for stage in repo.stages:\n for output in stage.outs:\n if _exists(output):\n yield_output = targets is None or any(\n output.path_info.isin_or_eq(target) for target in targets\n )\n\n if yield_output:\n yield _to_path(output), _to_checksum(output)\n\n if output.is_dir_checksum and (\n yield_output\n or any(target.isin(output.path_info) for target in targets)\n ):\n yield from _dir_output_paths(repo_tree, output, targets)\n\n\ndef _dir_output_paths(repo_tree, output, targets=None):\n from dvc.config import NoRemoteError\n\n try:\n for fname in repo_tree.walk_files(output.path_info):\n if targets is None or any(\n fname.isin_or_eq(target) for target in targets\n ):\n yield str(fname), repo_tree.get_file_hash(fname).value\n except NoRemoteError:\n logger.warning(\"dir cache entry for '%s' is missing\", output)\n\n\ndef _filter_missing(repo, paths):\n repo_tree = RepoTree(repo, stream=True)\n for path in paths:\n try:\n metadata = repo_tree.metadata(path)\n if metadata.is_dvc:\n out = metadata.outs[0]\n if out.status().get(str(out)) == \"not in cache\":\n yield path\n except FileNotFoundError:\n pass\n\n\ndef _targets_to_path_infos(repo, targets):\n path_infos = []\n missing = []\n\n repo_tree = RepoTree(repo, stream=True)\n\n for target in targets:\n if repo_tree.exists(target):\n path_infos.append(repo_tree.metadata(target).path_info)\n else:\n missing.append(target)\n\n return path_infos, missing\n", "path": "dvc/repo/diff.py"}]}
2,413
201
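The repro script embedded in the issue translates almost directly into a pytest-style regression check. A hedged sketch, assuming `git` and `dvc` executables are on `PATH` (this is not one of dvc's actual test helpers):

```python
# Hedged regression-test sketch for the record above; assumes `git`
# and `dvc` are installed and on PATH.
import subprocess


def test_diff_after_dvc_remove(tmp_path):
    def run(*args):
        subprocess.run(args, cwd=tmp_path, check=True)

    run("git", "init")
    run("git", "config", "user.email", "ci@example.com")
    run("git", "config", "user.name", "ci")
    run("dvc", "init")
    (tmp_path / "foo.txt").write_text("foo\n")
    run("dvc", "add", "foo.txt")
    run("git", "add", ".")
    run("git", "commit", "-m", "init")

    run("dvc", "remove", "foo.txt.dvc")
    (tmp_path / "foo.txt").unlink()

    # Pre-patch this exited non-zero with "ERROR: unexpected error";
    # post-patch it reports foo.txt as deleted and exits cleanly.
    result = subprocess.run(["dvc", "diff"], cwd=tmp_path)
    assert result.returncode == 0
```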
gh_patches_debug_29988
rasdani/github-patches
git_diff
open-telemetry__opentelemetry-python-1942
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Every string character is being checked As reported @yetanotherjsontodatabaseexporter in #1939, strings are also sequences. We are unnecessarily checking every character [here](https://github.com/open-telemetry/opentelemetry-python/blob/f11ed2f3bacb11d53a7a2b4837cf6308fa34cc71/opentelemetry-api/src/opentelemetry/attributes/__init__.py#L46). --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `opentelemetry-api/src/opentelemetry/attributes/__init__.py` Content: ``` 1 # Copyright The OpenTelemetry Authors 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 # type: ignore 15 16 import logging 17 import threading 18 from collections import OrderedDict 19 from collections.abc import MutableMapping 20 from types import MappingProxyType 21 from typing import MutableSequence, Optional, Sequence 22 23 from opentelemetry.util import types 24 25 _VALID_ATTR_VALUE_TYPES = (bool, str, int, float) 26 27 28 _logger = logging.getLogger(__name__) 29 30 31 def _is_valid_attribute_value(value: types.AttributeValue) -> bool: 32 """Checks if attribute value is valid. 33 34 An attribute value is valid if it is either: 35 - A primitive type: string, boolean, double precision floating 36 point (IEEE 754-1985) or integer. 37 - An array of primitive type values. The array MUST be homogeneous, 38 i.e. it MUST NOT contain values of different types. 39 """ 40 41 if isinstance(value, Sequence): 42 if len(value) == 0: 43 return True 44 45 sequence_first_valid_type = None 46 for element in value: 47 if element is None: 48 continue 49 element_type = type(element) 50 if element_type not in _VALID_ATTR_VALUE_TYPES: 51 _logger.warning( 52 "Invalid type %s in attribute value sequence. Expected one of " 53 "%s or None", 54 element_type.__name__, 55 [ 56 valid_type.__name__ 57 for valid_type in _VALID_ATTR_VALUE_TYPES 58 ], 59 ) 60 return False 61 # The type of the sequence must be homogeneous. The first non-None 62 # element determines the type of the sequence 63 if sequence_first_valid_type is None: 64 sequence_first_valid_type = element_type 65 elif not isinstance(element, sequence_first_valid_type): 66 _logger.warning( 67 "Mixed types %s and %s in attribute value sequence", 68 sequence_first_valid_type.__name__, 69 type(element).__name__, 70 ) 71 return False 72 73 elif not isinstance(value, _VALID_ATTR_VALUE_TYPES): 74 _logger.warning( 75 "Invalid type %s for attribute value. Expected one of %s or a " 76 "sequence of those types", 77 type(value).__name__, 78 [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES], 79 ) 80 return False 81 return True 82 83 84 def _filter_attributes(attributes: types.Attributes) -> None: 85 """Applies attribute validation rules and drops (key, value) pairs 86 that doesn't adhere to attributes specification. 
87 88 https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/common.md#attributes. 89 """ 90 if attributes: 91 for attr_key, attr_value in list(attributes.items()): 92 if not attr_key: 93 _logger.warning("invalid key `%s` (empty or null)", attr_key) 94 attributes.pop(attr_key) 95 continue 96 97 if _is_valid_attribute_value(attr_value): 98 if isinstance(attr_value, MutableSequence): 99 attributes[attr_key] = tuple(attr_value) 100 if isinstance(attr_value, bytes): 101 try: 102 attributes[attr_key] = attr_value.decode() 103 except ValueError: 104 attributes.pop(attr_key) 105 _logger.warning("Byte attribute could not be decoded.") 106 else: 107 attributes.pop(attr_key) 108 109 110 _DEFAULT_LIMIT = 128 111 112 113 class BoundedAttributes(MutableMapping): 114 """An ordered dict with a fixed max capacity. 115 116 Oldest elements are dropped when the dict is full and a new element is 117 added. 118 """ 119 120 def __init__( 121 self, 122 maxlen: Optional[int] = _DEFAULT_LIMIT, 123 attributes: types.Attributes = None, 124 immutable: bool = True, 125 ): 126 if maxlen is not None: 127 if not isinstance(maxlen, int) or maxlen < 0: 128 raise ValueError( 129 "maxlen must be valid int greater or equal to 0" 130 ) 131 self.maxlen = maxlen 132 self.dropped = 0 133 self._dict = OrderedDict() # type: OrderedDict 134 self._lock = threading.Lock() # type: threading.Lock 135 if attributes: 136 _filter_attributes(attributes) 137 for key, value in attributes.items(): 138 self[key] = value 139 self._immutable = immutable 140 141 def __repr__(self): 142 return "{}({}, maxlen={})".format( 143 type(self).__name__, dict(self._dict), self.maxlen 144 ) 145 146 def __getitem__(self, key): 147 return self._dict[key] 148 149 def __setitem__(self, key, value): 150 if getattr(self, "_immutable", False): 151 raise TypeError 152 with self._lock: 153 if self.maxlen is not None and self.maxlen == 0: 154 self.dropped += 1 155 return 156 157 if key in self._dict: 158 del self._dict[key] 159 elif self.maxlen is not None and len(self._dict) == self.maxlen: 160 del self._dict[next(iter(self._dict.keys()))] 161 self.dropped += 1 162 self._dict[key] = value 163 164 def __delitem__(self, key): 165 if getattr(self, "_immutable", False): 166 raise TypeError 167 with self._lock: 168 del self._dict[key] 169 170 def __iter__(self): 171 with self._lock: 172 return iter(self._dict.copy()) 173 174 def __len__(self): 175 return len(self._dict) 176 177 def copy(self): 178 return self._dict.copy() 179 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/opentelemetry-api/src/opentelemetry/attributes/__init__.py b/opentelemetry-api/src/opentelemetry/attributes/__init__.py --- a/opentelemetry-api/src/opentelemetry/attributes/__init__.py +++ b/opentelemetry-api/src/opentelemetry/attributes/__init__.py @@ -17,7 +17,6 @@ import threading from collections import OrderedDict from collections.abc import MutableMapping -from types import MappingProxyType from typing import MutableSequence, Optional, Sequence from opentelemetry.util import types @@ -38,9 +37,10 @@ i.e. it MUST NOT contain values of different types. """ + if isinstance(value, _VALID_ATTR_VALUE_TYPES): + return True + if isinstance(value, Sequence): - if len(value) == 0: - return True sequence_first_valid_type = None for element in value: @@ -69,16 +69,15 @@ type(element).__name__, ) return False - - elif not isinstance(value, _VALID_ATTR_VALUE_TYPES): - _logger.warning( - "Invalid type %s for attribute value. Expected one of %s or a " - "sequence of those types", - type(value).__name__, - [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES], - ) - return False - return True + return True + + _logger.warning( + "Invalid type %s for attribute value. Expected one of %s or a " + "sequence of those types", + type(value).__name__, + [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES], + ) + return False def _filter_attributes(attributes: types.Attributes) -> None:
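The key fact behind this record is that `str` satisfies `collections.abc.Sequence`, so the pre-patch branch order walked every character of a string attribute value. Testing the primitive types first short-circuits that. A simplified sketch of the patched ordering (it omits the homogeneity check the real function performs):

```python
from collections.abc import Sequence

VALID = (bool, str, int, float)

# str is itself a Sequence, which is why the old branch order iterated
# every character of string attribute values.
assert isinstance("hello", Sequence)


def is_valid(value):
    # Patched order: primitives first, so strings return immediately.
    if isinstance(value, VALID):
        return True
    if isinstance(value, Sequence):
        # Simplified: the real function also enforces that all
        # non-None elements share a single type.
        return all(e is None or isinstance(e, VALID) for e in value)
    return False


assert is_valid("x" * 1_000_000)   # no per-character loop any more
assert is_valid([1, 2, 3])         # sequences of primitives still pass
assert not is_valid({"a": 1})      # mappings are rejected as before
```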
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/attributes/__init__.py b/opentelemetry-api/src/opentelemetry/attributes/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/attributes/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/attributes/__init__.py\n@@ -17,7 +17,6 @@\n import threading\n from collections import OrderedDict\n from collections.abc import MutableMapping\n-from types import MappingProxyType\n from typing import MutableSequence, Optional, Sequence\n \n from opentelemetry.util import types\n@@ -38,9 +37,10 @@\n i.e. it MUST NOT contain values of different types.\n \"\"\"\n \n+ if isinstance(value, _VALID_ATTR_VALUE_TYPES):\n+ return True\n+\n if isinstance(value, Sequence):\n- if len(value) == 0:\n- return True\n \n sequence_first_valid_type = None\n for element in value:\n@@ -69,16 +69,15 @@\n type(element).__name__,\n )\n return False\n-\n- elif not isinstance(value, _VALID_ATTR_VALUE_TYPES):\n- _logger.warning(\n- \"Invalid type %s for attribute value. Expected one of %s or a \"\n- \"sequence of those types\",\n- type(value).__name__,\n- [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES],\n- )\n- return False\n- return True\n+ return True\n+\n+ _logger.warning(\n+ \"Invalid type %s for attribute value. Expected one of %s or a \"\n+ \"sequence of those types\",\n+ type(value).__name__,\n+ [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES],\n+ )\n+ return False\n \n \n def _filter_attributes(attributes: types.Attributes) -> None:\n", "issue": "Every string character is being checked\nAs reported @yetanotherjsontodatabaseexporter in #1939, strings are also sequences. We are unnecessarily checking every character [here](https://github.com/open-telemetry/opentelemetry-python/blob/f11ed2f3bacb11d53a7a2b4837cf6308fa34cc71/opentelemetry-api/src/opentelemetry/attributes/__init__.py#L46).\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# type: ignore\n\nimport logging\nimport threading\nfrom collections import OrderedDict\nfrom collections.abc import MutableMapping\nfrom types import MappingProxyType\nfrom typing import MutableSequence, Optional, Sequence\n\nfrom opentelemetry.util import types\n\n_VALID_ATTR_VALUE_TYPES = (bool, str, int, float)\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef _is_valid_attribute_value(value: types.AttributeValue) -> bool:\n \"\"\"Checks if attribute value is valid.\n\n An attribute value is valid if it is either:\n - A primitive type: string, boolean, double precision floating\n point (IEEE 754-1985) or integer.\n - An array of primitive type values. The array MUST be homogeneous,\n i.e. 
it MUST NOT contain values of different types.\n \"\"\"\n\n if isinstance(value, Sequence):\n if len(value) == 0:\n return True\n\n sequence_first_valid_type = None\n for element in value:\n if element is None:\n continue\n element_type = type(element)\n if element_type not in _VALID_ATTR_VALUE_TYPES:\n _logger.warning(\n \"Invalid type %s in attribute value sequence. Expected one of \"\n \"%s or None\",\n element_type.__name__,\n [\n valid_type.__name__\n for valid_type in _VALID_ATTR_VALUE_TYPES\n ],\n )\n return False\n # The type of the sequence must be homogeneous. The first non-None\n # element determines the type of the sequence\n if sequence_first_valid_type is None:\n sequence_first_valid_type = element_type\n elif not isinstance(element, sequence_first_valid_type):\n _logger.warning(\n \"Mixed types %s and %s in attribute value sequence\",\n sequence_first_valid_type.__name__,\n type(element).__name__,\n )\n return False\n\n elif not isinstance(value, _VALID_ATTR_VALUE_TYPES):\n _logger.warning(\n \"Invalid type %s for attribute value. Expected one of %s or a \"\n \"sequence of those types\",\n type(value).__name__,\n [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES],\n )\n return False\n return True\n\n\ndef _filter_attributes(attributes: types.Attributes) -> None:\n \"\"\"Applies attribute validation rules and drops (key, value) pairs\n that doesn't adhere to attributes specification.\n\n https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/common.md#attributes.\n \"\"\"\n if attributes:\n for attr_key, attr_value in list(attributes.items()):\n if not attr_key:\n _logger.warning(\"invalid key `%s` (empty or null)\", attr_key)\n attributes.pop(attr_key)\n continue\n\n if _is_valid_attribute_value(attr_value):\n if isinstance(attr_value, MutableSequence):\n attributes[attr_key] = tuple(attr_value)\n if isinstance(attr_value, bytes):\n try:\n attributes[attr_key] = attr_value.decode()\n except ValueError:\n attributes.pop(attr_key)\n _logger.warning(\"Byte attribute could not be decoded.\")\n else:\n attributes.pop(attr_key)\n\n\n_DEFAULT_LIMIT = 128\n\n\nclass BoundedAttributes(MutableMapping):\n \"\"\"An ordered dict with a fixed max capacity.\n\n Oldest elements are dropped when the dict is full and a new element is\n added.\n \"\"\"\n\n def __init__(\n self,\n maxlen: Optional[int] = _DEFAULT_LIMIT,\n attributes: types.Attributes = None,\n immutable: bool = True,\n ):\n if maxlen is not None:\n if not isinstance(maxlen, int) or maxlen < 0:\n raise ValueError(\n \"maxlen must be valid int greater or equal to 0\"\n )\n self.maxlen = maxlen\n self.dropped = 0\n self._dict = OrderedDict() # type: OrderedDict\n self._lock = threading.Lock() # type: threading.Lock\n if attributes:\n _filter_attributes(attributes)\n for key, value in attributes.items():\n self[key] = value\n self._immutable = immutable\n\n def __repr__(self):\n return \"{}({}, maxlen={})\".format(\n type(self).__name__, dict(self._dict), self.maxlen\n )\n\n def __getitem__(self, key):\n return self._dict[key]\n\n def __setitem__(self, key, value):\n if getattr(self, \"_immutable\", False):\n raise TypeError\n with self._lock:\n if self.maxlen is not None and self.maxlen == 0:\n self.dropped += 1\n return\n\n if key in self._dict:\n del self._dict[key]\n elif self.maxlen is not None and len(self._dict) == self.maxlen:\n del self._dict[next(iter(self._dict.keys()))]\n self.dropped += 1\n self._dict[key] = value\n\n def __delitem__(self, key):\n if getattr(self, 
\"_immutable\", False):\n raise TypeError\n with self._lock:\n del self._dict[key]\n\n def __iter__(self):\n with self._lock:\n return iter(self._dict.copy())\n\n def __len__(self):\n return len(self._dict)\n\n def copy(self):\n return self._dict.copy()\n", "path": "opentelemetry-api/src/opentelemetry/attributes/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# type: ignore\n\nimport logging\nimport threading\nfrom collections import OrderedDict\nfrom collections.abc import MutableMapping\nfrom typing import MutableSequence, Optional, Sequence\n\nfrom opentelemetry.util import types\n\n_VALID_ATTR_VALUE_TYPES = (bool, str, int, float)\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef _is_valid_attribute_value(value: types.AttributeValue) -> bool:\n \"\"\"Checks if attribute value is valid.\n\n An attribute value is valid if it is either:\n - A primitive type: string, boolean, double precision floating\n point (IEEE 754-1985) or integer.\n - An array of primitive type values. The array MUST be homogeneous,\n i.e. it MUST NOT contain values of different types.\n \"\"\"\n\n if isinstance(value, _VALID_ATTR_VALUE_TYPES):\n return True\n\n if isinstance(value, Sequence):\n\n sequence_first_valid_type = None\n for element in value:\n if element is None:\n continue\n element_type = type(element)\n if element_type not in _VALID_ATTR_VALUE_TYPES:\n _logger.warning(\n \"Invalid type %s in attribute value sequence. Expected one of \"\n \"%s or None\",\n element_type.__name__,\n [\n valid_type.__name__\n for valid_type in _VALID_ATTR_VALUE_TYPES\n ],\n )\n return False\n # The type of the sequence must be homogeneous. The first non-None\n # element determines the type of the sequence\n if sequence_first_valid_type is None:\n sequence_first_valid_type = element_type\n elif not isinstance(element, sequence_first_valid_type):\n _logger.warning(\n \"Mixed types %s and %s in attribute value sequence\",\n sequence_first_valid_type.__name__,\n type(element).__name__,\n )\n return False\n return True\n\n _logger.warning(\n \"Invalid type %s for attribute value. 
Expected one of %s or a \"\n \"sequence of those types\",\n type(value).__name__,\n [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES],\n )\n return False\n\n\ndef _filter_attributes(attributes: types.Attributes) -> None:\n \"\"\"Applies attribute validation rules and drops (key, value) pairs\n that doesn't adhere to attributes specification.\n\n https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/common.md#attributes.\n \"\"\"\n if attributes:\n for attr_key, attr_value in list(attributes.items()):\n if not attr_key:\n _logger.warning(\"invalid key `%s` (empty or null)\", attr_key)\n attributes.pop(attr_key)\n continue\n\n if _is_valid_attribute_value(attr_value):\n if isinstance(attr_value, MutableSequence):\n attributes[attr_key] = tuple(attr_value)\n if isinstance(attr_value, bytes):\n try:\n attributes[attr_key] = attr_value.decode()\n except ValueError:\n attributes.pop(attr_key)\n _logger.warning(\"Byte attribute could not be decoded.\")\n else:\n attributes.pop(attr_key)\n\n\n_DEFAULT_LIMIT = 128\n\n\nclass BoundedAttributes(MutableMapping):\n \"\"\"An ordered dict with a fixed max capacity.\n\n Oldest elements are dropped when the dict is full and a new element is\n added.\n \"\"\"\n\n def __init__(\n self,\n maxlen: Optional[int] = _DEFAULT_LIMIT,\n attributes: types.Attributes = None,\n immutable: bool = True,\n ):\n if maxlen is not None:\n if not isinstance(maxlen, int) or maxlen < 0:\n raise ValueError(\n \"maxlen must be valid int greater or equal to 0\"\n )\n self.maxlen = maxlen\n self.dropped = 0\n self._dict = OrderedDict() # type: OrderedDict\n self._lock = threading.Lock() # type: threading.Lock\n if attributes:\n _filter_attributes(attributes)\n for key, value in attributes.items():\n self[key] = value\n self._immutable = immutable\n\n def __repr__(self):\n return \"{}({}, maxlen={})\".format(\n type(self).__name__, dict(self._dict), self.maxlen\n )\n\n def __getitem__(self, key):\n return self._dict[key]\n\n def __setitem__(self, key, value):\n if getattr(self, \"_immutable\", False):\n raise TypeError\n with self._lock:\n if self.maxlen is not None and self.maxlen == 0:\n self.dropped += 1\n return\n\n if key in self._dict:\n del self._dict[key]\n elif self.maxlen is not None and len(self._dict) == self.maxlen:\n del self._dict[next(iter(self._dict.keys()))]\n self.dropped += 1\n self._dict[key] = value\n\n def __delitem__(self, key):\n if getattr(self, \"_immutable\", False):\n raise TypeError\n with self._lock:\n del self._dict[key]\n\n def __iter__(self):\n with self._lock:\n return iter(self._dict.copy())\n\n def __len__(self):\n return len(self._dict)\n\n def copy(self):\n return self._dict.copy()\n", "path": "opentelemetry-api/src/opentelemetry/attributes/__init__.py"}]}
2,071
400
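The practical effect of the reordering is easy to measure with a quick `timeit` comparison of the two branch orders (absolute numbers are machine-dependent; the sketch only models the ordering, not the full validation logic):

```python
import timeit
from collections.abc import Sequence

VALID = (bool, str, int, float)
big = "x" * 100_000

def old_order(v):
    if isinstance(v, Sequence):          # True for str: walks all chars
        return all(isinstance(e, VALID) for e in v)
    return isinstance(v, VALID)

def new_order(v):
    if isinstance(v, VALID):             # True for str: returns at once
        return True
    if isinstance(v, Sequence):
        return all(isinstance(e, VALID) for e in v)
    return False

print("old:", timeit.timeit(lambda: old_order(big), number=200))
print("new:", timeit.timeit(lambda: new_order(big), number=200))
```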
gh_patches_debug_3932
rasdani/github-patches
git_diff
scikit-hep__pyhf-2459
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- In Python 3.12 xml.etree.ElementTree will raise DeprecationWarning: Testing an element's truth value will raise an exception in future versions While testing Python 3.12 in CI https://github.com/scikit-hep/pyhf/blob/adddb0797c564a0158a8e2e69a58ee1f98604bf7/tests/test_export.py#L438-L450 raised ```pytb > assert channel E DeprecationWarning: Testing an element's truth value will raise an exception in future versions. Use specific 'len(elem)' or 'elem is not None' test instead. ``` This comes from https://github.com/python/cpython/issues/83122 which landed in Python 3.12. This should get fixed before Python 3.12 support is added. From the Python 3.12 docs: https://docs.python.org/3.12/library/xml.etree.elementtree.html#element-objects > Caution: Elements with no subelements will test as `False`. Testing the truth value of an Element is deprecated and will raise an exception in Python 3.14. Use specific `len(elem)` or `elem is None` test instead.: > > > ```python > element = root.find('foo') > > if not element: # careful! > print("element not found, or element has no subelements") > > if element is None: > print("element not found") > ``` > > _Changed in version 3.12_: Testing the truth value of an Element emits [DeprecationWarning](https://docs.python.org/3/library/exceptions.html#DeprecationWarning). --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/pyhf/writexml.py` Content: ``` 1 import logging 2 3 from pathlib import Path 4 import shutil 5 import xml.etree.ElementTree as ET 6 import numpy as np 7 8 import uproot 9 10 from pyhf.mixins import _ChannelSummaryMixin 11 from pyhf.schema import path as schema_path 12 13 _ROOT_DATA_FILE = None 14 15 log = logging.getLogger(__name__) 16 17 __all__ = [ 18 "build_channel", 19 "build_data", 20 "build_measurement", 21 "build_modifier", 22 "build_sample", 23 "indent", 24 ] 25 26 27 def __dir__(): 28 return __all__ 29 30 31 # 'spec' gets passed through all functions as NormFactor is a unique case of having 32 # parameter configurations stored at the modifier-definition-spec level. This means 33 # that build_modifier() needs access to the measurements. The call stack is: 34 # 35 # writexml 36 # ->build_channel 37 # ->build_sample 38 # ->build_modifier 39 # 40 # Therefore, 'spec' needs to be threaded through all these calls. 
41 42 43 def _make_hist_name(channel, sample, modifier='', prefix='hist', suffix=''): 44 middle = '_'.join(filter(lambda x: x, [channel, sample, modifier])) 45 return f"{prefix}{middle}{suffix}" 46 47 48 def _export_root_histogram(hist_name, data): 49 if hist_name in _ROOT_DATA_FILE: 50 raise KeyError(f"Duplicate key {hist_name} being written.") 51 _ROOT_DATA_FILE[hist_name] = uproot.to_writable( 52 (np.asarray(data), np.arange(len(data) + 1)) 53 ) 54 55 56 # https://stackoverflow.com/a/4590052 57 def indent(elem, level=0): 58 i = "\n" + level * " " 59 if elem: 60 if not elem.text or not elem.text.strip(): 61 elem.text = i + " " 62 if not elem.tail or not elem.tail.strip(): 63 elem.tail = i 64 for subelem in elem: 65 indent(subelem, level + 1) 66 if not elem.tail or not elem.tail.strip(): 67 elem.tail = i 68 else: 69 if level and (not elem.tail or not elem.tail.strip()): 70 elem.tail = i 71 72 73 def build_measurement(measurementspec, modifiertypes): 74 """ 75 Build the XML measurement specification for a given measurement adhering to defs.json/#definitions/measurement. 76 77 Args: 78 measurementspec (:obj:`dict`): The measurements specification from a :class:`~pyhf.workspace.Workspace`. 79 modifiertypes (:obj:`dict`): A mapping from modifier name (:obj:`str`) to modifier type (:obj:`str`). 80 81 Returns: 82 :class:`xml.etree.cElementTree.Element`: The XML measurement specification. 83 84 """ 85 # need to determine prefixes 86 prefixes = { 87 'normsys': 'alpha_', 88 'histosys': 'alpha_', 89 'shapesys': 'gamma_', 90 'staterror': 'gamma_', 91 } 92 93 config = measurementspec['config'] 94 name = measurementspec['name'] 95 poi = config['poi'] 96 97 # we want to know which parameters are fixed (constant) 98 # and to additionally extract the luminosity information 99 fixed_params = [] 100 lumi = 1.0 101 lumierr = 0.0 102 for parameter in config['parameters']: 103 if parameter.get('fixed', False): 104 pname = parameter['name'] 105 if pname == 'lumi': 106 fixed_params.append('Lumi') 107 else: 108 prefix = prefixes.get(modifiertypes[pname], '') 109 fixed_params.append(f'{prefix}{pname}') 110 # we found luminosity, so handle it 111 if parameter['name'] == 'lumi': 112 lumi = parameter['auxdata'][0] 113 lumierr = parameter['sigmas'][0] 114 115 # define measurement 116 meas = ET.Element( 117 "Measurement", 118 Name=name, 119 Lumi=str(lumi), 120 LumiRelErr=str(lumierr), 121 ExportOnly=str(True), 122 ) 123 poiel = ET.Element('POI') 124 poiel.text = poi 125 meas.append(poiel) 126 127 # add fixed parameters (constant) 128 if fixed_params: 129 se = ET.Element('ParamSetting', Const='True') 130 se.text = ' '.join(fixed_params) 131 meas.append(se) 132 return meas 133 134 135 def build_modifier(spec, modifierspec, channelname, samplename, sampledata): 136 if modifierspec['name'] == 'lumi': 137 return None 138 mod_map = { 139 'histosys': 'HistoSys', 140 'staterror': 'StatError', 141 'normsys': 'OverallSys', 142 'shapesys': 'ShapeSys', 143 'normfactor': 'NormFactor', 144 'shapefactor': 'ShapeFactor', 145 } 146 147 attrs = {'Name': modifierspec['name']} 148 if modifierspec['type'] == 'histosys': 149 attrs['HistoNameLow'] = _make_hist_name( 150 channelname, samplename, modifierspec['name'], suffix='Low' 151 ) 152 attrs['HistoNameHigh'] = _make_hist_name( 153 channelname, samplename, modifierspec['name'], suffix='High' 154 ) 155 _export_root_histogram(attrs['HistoNameLow'], modifierspec['data']['lo_data']) 156 _export_root_histogram(attrs['HistoNameHigh'], modifierspec['data']['hi_data']) 157 elif 
modifierspec['type'] == 'normsys': 158 attrs['High'] = str(modifierspec['data']['hi']) 159 attrs['Low'] = str(modifierspec['data']['lo']) 160 elif modifierspec['type'] == 'normfactor': 161 # NB: only look at first measurement for normfactor configs. In order 162 # to dump as HistFactory XML, this has to be the same for all 163 # measurements or it will not work correctly. Why? 164 # 165 # Unlike other modifiers, NormFactor has the unique circumstance of 166 # defining its parameter configurations at the modifier level inside 167 # the channel specification, instead of at the measurement level, like 168 # all of the other modifiers. 169 # 170 # However, since I strive for perfection, the "Const" attribute will 171 # never be set here, but at the per-measurement configuration instead 172 # like all other parameters. This is an acceptable compromise. 173 # 174 # Lastly, if a normfactor parameter configuration doesn't exist in the 175 # first measurement parameter configuration, then set defaults. 176 val = 1 177 low = 0 178 high = 10 179 for p in spec['measurements'][0]['config']['parameters']: 180 if p['name'] == modifierspec['name']: 181 val = p.get('inits', [val])[0] 182 low, high = p.get('bounds', [[low, high]])[0] 183 attrs['Val'] = str(val) 184 attrs['Low'] = str(low) 185 attrs['High'] = str(high) 186 elif modifierspec['type'] == 'staterror': 187 attrs['Activate'] = 'True' 188 attrs['HistoName'] = _make_hist_name( 189 channelname, samplename, modifierspec['name'] 190 ) 191 # must be deleted, HiFa XML specification does not support 'Name' 192 del attrs['Name'] 193 # need to make this a relative uncertainty stored in ROOT file 194 _export_root_histogram( 195 attrs['HistoName'], 196 np.divide( 197 modifierspec['data'], 198 sampledata, 199 out=np.zeros_like(sampledata), 200 where=np.asarray(sampledata) != 0, 201 dtype='float', 202 ).tolist(), 203 ) 204 elif modifierspec['type'] == 'shapesys': 205 attrs['ConstraintType'] = 'Poisson' 206 attrs['HistoName'] = _make_hist_name( 207 channelname, samplename, modifierspec['name'] 208 ) 209 # need to make this a relative uncertainty stored in ROOT file 210 _export_root_histogram( 211 attrs['HistoName'], 212 [ 213 np.divide( 214 a, b, out=np.zeros_like(a), where=np.asarray(b) != 0, dtype='float' 215 ) 216 for a, b in np.array( 217 (modifierspec['data'], sampledata), dtype="float" 218 ).T 219 ], 220 ) 221 elif modifierspec['type'] == 'shapefactor': 222 pass 223 else: 224 log.warning( 225 f"Skipping modifier {modifierspec['name']}({modifierspec['type']}) for now" 226 ) 227 return None 228 229 modifier = ET.Element(mod_map[modifierspec['type']], **attrs) 230 return modifier 231 232 233 def build_sample(spec, samplespec, channelname): 234 histname = _make_hist_name(channelname, samplespec['name']) 235 attrs = { 236 'Name': samplespec['name'], 237 'HistoName': histname, 238 'InputFile': _ROOT_DATA_FILE.file_path, 239 'NormalizeByTheory': 'False', 240 } 241 sample = ET.Element('Sample', **attrs) 242 for modspec in samplespec['modifiers']: 243 # if lumi modifier added for this sample, need to set NormalizeByTheory 244 if modspec['type'] == 'lumi': 245 sample.attrib.update({'NormalizeByTheory': 'True'}) 246 modifier = build_modifier( 247 spec, modspec, channelname, samplespec['name'], samplespec['data'] 248 ) 249 if modifier is not None: 250 sample.append(modifier) 251 _export_root_histogram(histname, samplespec['data']) 252 return sample 253 254 255 def build_data(obsspec, channelname): 256 histname = _make_hist_name(channelname, 'data') 257 data = 
ET.Element('Data', HistoName=histname, InputFile=_ROOT_DATA_FILE.file_path) 258 259 observation = next((obs for obs in obsspec if obs['name'] == channelname), None) 260 _export_root_histogram(histname, observation['data']) 261 return data 262 263 264 def build_channel(spec, channelspec, obsspec): 265 channel = ET.Element( 266 'Channel', Name=channelspec['name'], InputFile=_ROOT_DATA_FILE.file_path 267 ) 268 if obsspec: 269 data = build_data(obsspec, channelspec['name']) 270 channel.append(data) 271 for samplespec in channelspec['samples']: 272 channel.append(build_sample(spec, samplespec, channelspec['name'])) 273 return channel 274 275 276 def writexml(spec, specdir, data_rootdir, resultprefix): 277 global _ROOT_DATA_FILE 278 279 shutil.copyfile( 280 schema_path.joinpath('HistFactorySchema.dtd'), 281 Path(specdir).parent.joinpath('HistFactorySchema.dtd'), 282 ) 283 combination = ET.Element( 284 "Combination", OutputFilePrefix=str(Path(specdir).joinpath(resultprefix)) 285 ) 286 287 with uproot.recreate(Path(data_rootdir).joinpath('data.root')) as _ROOT_DATA_FILE: 288 for channelspec in spec['channels']: 289 channelfilename = str( 290 Path(specdir).joinpath(f'{resultprefix}_{channelspec["name"]}.xml') 291 ) 292 with open(channelfilename, "w", encoding="utf-8") as channelfile: 293 channel = build_channel(spec, channelspec, spec.get('observations')) 294 indent(channel) 295 channelfile.write( 296 "<!DOCTYPE Channel SYSTEM '../HistFactorySchema.dtd'>\n\n" 297 ) 298 channelfile.write( 299 ET.tostring(channel, encoding='utf-8').decode('utf-8') 300 ) 301 302 inp = ET.Element("Input") 303 inp.text = channelfilename 304 combination.append(inp) 305 306 # need information about modifier types to get the right prefix in measurement 307 mixin = _ChannelSummaryMixin(channels=spec['channels']) 308 309 for measurement in spec['measurements']: 310 combination.append(build_measurement(measurement, dict(mixin.modifiers))) 311 indent(combination) 312 return b"<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>\n\n" + ET.tostring( 313 combination, encoding='utf-8' 314 ) 315 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/pyhf/writexml.py b/src/pyhf/writexml.py --- a/src/pyhf/writexml.py +++ b/src/pyhf/writexml.py @@ -56,7 +56,7 @@ # https://stackoverflow.com/a/4590052 def indent(elem, level=0): i = "\n" + level * " " - if elem: + if elem is not None: if not elem.text or not elem.text.strip(): elem.text = i + " " if not elem.tail or not elem.tail.strip():
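The entire fix is `if elem:` becoming `if elem is not None:` inside `indent`. The deprecation it avoids can be observed with nothing but the standard library; on Python 3.12 the `except` branch below triggers, on earlier versions the truth test silently evaluates to `False`:

```python
import warnings
import xml.etree.ElementTree as ET

leaf = ET.Element("POI")  # no subelements, so it tests as False

with warnings.catch_warnings():
    warnings.simplefilter("error")  # promote the warning to an error
    try:
        bool(leaf)  # the same test `if leaf:` performs implicitly
        print("no warning: Python < 3.12")
    except DeprecationWarning as err:
        print("Python 3.12:", err)

# The patched, version-independent tests:
assert leaf is not None   # "was the element found?"
assert len(leaf) == 0     # "does it have subelements?"
```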
{"golden_diff": "diff --git a/src/pyhf/writexml.py b/src/pyhf/writexml.py\n--- a/src/pyhf/writexml.py\n+++ b/src/pyhf/writexml.py\n@@ -56,7 +56,7 @@\n # https://stackoverflow.com/a/4590052\n def indent(elem, level=0):\n i = \"\\n\" + level * \" \"\n- if elem:\n+ if elem is not None:\n if not elem.text or not elem.text.strip():\n elem.text = i + \" \"\n if not elem.tail or not elem.tail.strip():\n", "issue": "In Python 3.12 xml.etree.ElementTree will raise DeprecationWarning: Testing an element's truth value will raise an exception in future versions\nWhile testing Python 3.12 in CI\r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/adddb0797c564a0158a8e2e69a58ee1f98604bf7/tests/test_export.py#L438-L450\r\n\r\nraised \r\n\r\n```pytb\r\n> assert channel\r\nE DeprecationWarning: Testing an element's truth value will raise an exception in future versions. Use specific 'len(elem)' or 'elem is not None' test instead.\r\n```\r\n\r\nThis comes from https://github.com/python/cpython/issues/83122 which landed in Python 3.12. This should get fixed before Python 3.12 support is added.\r\n\r\nFrom the Python 3.12 docs: https://docs.python.org/3.12/library/xml.etree.elementtree.html#element-objects\r\n\r\n> Caution: Elements with no subelements will test as `False`. Testing the truth value of an Element is deprecated and will raise an exception in Python 3.14. Use specific `len(elem)` or `elem is None` test instead.:\r\n>\r\n>\r\n> ```python\r\n> element = root.find('foo')\r\n> \r\n> if not element: # careful!\r\n> print(\"element not found, or element has no subelements\")\r\n> \r\n> if element is None:\r\n> print(\"element not found\")\r\n> ```\r\n>\r\n> _Changed in version 3.12_: Testing the truth value of an Element emits [DeprecationWarning](https://docs.python.org/3/library/exceptions.html#DeprecationWarning).\n", "before_files": [{"content": "import logging\n\nfrom pathlib import Path\nimport shutil\nimport xml.etree.ElementTree as ET\nimport numpy as np\n\nimport uproot\n\nfrom pyhf.mixins import _ChannelSummaryMixin\nfrom pyhf.schema import path as schema_path\n\n_ROOT_DATA_FILE = None\n\nlog = logging.getLogger(__name__)\n\n__all__ = [\n \"build_channel\",\n \"build_data\",\n \"build_measurement\",\n \"build_modifier\",\n \"build_sample\",\n \"indent\",\n]\n\n\ndef __dir__():\n return __all__\n\n\n# 'spec' gets passed through all functions as NormFactor is a unique case of having\n# parameter configurations stored at the modifier-definition-spec level. This means\n# that build_modifier() needs access to the measurements. 
The call stack is:\n#\n# writexml\n# ->build_channel\n# ->build_sample\n# ->build_modifier\n#\n# Therefore, 'spec' needs to be threaded through all these calls.\n\n\ndef _make_hist_name(channel, sample, modifier='', prefix='hist', suffix=''):\n middle = '_'.join(filter(lambda x: x, [channel, sample, modifier]))\n return f\"{prefix}{middle}{suffix}\"\n\n\ndef _export_root_histogram(hist_name, data):\n if hist_name in _ROOT_DATA_FILE:\n raise KeyError(f\"Duplicate key {hist_name} being written.\")\n _ROOT_DATA_FILE[hist_name] = uproot.to_writable(\n (np.asarray(data), np.arange(len(data) + 1))\n )\n\n\n# https://stackoverflow.com/a/4590052\ndef indent(elem, level=0):\n i = \"\\n\" + level * \" \"\n if elem:\n if not elem.text or not elem.text.strip():\n elem.text = i + \" \"\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n for subelem in elem:\n indent(subelem, level + 1)\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n else:\n if level and (not elem.tail or not elem.tail.strip()):\n elem.tail = i\n\n\ndef build_measurement(measurementspec, modifiertypes):\n \"\"\"\n Build the XML measurement specification for a given measurement adhering to defs.json/#definitions/measurement.\n\n Args:\n measurementspec (:obj:`dict`): The measurements specification from a :class:`~pyhf.workspace.Workspace`.\n modifiertypes (:obj:`dict`): A mapping from modifier name (:obj:`str`) to modifier type (:obj:`str`).\n\n Returns:\n :class:`xml.etree.cElementTree.Element`: The XML measurement specification.\n\n \"\"\"\n # need to determine prefixes\n prefixes = {\n 'normsys': 'alpha_',\n 'histosys': 'alpha_',\n 'shapesys': 'gamma_',\n 'staterror': 'gamma_',\n }\n\n config = measurementspec['config']\n name = measurementspec['name']\n poi = config['poi']\n\n # we want to know which parameters are fixed (constant)\n # and to additionally extract the luminosity information\n fixed_params = []\n lumi = 1.0\n lumierr = 0.0\n for parameter in config['parameters']:\n if parameter.get('fixed', False):\n pname = parameter['name']\n if pname == 'lumi':\n fixed_params.append('Lumi')\n else:\n prefix = prefixes.get(modifiertypes[pname], '')\n fixed_params.append(f'{prefix}{pname}')\n # we found luminosity, so handle it\n if parameter['name'] == 'lumi':\n lumi = parameter['auxdata'][0]\n lumierr = parameter['sigmas'][0]\n\n # define measurement\n meas = ET.Element(\n \"Measurement\",\n Name=name,\n Lumi=str(lumi),\n LumiRelErr=str(lumierr),\n ExportOnly=str(True),\n )\n poiel = ET.Element('POI')\n poiel.text = poi\n meas.append(poiel)\n\n # add fixed parameters (constant)\n if fixed_params:\n se = ET.Element('ParamSetting', Const='True')\n se.text = ' '.join(fixed_params)\n meas.append(se)\n return meas\n\n\ndef build_modifier(spec, modifierspec, channelname, samplename, sampledata):\n if modifierspec['name'] == 'lumi':\n return None\n mod_map = {\n 'histosys': 'HistoSys',\n 'staterror': 'StatError',\n 'normsys': 'OverallSys',\n 'shapesys': 'ShapeSys',\n 'normfactor': 'NormFactor',\n 'shapefactor': 'ShapeFactor',\n }\n\n attrs = {'Name': modifierspec['name']}\n if modifierspec['type'] == 'histosys':\n attrs['HistoNameLow'] = _make_hist_name(\n channelname, samplename, modifierspec['name'], suffix='Low'\n )\n attrs['HistoNameHigh'] = _make_hist_name(\n channelname, samplename, modifierspec['name'], suffix='High'\n )\n _export_root_histogram(attrs['HistoNameLow'], modifierspec['data']['lo_data'])\n _export_root_histogram(attrs['HistoNameHigh'], modifierspec['data']['hi_data'])\n elif 
modifierspec['type'] == 'normsys':\n attrs['High'] = str(modifierspec['data']['hi'])\n attrs['Low'] = str(modifierspec['data']['lo'])\n elif modifierspec['type'] == 'normfactor':\n # NB: only look at first measurement for normfactor configs. In order\n # to dump as HistFactory XML, this has to be the same for all\n # measurements or it will not work correctly. Why?\n #\n # Unlike other modifiers, NormFactor has the unique circumstance of\n # defining its parameter configurations at the modifier level inside\n # the channel specification, instead of at the measurement level, like\n # all of the other modifiers.\n #\n # However, since I strive for perfection, the \"Const\" attribute will\n # never be set here, but at the per-measurement configuration instead\n # like all other parameters. This is an acceptable compromise.\n #\n # Lastly, if a normfactor parameter configuration doesn't exist in the\n # first measurement parameter configuration, then set defaults.\n val = 1\n low = 0\n high = 10\n for p in spec['measurements'][0]['config']['parameters']:\n if p['name'] == modifierspec['name']:\n val = p.get('inits', [val])[0]\n low, high = p.get('bounds', [[low, high]])[0]\n attrs['Val'] = str(val)\n attrs['Low'] = str(low)\n attrs['High'] = str(high)\n elif modifierspec['type'] == 'staterror':\n attrs['Activate'] = 'True'\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n # must be deleted, HiFa XML specification does not support 'Name'\n del attrs['Name']\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'],\n np.divide(\n modifierspec['data'],\n sampledata,\n out=np.zeros_like(sampledata),\n where=np.asarray(sampledata) != 0,\n dtype='float',\n ).tolist(),\n )\n elif modifierspec['type'] == 'shapesys':\n attrs['ConstraintType'] = 'Poisson'\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'],\n [\n np.divide(\n a, b, out=np.zeros_like(a), where=np.asarray(b) != 0, dtype='float'\n )\n for a, b in np.array(\n (modifierspec['data'], sampledata), dtype=\"float\"\n ).T\n ],\n )\n elif modifierspec['type'] == 'shapefactor':\n pass\n else:\n log.warning(\n f\"Skipping modifier {modifierspec['name']}({modifierspec['type']}) for now\"\n )\n return None\n\n modifier = ET.Element(mod_map[modifierspec['type']], **attrs)\n return modifier\n\n\ndef build_sample(spec, samplespec, channelname):\n histname = _make_hist_name(channelname, samplespec['name'])\n attrs = {\n 'Name': samplespec['name'],\n 'HistoName': histname,\n 'InputFile': _ROOT_DATA_FILE.file_path,\n 'NormalizeByTheory': 'False',\n }\n sample = ET.Element('Sample', **attrs)\n for modspec in samplespec['modifiers']:\n # if lumi modifier added for this sample, need to set NormalizeByTheory\n if modspec['type'] == 'lumi':\n sample.attrib.update({'NormalizeByTheory': 'True'})\n modifier = build_modifier(\n spec, modspec, channelname, samplespec['name'], samplespec['data']\n )\n if modifier is not None:\n sample.append(modifier)\n _export_root_histogram(histname, samplespec['data'])\n return sample\n\n\ndef build_data(obsspec, channelname):\n histname = _make_hist_name(channelname, 'data')\n data = ET.Element('Data', HistoName=histname, InputFile=_ROOT_DATA_FILE.file_path)\n\n observation = next((obs for obs in obsspec if obs['name'] == channelname), None)\n _export_root_histogram(histname, 
observation['data'])\n return data\n\n\ndef build_channel(spec, channelspec, obsspec):\n channel = ET.Element(\n 'Channel', Name=channelspec['name'], InputFile=_ROOT_DATA_FILE.file_path\n )\n if obsspec:\n data = build_data(obsspec, channelspec['name'])\n channel.append(data)\n for samplespec in channelspec['samples']:\n channel.append(build_sample(spec, samplespec, channelspec['name']))\n return channel\n\n\ndef writexml(spec, specdir, data_rootdir, resultprefix):\n global _ROOT_DATA_FILE\n\n shutil.copyfile(\n schema_path.joinpath('HistFactorySchema.dtd'),\n Path(specdir).parent.joinpath('HistFactorySchema.dtd'),\n )\n combination = ET.Element(\n \"Combination\", OutputFilePrefix=str(Path(specdir).joinpath(resultprefix))\n )\n\n with uproot.recreate(Path(data_rootdir).joinpath('data.root')) as _ROOT_DATA_FILE:\n for channelspec in spec['channels']:\n channelfilename = str(\n Path(specdir).joinpath(f'{resultprefix}_{channelspec[\"name\"]}.xml')\n )\n with open(channelfilename, \"w\", encoding=\"utf-8\") as channelfile:\n channel = build_channel(spec, channelspec, spec.get('observations'))\n indent(channel)\n channelfile.write(\n \"<!DOCTYPE Channel SYSTEM '../HistFactorySchema.dtd'>\\n\\n\"\n )\n channelfile.write(\n ET.tostring(channel, encoding='utf-8').decode('utf-8')\n )\n\n inp = ET.Element(\"Input\")\n inp.text = channelfilename\n combination.append(inp)\n\n # need information about modifier types to get the right prefix in measurement\n mixin = _ChannelSummaryMixin(channels=spec['channels'])\n\n for measurement in spec['measurements']:\n combination.append(build_measurement(measurement, dict(mixin.modifiers)))\n indent(combination)\n return b\"<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>\\n\\n\" + ET.tostring(\n combination, encoding='utf-8'\n )\n", "path": "src/pyhf/writexml.py"}], "after_files": [{"content": "import logging\n\nfrom pathlib import Path\nimport shutil\nimport xml.etree.ElementTree as ET\nimport numpy as np\n\nimport uproot\n\nfrom pyhf.mixins import _ChannelSummaryMixin\nfrom pyhf.schema import path as schema_path\n\n_ROOT_DATA_FILE = None\n\nlog = logging.getLogger(__name__)\n\n__all__ = [\n \"build_channel\",\n \"build_data\",\n \"build_measurement\",\n \"build_modifier\",\n \"build_sample\",\n \"indent\",\n]\n\n\ndef __dir__():\n return __all__\n\n\n# 'spec' gets passed through all functions as NormFactor is a unique case of having\n# parameter configurations stored at the modifier-definition-spec level. This means\n# that build_modifier() needs access to the measurements. 
The call stack is:\n#\n# writexml\n# ->build_channel\n# ->build_sample\n# ->build_modifier\n#\n# Therefore, 'spec' needs to be threaded through all these calls.\n\n\ndef _make_hist_name(channel, sample, modifier='', prefix='hist', suffix=''):\n middle = '_'.join(filter(lambda x: x, [channel, sample, modifier]))\n return f\"{prefix}{middle}{suffix}\"\n\n\ndef _export_root_histogram(hist_name, data):\n if hist_name in _ROOT_DATA_FILE:\n raise KeyError(f\"Duplicate key {hist_name} being written.\")\n _ROOT_DATA_FILE[hist_name] = uproot.to_writable(\n (np.asarray(data), np.arange(len(data) + 1))\n )\n\n\n# https://stackoverflow.com/a/4590052\ndef indent(elem, level=0):\n i = \"\\n\" + level * \" \"\n if elem is not None:\n if not elem.text or not elem.text.strip():\n elem.text = i + \" \"\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n for subelem in elem:\n indent(subelem, level + 1)\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n else:\n if level and (not elem.tail or not elem.tail.strip()):\n elem.tail = i\n\n\ndef build_measurement(measurementspec, modifiertypes):\n \"\"\"\n Build the XML measurement specification for a given measurement adhering to defs.json/#definitions/measurement.\n\n Args:\n measurementspec (:obj:`dict`): The measurements specification from a :class:`~pyhf.workspace.Workspace`.\n modifiertypes (:obj:`dict`): A mapping from modifier name (:obj:`str`) to modifier type (:obj:`str`).\n\n Returns:\n :class:`xml.etree.cElementTree.Element`: The XML measurement specification.\n\n \"\"\"\n # need to determine prefixes\n prefixes = {\n 'normsys': 'alpha_',\n 'histosys': 'alpha_',\n 'shapesys': 'gamma_',\n 'staterror': 'gamma_',\n }\n\n config = measurementspec['config']\n name = measurementspec['name']\n poi = config['poi']\n\n # we want to know which parameters are fixed (constant)\n # and to additionally extract the luminosity information\n fixed_params = []\n lumi = 1.0\n lumierr = 0.0\n for parameter in config['parameters']:\n if parameter.get('fixed', False):\n pname = parameter['name']\n if pname == 'lumi':\n fixed_params.append('Lumi')\n else:\n prefix = prefixes.get(modifiertypes[pname], '')\n fixed_params.append(f'{prefix}{pname}')\n # we found luminosity, so handle it\n if parameter['name'] == 'lumi':\n lumi = parameter['auxdata'][0]\n lumierr = parameter['sigmas'][0]\n\n # define measurement\n meas = ET.Element(\n \"Measurement\",\n Name=name,\n Lumi=str(lumi),\n LumiRelErr=str(lumierr),\n ExportOnly=str(True),\n )\n poiel = ET.Element('POI')\n poiel.text = poi\n meas.append(poiel)\n\n # add fixed parameters (constant)\n if fixed_params:\n se = ET.Element('ParamSetting', Const='True')\n se.text = ' '.join(fixed_params)\n meas.append(se)\n return meas\n\n\ndef build_modifier(spec, modifierspec, channelname, samplename, sampledata):\n if modifierspec['name'] == 'lumi':\n return None\n mod_map = {\n 'histosys': 'HistoSys',\n 'staterror': 'StatError',\n 'normsys': 'OverallSys',\n 'shapesys': 'ShapeSys',\n 'normfactor': 'NormFactor',\n 'shapefactor': 'ShapeFactor',\n }\n\n attrs = {'Name': modifierspec['name']}\n if modifierspec['type'] == 'histosys':\n attrs['HistoNameLow'] = _make_hist_name(\n channelname, samplename, modifierspec['name'], suffix='Low'\n )\n attrs['HistoNameHigh'] = _make_hist_name(\n channelname, samplename, modifierspec['name'], suffix='High'\n )\n _export_root_histogram(attrs['HistoNameLow'], modifierspec['data']['lo_data'])\n _export_root_histogram(attrs['HistoNameHigh'], modifierspec['data']['hi_data'])\n elif 
modifierspec['type'] == 'normsys':\n attrs['High'] = str(modifierspec['data']['hi'])\n attrs['Low'] = str(modifierspec['data']['lo'])\n elif modifierspec['type'] == 'normfactor':\n # NB: only look at first measurement for normfactor configs. In order\n # to dump as HistFactory XML, this has to be the same for all\n # measurements or it will not work correctly. Why?\n #\n # Unlike other modifiers, NormFactor has the unique circumstance of\n # defining its parameter configurations at the modifier level inside\n # the channel specification, instead of at the measurement level, like\n # all of the other modifiers.\n #\n # However, since I strive for perfection, the \"Const\" attribute will\n # never be set here, but at the per-measurement configuration instead\n # like all other parameters. This is an acceptable compromise.\n #\n # Lastly, if a normfactor parameter configuration doesn't exist in the\n # first measurement parameter configuration, then set defaults.\n val = 1\n low = 0\n high = 10\n for p in spec['measurements'][0]['config']['parameters']:\n if p['name'] == modifierspec['name']:\n val = p.get('inits', [val])[0]\n low, high = p.get('bounds', [[low, high]])[0]\n attrs['Val'] = str(val)\n attrs['Low'] = str(low)\n attrs['High'] = str(high)\n elif modifierspec['type'] == 'staterror':\n attrs['Activate'] = 'True'\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n # must be deleted, HiFa XML specification does not support 'Name'\n del attrs['Name']\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'],\n np.divide(\n modifierspec['data'],\n sampledata,\n out=np.zeros_like(sampledata),\n where=np.asarray(sampledata) != 0,\n dtype='float',\n ).tolist(),\n )\n elif modifierspec['type'] == 'shapesys':\n attrs['ConstraintType'] = 'Poisson'\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'],\n [\n np.divide(\n a, b, out=np.zeros_like(a), where=np.asarray(b) != 0, dtype='float'\n )\n for a, b in np.array(\n (modifierspec['data'], sampledata), dtype=\"float\"\n ).T\n ],\n )\n elif modifierspec['type'] == 'shapefactor':\n pass\n else:\n log.warning(\n f\"Skipping modifier {modifierspec['name']}({modifierspec['type']}) for now\"\n )\n return None\n\n modifier = ET.Element(mod_map[modifierspec['type']], **attrs)\n return modifier\n\n\ndef build_sample(spec, samplespec, channelname):\n histname = _make_hist_name(channelname, samplespec['name'])\n attrs = {\n 'Name': samplespec['name'],\n 'HistoName': histname,\n 'InputFile': _ROOT_DATA_FILE.file_path,\n 'NormalizeByTheory': 'False',\n }\n sample = ET.Element('Sample', **attrs)\n for modspec in samplespec['modifiers']:\n # if lumi modifier added for this sample, need to set NormalizeByTheory\n if modspec['type'] == 'lumi':\n sample.attrib.update({'NormalizeByTheory': 'True'})\n modifier = build_modifier(\n spec, modspec, channelname, samplespec['name'], samplespec['data']\n )\n if modifier is not None:\n sample.append(modifier)\n _export_root_histogram(histname, samplespec['data'])\n return sample\n\n\ndef build_data(obsspec, channelname):\n histname = _make_hist_name(channelname, 'data')\n data = ET.Element('Data', HistoName=histname, InputFile=_ROOT_DATA_FILE.file_path)\n\n observation = next((obs for obs in obsspec if obs['name'] == channelname), None)\n _export_root_histogram(histname, 
observation['data'])\n return data\n\n\ndef build_channel(spec, channelspec, obsspec):\n channel = ET.Element(\n 'Channel', Name=channelspec['name'], InputFile=_ROOT_DATA_FILE.file_path\n )\n if obsspec:\n data = build_data(obsspec, channelspec['name'])\n channel.append(data)\n for samplespec in channelspec['samples']:\n channel.append(build_sample(spec, samplespec, channelspec['name']))\n return channel\n\n\ndef writexml(spec, specdir, data_rootdir, resultprefix):\n global _ROOT_DATA_FILE\n\n shutil.copyfile(\n schema_path.joinpath('HistFactorySchema.dtd'),\n Path(specdir).parent.joinpath('HistFactorySchema.dtd'),\n )\n combination = ET.Element(\n \"Combination\", OutputFilePrefix=str(Path(specdir).joinpath(resultprefix))\n )\n\n with uproot.recreate(Path(data_rootdir).joinpath('data.root')) as _ROOT_DATA_FILE:\n for channelspec in spec['channels']:\n channelfilename = str(\n Path(specdir).joinpath(f'{resultprefix}_{channelspec[\"name\"]}.xml')\n )\n with open(channelfilename, \"w\", encoding=\"utf-8\") as channelfile:\n channel = build_channel(spec, channelspec, spec.get('observations'))\n indent(channel)\n channelfile.write(\n \"<!DOCTYPE Channel SYSTEM '../HistFactorySchema.dtd'>\\n\\n\"\n )\n channelfile.write(\n ET.tostring(channel, encoding='utf-8').decode('utf-8')\n )\n\n inp = ET.Element(\"Input\")\n inp.text = channelfilename\n combination.append(inp)\n\n # need information about modifier types to get the right prefix in measurement\n mixin = _ChannelSummaryMixin(channels=spec['channels'])\n\n for measurement in spec['measurements']:\n combination.append(build_measurement(measurement, dict(mixin.modifiers)))\n indent(combination)\n return b\"<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>\\n\\n\" + ET.tostring(\n combination, encoding='utf-8'\n )\n", "path": "src/pyhf/writexml.py"}]}
4,071
135
gh_patches_debug_41073
rasdani/github-patches
git_diff
PaddlePaddle__PaddleSeg-1747
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- paddleseg/models/hrnet_contrast.py 中没有执行 init_weight paddleseg/models/hrnet_contrast.py 中__init__()没有执行 init_weight,导致hrnet_w48_contrast 没法加载完整的模型 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `paddleseg/models/hrnet_contrast.py` Content: ``` 1 # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import paddle 16 import paddle.nn as nn 17 import paddle.nn.functional as F 18 19 from paddleseg.cvlibs import manager 20 from paddleseg.models import layers 21 from paddleseg.utils import utils 22 23 24 @manager.MODELS.add_component 25 class HRNetW48Contrast(nn.Layer): 26 """ 27 The HRNetW48Contrast implementation based on PaddlePaddle. 28 29 The original article refers to 30 Wenguan Wang, Tianfei Zhou, et al. "Exploring Cross-Image Pixel Contrast for Semantic Segmentation" 31 (https://arxiv.org/abs/2101.11939). 32 33 Args: 34 in_channels (int): The output dimensions of backbone. 35 num_classes (int): The unique number of target classes. 36 backbone (Paddle.nn.Layer): Backbone network, currently support HRNet_W48. 37 drop_prob (float): The probability of dropout. 38 proj_dim (int): The projection dimensions. 39 align_corners (bool, optional): An argument of F.interpolate. It should be set to False when the feature size is even, 40 e.g. 1024x512, otherwise it is True, e.g. 769x769. Default: False. 41 pretrained (str, optional): The path or url of pretrained model. Default: None. 
42 """ 43 def __init__(self, 44 in_channels, 45 num_classes, 46 backbone, 47 drop_prob, 48 proj_dim, 49 align_corners=False, 50 pretrained=None): 51 super().__init__() 52 self.in_channels = in_channels 53 self.backbone = backbone 54 self.num_classes = num_classes 55 self.proj_dim = proj_dim 56 self.align_corners = align_corners 57 self.pretrained = pretrained 58 59 self.cls_head = nn.Sequential( 60 layers.ConvBNReLU(in_channels, 61 in_channels, 62 kernel_size=3, 63 stride=1, 64 padding=1), 65 nn.Dropout2D(drop_prob), 66 nn.Conv2D(in_channels, 67 num_classes, 68 kernel_size=1, 69 stride=1, 70 bias_attr=False), 71 ) 72 self.proj_head = ProjectionHead(dim_in=in_channels, 73 proj_dim=self.proj_dim) 74 75 def init_weight(self): 76 if self.pretrained is not None: 77 utils.load_entire_model(self, self.pretrained) 78 79 def forward(self, x): 80 feats = self.backbone(x)[0] 81 out = self.cls_head(feats) 82 logit_list = [] 83 if self.training: 84 emb = self.proj_head(feats) 85 logit_list.append( 86 F.interpolate(out, 87 paddle.shape(x)[2:], 88 mode='bilinear', 89 align_corners=self.align_corners)) 90 logit_list.append({'seg': out, 'embed': emb}) 91 else: 92 logit_list.append( 93 F.interpolate(out, 94 paddle.shape(x)[2:], 95 mode='bilinear', 96 align_corners=self.align_corners)) 97 return logit_list 98 99 100 class ProjectionHead(nn.Layer): 101 """ 102 The projection head used by contrast learning. 103 Args: 104 dim_in (int): The dimensions of input features. 105 proj_dim (int, optional): The output dimensions of projection head. Default: 256. 106 proj (str, optional): The type of projection head, only support 'linear' and 'convmlp'. Default: 'convmlp'. 107 """ 108 def __init__(self, dim_in, proj_dim=256, proj='convmlp'): 109 super(ProjectionHead, self).__init__() 110 if proj == 'linear': 111 self.proj = nn.Conv2D(dim_in, proj_dim, kernel_size=1) 112 elif proj == 'convmlp': 113 self.proj = nn.Sequential( 114 layers.ConvBNReLU(dim_in, dim_in, kernel_size=1), 115 nn.Conv2D(dim_in, proj_dim, kernel_size=1), 116 ) 117 else: 118 raise ValueError( 119 "The type of project head only support 'linear' and 'convmlp', but got {}." 120 .format(proj)) 121 122 def forward(self, x): 123 return F.normalize(self.proj(x), p=2, axis=1) 124 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/paddleseg/models/hrnet_contrast.py b/paddleseg/models/hrnet_contrast.py --- a/paddleseg/models/hrnet_contrast.py +++ b/paddleseg/models/hrnet_contrast.py @@ -40,6 +40,7 @@ e.g. 1024x512, otherwise it is True, e.g. 769x769. Default: False. pretrained (str, optional): The path or url of pretrained model. Default: None. """ + def __init__(self, in_channels, num_classes, @@ -54,23 +55,23 @@ self.num_classes = num_classes self.proj_dim = proj_dim self.align_corners = align_corners - self.pretrained = pretrained self.cls_head = nn.Sequential( - layers.ConvBNReLU(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1), + layers.ConvBNReLU( + in_channels, in_channels, kernel_size=3, stride=1, padding=1), nn.Dropout2D(drop_prob), - nn.Conv2D(in_channels, - num_classes, - kernel_size=1, - stride=1, - bias_attr=False), + nn.Conv2D( + in_channels, + num_classes, + kernel_size=1, + stride=1, + bias_attr=False), ) - self.proj_head = ProjectionHead(dim_in=in_channels, - proj_dim=self.proj_dim) + self.proj_head = ProjectionHead( + dim_in=in_channels, proj_dim=self.proj_dim) + + self.pretrained = pretrained + self.init_weight() def init_weight(self): if self.pretrained is not None: @@ -83,17 +84,19 @@ if self.training: emb = self.proj_head(feats) logit_list.append( - F.interpolate(out, - paddle.shape(x)[2:], - mode='bilinear', - align_corners=self.align_corners)) + F.interpolate( + out, + paddle.shape(x)[2:], + mode='bilinear', + align_corners=self.align_corners)) logit_list.append({'seg': out, 'embed': emb}) else: logit_list.append( - F.interpolate(out, - paddle.shape(x)[2:], - mode='bilinear', - align_corners=self.align_corners)) + F.interpolate( + out, + paddle.shape(x)[2:], + mode='bilinear', + align_corners=self.align_corners)) return logit_list @@ -105,6 +108,7 @@ proj_dim (int, optional): The output dimensions of projection head. Default: 256. proj (str, optional): The type of projection head, only support 'linear' and 'convmlp'. Default: 'convmlp'. """ + def __init__(self, dim_in, proj_dim=256, proj='convmlp'): super(ProjectionHead, self).__init__() if proj == 'linear':
{"golden_diff": "diff --git a/paddleseg/models/hrnet_contrast.py b/paddleseg/models/hrnet_contrast.py\n--- a/paddleseg/models/hrnet_contrast.py\n+++ b/paddleseg/models/hrnet_contrast.py\n@@ -40,6 +40,7 @@\n e.g. 1024x512, otherwise it is True, e.g. 769x769. Default: False.\n pretrained (str, optional): The path or url of pretrained model. Default: None.\n \"\"\"\n+\n def __init__(self,\n in_channels,\n num_classes,\n@@ -54,23 +55,23 @@\n self.num_classes = num_classes\n self.proj_dim = proj_dim\n self.align_corners = align_corners\n- self.pretrained = pretrained\n \n self.cls_head = nn.Sequential(\n- layers.ConvBNReLU(in_channels,\n- in_channels,\n- kernel_size=3,\n- stride=1,\n- padding=1),\n+ layers.ConvBNReLU(\n+ in_channels, in_channels, kernel_size=3, stride=1, padding=1),\n nn.Dropout2D(drop_prob),\n- nn.Conv2D(in_channels,\n- num_classes,\n- kernel_size=1,\n- stride=1,\n- bias_attr=False),\n+ nn.Conv2D(\n+ in_channels,\n+ num_classes,\n+ kernel_size=1,\n+ stride=1,\n+ bias_attr=False),\n )\n- self.proj_head = ProjectionHead(dim_in=in_channels,\n- proj_dim=self.proj_dim)\n+ self.proj_head = ProjectionHead(\n+ dim_in=in_channels, proj_dim=self.proj_dim)\n+\n+ self.pretrained = pretrained\n+ self.init_weight()\n \n def init_weight(self):\n if self.pretrained is not None:\n@@ -83,17 +84,19 @@\n if self.training:\n emb = self.proj_head(feats)\n logit_list.append(\n- F.interpolate(out,\n- paddle.shape(x)[2:],\n- mode='bilinear',\n- align_corners=self.align_corners))\n+ F.interpolate(\n+ out,\n+ paddle.shape(x)[2:],\n+ mode='bilinear',\n+ align_corners=self.align_corners))\n logit_list.append({'seg': out, 'embed': emb})\n else:\n logit_list.append(\n- F.interpolate(out,\n- paddle.shape(x)[2:],\n- mode='bilinear',\n- align_corners=self.align_corners))\n+ F.interpolate(\n+ out,\n+ paddle.shape(x)[2:],\n+ mode='bilinear',\n+ align_corners=self.align_corners))\n return logit_list\n \n \n@@ -105,6 +108,7 @@\n proj_dim (int, optional): The output dimensions of projection head. Default: 256.\n proj (str, optional): The type of projection head, only support 'linear' and 'convmlp'. Default: 'convmlp'.\n \"\"\"\n+\n def __init__(self, dim_in, proj_dim=256, proj='convmlp'):\n super(ProjectionHead, self).__init__()\n if proj == 'linear':\n", "issue": "paddleseg/models/hrnet_contrast.py \u4e2d\u6ca1\u6709\u6267\u884c init_weight\npaddleseg/models/hrnet_contrast.py \u4e2d__init__()\u6ca1\u6709\u6267\u884c init_weight\uff0c\u5bfc\u81f4hrnet_w48_contrast \u6ca1\u6cd5\u52a0\u8f7d\u5b8c\u6574\u7684\u6a21\u578b\n", "before_files": [{"content": "# Copyright (c) 2021 PaddlePaddle Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport paddle\nimport paddle.nn as nn\nimport paddle.nn.functional as F\n\nfrom paddleseg.cvlibs import manager\nfrom paddleseg.models import layers\nfrom paddleseg.utils import utils\n\n\[email protected]_component\nclass HRNetW48Contrast(nn.Layer):\n \"\"\"\n The HRNetW48Contrast implementation based on PaddlePaddle.\n\n The original article refers to\n Wenguan Wang, Tianfei Zhou, et al. \"Exploring Cross-Image Pixel Contrast for Semantic Segmentation\"\n (https://arxiv.org/abs/2101.11939).\n\n Args:\n in_channels (int): The output dimensions of backbone.\n num_classes (int): The unique number of target classes.\n backbone (Paddle.nn.Layer): Backbone network, currently support HRNet_W48.\n drop_prob (float): The probability of dropout.\n proj_dim (int): The projection dimensions.\n align_corners (bool, optional): An argument of F.interpolate. It should be set to False when the feature size is even,\n e.g. 1024x512, otherwise it is True, e.g. 769x769. Default: False.\n pretrained (str, optional): The path or url of pretrained model. Default: None.\n \"\"\"\n def __init__(self,\n in_channels,\n num_classes,\n backbone,\n drop_prob,\n proj_dim,\n align_corners=False,\n pretrained=None):\n super().__init__()\n self.in_channels = in_channels\n self.backbone = backbone\n self.num_classes = num_classes\n self.proj_dim = proj_dim\n self.align_corners = align_corners\n self.pretrained = pretrained\n\n self.cls_head = nn.Sequential(\n layers.ConvBNReLU(in_channels,\n in_channels,\n kernel_size=3,\n stride=1,\n padding=1),\n nn.Dropout2D(drop_prob),\n nn.Conv2D(in_channels,\n num_classes,\n kernel_size=1,\n stride=1,\n bias_attr=False),\n )\n self.proj_head = ProjectionHead(dim_in=in_channels,\n proj_dim=self.proj_dim)\n\n def init_weight(self):\n if self.pretrained is not None:\n utils.load_entire_model(self, self.pretrained)\n\n def forward(self, x):\n feats = self.backbone(x)[0]\n out = self.cls_head(feats)\n logit_list = []\n if self.training:\n emb = self.proj_head(feats)\n logit_list.append(\n F.interpolate(out,\n paddle.shape(x)[2:],\n mode='bilinear',\n align_corners=self.align_corners))\n logit_list.append({'seg': out, 'embed': emb})\n else:\n logit_list.append(\n F.interpolate(out,\n paddle.shape(x)[2:],\n mode='bilinear',\n align_corners=self.align_corners))\n return logit_list\n\n\nclass ProjectionHead(nn.Layer):\n \"\"\"\n The projection head used by contrast learning.\n Args:\n dim_in (int): The dimensions of input features.\n proj_dim (int, optional): The output dimensions of projection head. Default: 256.\n proj (str, optional): The type of projection head, only support 'linear' and 'convmlp'. 
Default: 'convmlp'.\n \"\"\"\n def __init__(self, dim_in, proj_dim=256, proj='convmlp'):\n super(ProjectionHead, self).__init__()\n if proj == 'linear':\n self.proj = nn.Conv2D(dim_in, proj_dim, kernel_size=1)\n elif proj == 'convmlp':\n self.proj = nn.Sequential(\n layers.ConvBNReLU(dim_in, dim_in, kernel_size=1),\n nn.Conv2D(dim_in, proj_dim, kernel_size=1),\n )\n else:\n raise ValueError(\n \"The type of project head only support 'linear' and 'convmlp', but got {}.\"\n .format(proj))\n\n def forward(self, x):\n return F.normalize(self.proj(x), p=2, axis=1)\n", "path": "paddleseg/models/hrnet_contrast.py"}], "after_files": [{"content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport paddle\nimport paddle.nn as nn\nimport paddle.nn.functional as F\n\nfrom paddleseg.cvlibs import manager\nfrom paddleseg.models import layers\nfrom paddleseg.utils import utils\n\n\[email protected]_component\nclass HRNetW48Contrast(nn.Layer):\n \"\"\"\n The HRNetW48Contrast implementation based on PaddlePaddle.\n\n The original article refers to\n Wenguan Wang, Tianfei Zhou, et al. \"Exploring Cross-Image Pixel Contrast for Semantic Segmentation\"\n (https://arxiv.org/abs/2101.11939).\n\n Args:\n in_channels (int): The output dimensions of backbone.\n num_classes (int): The unique number of target classes.\n backbone (Paddle.nn.Layer): Backbone network, currently support HRNet_W48.\n drop_prob (float): The probability of dropout.\n proj_dim (int): The projection dimensions.\n align_corners (bool, optional): An argument of F.interpolate. It should be set to False when the feature size is even,\n e.g. 1024x512, otherwise it is True, e.g. 769x769. Default: False.\n pretrained (str, optional): The path or url of pretrained model. 
Default: None.\n \"\"\"\n\n def __init__(self,\n in_channels,\n num_classes,\n backbone,\n drop_prob,\n proj_dim,\n align_corners=False,\n pretrained=None):\n super().__init__()\n self.in_channels = in_channels\n self.backbone = backbone\n self.num_classes = num_classes\n self.proj_dim = proj_dim\n self.align_corners = align_corners\n\n self.cls_head = nn.Sequential(\n layers.ConvBNReLU(\n in_channels, in_channels, kernel_size=3, stride=1, padding=1),\n nn.Dropout2D(drop_prob),\n nn.Conv2D(\n in_channels,\n num_classes,\n kernel_size=1,\n stride=1,\n bias_attr=False),\n )\n self.proj_head = ProjectionHead(\n dim_in=in_channels, proj_dim=self.proj_dim)\n\n self.pretrained = pretrained\n self.init_weight()\n\n def init_weight(self):\n if self.pretrained is not None:\n utils.load_entire_model(self, self.pretrained)\n\n def forward(self, x):\n feats = self.backbone(x)[0]\n out = self.cls_head(feats)\n logit_list = []\n if self.training:\n emb = self.proj_head(feats)\n logit_list.append(\n F.interpolate(\n out,\n paddle.shape(x)[2:],\n mode='bilinear',\n align_corners=self.align_corners))\n logit_list.append({'seg': out, 'embed': emb})\n else:\n logit_list.append(\n F.interpolate(\n out,\n paddle.shape(x)[2:],\n mode='bilinear',\n align_corners=self.align_corners))\n return logit_list\n\n\nclass ProjectionHead(nn.Layer):\n \"\"\"\n The projection head used by contrast learning.\n Args:\n dim_in (int): The dimensions of input features.\n proj_dim (int, optional): The output dimensions of projection head. Default: 256.\n proj (str, optional): The type of projection head, only support 'linear' and 'convmlp'. Default: 'convmlp'.\n \"\"\"\n\n def __init__(self, dim_in, proj_dim=256, proj='convmlp'):\n super(ProjectionHead, self).__init__()\n if proj == 'linear':\n self.proj = nn.Conv2D(dim_in, proj_dim, kernel_size=1)\n elif proj == 'convmlp':\n self.proj = nn.Sequential(\n layers.ConvBNReLU(dim_in, dim_in, kernel_size=1),\n nn.Conv2D(dim_in, proj_dim, kernel_size=1),\n )\n else:\n raise ValueError(\n \"The type of project head only support 'linear' and 'convmlp', but got {}.\"\n .format(proj))\n\n def forward(self, x):\n return F.normalize(self.proj(x), p=2, axis=1)\n", "path": "paddleseg/models/hrnet_contrast.py"}]}
1,602
701
gh_patches_debug_13422
rasdani/github-patches
git_diff
urllib3__urllib3-1399
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Interrupted system call while profiling with plop Using Python 2.7.12, requests 2.19.0, urllib3 1.23 and using [plop](https://github.com/bdarnell/plop) for profiling, I'm intermittently hitting this stack trace in long-running code: ``` File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/requests/sessions.py", line 525, in get return self.request('GET', url, **kwargs) File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/requests/sessions.py", line 512, in request resp = self.send(prep, **send_kwargs) File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/requests/sessions.py", line 622, in send r = adapter.send(request, **kwargs) File "/home/bmerry/work/sdp/git/katdal/katdal/chunkstore_s3.py", line 56, in send return super(_TimeoutHTTPAdapter, self).send(request, stream, timeout, *args, **kwargs) File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/requests/adapters.py", line 445, in send timeout=timeout File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/connectionpool.py", line 588, in urlopen conn = self._get_conn(timeout=pool_timeout) File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/connectionpool.py", line 239, in _get_conn if conn and is_connection_dropped(conn): File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/connection.py", line 23, in is_connection_dropped return wait_for_read(sock, timeout=0.0) File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/wait.py", line 146, in wait_for_read return wait_for_socket(sock, read=True, timeout=timeout) File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/wait.py", line 107, in poll_wait_for_socket return bool(_retry_on_intr(do_poll, timeout)) File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/wait.py", line 47, in _retry_on_intr return fn(timeout) File "/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/wait.py", line 105, in do_poll return poll_obj.poll(t) select.error: (4, 'Interrupted system call') Profiling timer expired ``` Looking at the implementation of `_retry_on_intr` for older Pythons, it has this special case: ```python if timeout is not None and timeout <= 0: return fn(timeout) ``` which in turn seems to apply to the call stack above (see is_connection_dropped, which passes a timeout of 0.0). So apparently there are cases where poll can fail with EINTR even with a zero timeout. FWIW, I'm running Ubuntu 16.04 and Linux 4.4.0-116-generic. I'll try commenting out that fast path and doing some testing overnight to confirm that that is the problem. I don't yet had a minimal reproducible example, but I'll work on it (my first attempt of just banging on some URL in a loop hasn't worked). I wanted to file this before I forgot. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `urllib3/util/wait.py` Content: ``` 1 import errno 2 from functools import partial 3 import select 4 import sys 5 try: 6 from time import monotonic 7 except ImportError: 8 from time import time as monotonic 9 10 __all__ = ["NoWayToWaitForSocketError", "wait_for_read", "wait_for_write"] 11 12 13 class NoWayToWaitForSocketError(Exception): 14 pass 15 16 17 # How should we wait on sockets? 
18 # 19 # There are two types of APIs you can use for waiting on sockets: the fancy 20 # modern stateful APIs like epoll/kqueue, and the older stateless APIs like 21 # select/poll. The stateful APIs are more efficient when you have a lots of 22 # sockets to keep track of, because you can set them up once and then use them 23 # lots of times. But we only ever want to wait on a single socket at a time 24 # and don't want to keep track of state, so the stateless APIs are actually 25 # more efficient. So we want to use select() or poll(). 26 # 27 # Now, how do we choose between select() and poll()? On traditional Unixes, 28 # select() has a strange calling convention that makes it slow, or fail 29 # altogether, for high-numbered file descriptors. The point of poll() is to fix 30 # that, so on Unixes, we prefer poll(). 31 # 32 # On Windows, there is no poll() (or at least Python doesn't provide a wrapper 33 # for it), but that's OK, because on Windows, select() doesn't have this 34 # strange calling convention; plain select() works fine. 35 # 36 # So: on Windows we use select(), and everywhere else we use poll(). We also 37 # fall back to select() in case poll() is somehow broken or missing. 38 39 if sys.version_info >= (3, 5): 40 # Modern Python, that retries syscalls by default 41 def _retry_on_intr(fn, timeout): 42 return fn(timeout) 43 else: 44 # Old and broken Pythons. 45 def _retry_on_intr(fn, timeout): 46 if timeout is not None and timeout <= 0: 47 return fn(timeout) 48 49 if timeout is None: 50 deadline = float("inf") 51 else: 52 deadline = monotonic() + timeout 53 54 while True: 55 try: 56 return fn(timeout) 57 # OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7 58 except (OSError, select.error) as e: 59 # 'e.args[0]' incantation works for both OSError and select.error 60 if e.args[0] != errno.EINTR: 61 raise 62 else: 63 timeout = deadline - monotonic() 64 if timeout < 0: 65 timeout = 0 66 if timeout == float("inf"): 67 timeout = None 68 continue 69 70 71 def select_wait_for_socket(sock, read=False, write=False, timeout=None): 72 if not read and not write: 73 raise RuntimeError("must specify at least one of read=True, write=True") 74 rcheck = [] 75 wcheck = [] 76 if read: 77 rcheck.append(sock) 78 if write: 79 wcheck.append(sock) 80 # When doing a non-blocking connect, most systems signal success by 81 # marking the socket writable. Windows, though, signals success by marked 82 # it as "exceptional". We paper over the difference by checking the write 83 # sockets for both conditions. (The stdlib selectors module does the same 84 # thing.) 
85 fn = partial(select.select, rcheck, wcheck, wcheck) 86 rready, wready, xready = _retry_on_intr(fn, timeout) 87 return bool(rready or wready or xready) 88 89 90 def poll_wait_for_socket(sock, read=False, write=False, timeout=None): 91 if not read and not write: 92 raise RuntimeError("must specify at least one of read=True, write=True") 93 mask = 0 94 if read: 95 mask |= select.POLLIN 96 if write: 97 mask |= select.POLLOUT 98 poll_obj = select.poll() 99 poll_obj.register(sock, mask) 100 101 # For some reason, poll() takes timeout in milliseconds 102 def do_poll(t): 103 if t is not None: 104 t *= 1000 105 return poll_obj.poll(t) 106 107 return bool(_retry_on_intr(do_poll, timeout)) 108 109 110 def null_wait_for_socket(*args, **kwargs): 111 raise NoWayToWaitForSocketError("no select-equivalent available") 112 113 114 def _have_working_poll(): 115 # Apparently some systems have a select.poll that fails as soon as you try 116 # to use it, either due to strange configuration or broken monkeypatching 117 # from libraries like eventlet/greenlet. 118 try: 119 poll_obj = select.poll() 120 poll_obj.poll(0) 121 except (AttributeError, OSError): 122 return False 123 else: 124 return True 125 126 127 def wait_for_socket(*args, **kwargs): 128 # We delay choosing which implementation to use until the first time we're 129 # called. We could do it at import time, but then we might make the wrong 130 # decision if someone goes wild with monkeypatching select.poll after 131 # we're imported. 132 global wait_for_socket 133 if _have_working_poll(): 134 wait_for_socket = poll_wait_for_socket 135 elif hasattr(select, "select"): 136 wait_for_socket = select_wait_for_socket 137 else: # Platform-specific: Appengine. 138 wait_for_socket = null_wait_for_socket 139 return wait_for_socket(*args, **kwargs) 140 141 142 def wait_for_read(sock, timeout=None): 143 """ Waits for reading to be available on a given socket. 144 Returns True if the socket is readable, or False if the timeout expired. 145 """ 146 return wait_for_socket(sock, read=True, timeout=timeout) 147 148 149 def wait_for_write(sock, timeout=None): 150 """ Waits for writing to be available on a given socket. 151 Returns True if the socket is readable, or False if the timeout expired. 152 """ 153 return wait_for_socket(sock, write=True, timeout=timeout) 154 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/urllib3/util/wait.py b/urllib3/util/wait.py --- a/urllib3/util/wait.py +++ b/urllib3/util/wait.py @@ -43,9 +43,6 @@ else: # Old and broken Pythons. def _retry_on_intr(fn, timeout): - if timeout is not None and timeout <= 0: - return fn(timeout) - if timeout is None: deadline = float("inf") else: @@ -117,7 +114,7 @@ # from libraries like eventlet/greenlet. try: poll_obj = select.poll() - poll_obj.poll(0) + _retry_on_intr(poll_obj.poll, 0) except (AttributeError, OSError): return False else:
{"golden_diff": "diff --git a/urllib3/util/wait.py b/urllib3/util/wait.py\n--- a/urllib3/util/wait.py\n+++ b/urllib3/util/wait.py\n@@ -43,9 +43,6 @@\n else:\n # Old and broken Pythons.\n def _retry_on_intr(fn, timeout):\n- if timeout is not None and timeout <= 0:\n- return fn(timeout)\n-\n if timeout is None:\n deadline = float(\"inf\")\n else:\n@@ -117,7 +114,7 @@\n # from libraries like eventlet/greenlet.\n try:\n poll_obj = select.poll()\n- poll_obj.poll(0)\n+ _retry_on_intr(poll_obj.poll, 0)\n except (AttributeError, OSError):\n return False\n else:\n", "issue": "Interrupted system call while profiling with plop\nUsing Python 2.7.12, requests 2.19.0, urllib3 1.23 and using [plop](https://github.com/bdarnell/plop) for profiling, I'm intermittently hitting this stack trace in long-running code:\r\n```\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/requests/sessions.py\", line 525, in get\r\n return self.request('GET', url, **kwargs)\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/requests/sessions.py\", line 512, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/requests/sessions.py\", line 622, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/bmerry/work/sdp/git/katdal/katdal/chunkstore_s3.py\", line 56, in send\r\n return super(_TimeoutHTTPAdapter, self).send(request, stream, timeout, *args, **kwargs)\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/requests/adapters.py\", line 445, in send\r\n timeout=timeout\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/connectionpool.py\", line 588, in urlopen\r\n conn = self._get_conn(timeout=pool_timeout)\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/connectionpool.py\", line 239, in _get_conn\r\n if conn and is_connection_dropped(conn):\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/connection.py\", line 23, in is_connection_dropped\r\n return wait_for_read(sock, timeout=0.0)\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/wait.py\", line 146, in wait_for_read\r\n return wait_for_socket(sock, read=True, timeout=timeout)\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/wait.py\", line 107, in poll_wait_for_socket\r\n return bool(_retry_on_intr(do_poll, timeout))\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/wait.py\", line 47, in _retry_on_intr\r\n return fn(timeout)\r\n File \"/home/bmerry/work/sdp/env/local/lib/python2.7/site-packages/urllib3/util/wait.py\", line 105, in do_poll\r\n return poll_obj.poll(t)\r\nselect.error: (4, 'Interrupted system call')\r\nProfiling timer expired\r\n```\r\n\r\nLooking at the implementation of `_retry_on_intr` for older Pythons, it has this special case:\r\n```python\r\n if timeout is not None and timeout <= 0:\r\n return fn(timeout)\r\n```\r\nwhich in turn seems to apply to the call stack above (see is_connection_dropped, which passes a timeout of 0.0). So apparently there are cases where poll can fail with EINTR even with a zero timeout. FWIW, I'm running Ubuntu 16.04 and Linux 4.4.0-116-generic. I'll try commenting out that fast path and doing some testing overnight to confirm that that is the problem.\r\n\r\nI don't yet had a minimal reproducible example, but I'll work on it (my first attempt of just banging on some URL in a loop hasn't worked). 
I wanted to file this before I forgot.\n", "before_files": [{"content": "import errno\nfrom functools import partial\nimport select\nimport sys\ntry:\n from time import monotonic\nexcept ImportError:\n from time import time as monotonic\n\n__all__ = [\"NoWayToWaitForSocketError\", \"wait_for_read\", \"wait_for_write\"]\n\n\nclass NoWayToWaitForSocketError(Exception):\n pass\n\n\n# How should we wait on sockets?\n#\n# There are two types of APIs you can use for waiting on sockets: the fancy\n# modern stateful APIs like epoll/kqueue, and the older stateless APIs like\n# select/poll. The stateful APIs are more efficient when you have a lots of\n# sockets to keep track of, because you can set them up once and then use them\n# lots of times. But we only ever want to wait on a single socket at a time\n# and don't want to keep track of state, so the stateless APIs are actually\n# more efficient. So we want to use select() or poll().\n#\n# Now, how do we choose between select() and poll()? On traditional Unixes,\n# select() has a strange calling convention that makes it slow, or fail\n# altogether, for high-numbered file descriptors. The point of poll() is to fix\n# that, so on Unixes, we prefer poll().\n#\n# On Windows, there is no poll() (or at least Python doesn't provide a wrapper\n# for it), but that's OK, because on Windows, select() doesn't have this\n# strange calling convention; plain select() works fine.\n#\n# So: on Windows we use select(), and everywhere else we use poll(). We also\n# fall back to select() in case poll() is somehow broken or missing.\n\nif sys.version_info >= (3, 5):\n # Modern Python, that retries syscalls by default\n def _retry_on_intr(fn, timeout):\n return fn(timeout)\nelse:\n # Old and broken Pythons.\n def _retry_on_intr(fn, timeout):\n if timeout is not None and timeout <= 0:\n return fn(timeout)\n\n if timeout is None:\n deadline = float(\"inf\")\n else:\n deadline = monotonic() + timeout\n\n while True:\n try:\n return fn(timeout)\n # OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7\n except (OSError, select.error) as e:\n # 'e.args[0]' incantation works for both OSError and select.error\n if e.args[0] != errno.EINTR:\n raise\n else:\n timeout = deadline - monotonic()\n if timeout < 0:\n timeout = 0\n if timeout == float(\"inf\"):\n timeout = None\n continue\n\n\ndef select_wait_for_socket(sock, read=False, write=False, timeout=None):\n if not read and not write:\n raise RuntimeError(\"must specify at least one of read=True, write=True\")\n rcheck = []\n wcheck = []\n if read:\n rcheck.append(sock)\n if write:\n wcheck.append(sock)\n # When doing a non-blocking connect, most systems signal success by\n # marking the socket writable. Windows, though, signals success by marked\n # it as \"exceptional\". We paper over the difference by checking the write\n # sockets for both conditions. 
(The stdlib selectors module does the same\n # thing.)\n fn = partial(select.select, rcheck, wcheck, wcheck)\n rready, wready, xready = _retry_on_intr(fn, timeout)\n return bool(rready or wready or xready)\n\n\ndef poll_wait_for_socket(sock, read=False, write=False, timeout=None):\n if not read and not write:\n raise RuntimeError(\"must specify at least one of read=True, write=True\")\n mask = 0\n if read:\n mask |= select.POLLIN\n if write:\n mask |= select.POLLOUT\n poll_obj = select.poll()\n poll_obj.register(sock, mask)\n\n # For some reason, poll() takes timeout in milliseconds\n def do_poll(t):\n if t is not None:\n t *= 1000\n return poll_obj.poll(t)\n\n return bool(_retry_on_intr(do_poll, timeout))\n\n\ndef null_wait_for_socket(*args, **kwargs):\n raise NoWayToWaitForSocketError(\"no select-equivalent available\")\n\n\ndef _have_working_poll():\n # Apparently some systems have a select.poll that fails as soon as you try\n # to use it, either due to strange configuration or broken monkeypatching\n # from libraries like eventlet/greenlet.\n try:\n poll_obj = select.poll()\n poll_obj.poll(0)\n except (AttributeError, OSError):\n return False\n else:\n return True\n\n\ndef wait_for_socket(*args, **kwargs):\n # We delay choosing which implementation to use until the first time we're\n # called. We could do it at import time, but then we might make the wrong\n # decision if someone goes wild with monkeypatching select.poll after\n # we're imported.\n global wait_for_socket\n if _have_working_poll():\n wait_for_socket = poll_wait_for_socket\n elif hasattr(select, \"select\"):\n wait_for_socket = select_wait_for_socket\n else: # Platform-specific: Appengine.\n wait_for_socket = null_wait_for_socket\n return wait_for_socket(*args, **kwargs)\n\n\ndef wait_for_read(sock, timeout=None):\n \"\"\" Waits for reading to be available on a given socket.\n Returns True if the socket is readable, or False if the timeout expired.\n \"\"\"\n return wait_for_socket(sock, read=True, timeout=timeout)\n\n\ndef wait_for_write(sock, timeout=None):\n \"\"\" Waits for writing to be available on a given socket.\n Returns True if the socket is readable, or False if the timeout expired.\n \"\"\"\n return wait_for_socket(sock, write=True, timeout=timeout)\n", "path": "urllib3/util/wait.py"}], "after_files": [{"content": "import errno\nfrom functools import partial\nimport select\nimport sys\ntry:\n from time import monotonic\nexcept ImportError:\n from time import time as monotonic\n\n__all__ = [\"NoWayToWaitForSocketError\", \"wait_for_read\", \"wait_for_write\"]\n\n\nclass NoWayToWaitForSocketError(Exception):\n pass\n\n\n# How should we wait on sockets?\n#\n# There are two types of APIs you can use for waiting on sockets: the fancy\n# modern stateful APIs like epoll/kqueue, and the older stateless APIs like\n# select/poll. The stateful APIs are more efficient when you have a lots of\n# sockets to keep track of, because you can set them up once and then use them\n# lots of times. But we only ever want to wait on a single socket at a time\n# and don't want to keep track of state, so the stateless APIs are actually\n# more efficient. So we want to use select() or poll().\n#\n# Now, how do we choose between select() and poll()? On traditional Unixes,\n# select() has a strange calling convention that makes it slow, or fail\n# altogether, for high-numbered file descriptors. 
The point of poll() is to fix\n# that, so on Unixes, we prefer poll().\n#\n# On Windows, there is no poll() (or at least Python doesn't provide a wrapper\n# for it), but that's OK, because on Windows, select() doesn't have this\n# strange calling convention; plain select() works fine.\n#\n# So: on Windows we use select(), and everywhere else we use poll(). We also\n# fall back to select() in case poll() is somehow broken or missing.\n\nif sys.version_info >= (3, 5):\n # Modern Python, that retries syscalls by default\n def _retry_on_intr(fn, timeout):\n return fn(timeout)\nelse:\n # Old and broken Pythons.\n def _retry_on_intr(fn, timeout):\n if timeout is None:\n deadline = float(\"inf\")\n else:\n deadline = monotonic() + timeout\n\n while True:\n try:\n return fn(timeout)\n # OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7\n except (OSError, select.error) as e:\n # 'e.args[0]' incantation works for both OSError and select.error\n if e.args[0] != errno.EINTR:\n raise\n else:\n timeout = deadline - monotonic()\n if timeout < 0:\n timeout = 0\n if timeout == float(\"inf\"):\n timeout = None\n continue\n\n\ndef select_wait_for_socket(sock, read=False, write=False, timeout=None):\n if not read and not write:\n raise RuntimeError(\"must specify at least one of read=True, write=True\")\n rcheck = []\n wcheck = []\n if read:\n rcheck.append(sock)\n if write:\n wcheck.append(sock)\n # When doing a non-blocking connect, most systems signal success by\n # marking the socket writable. Windows, though, signals success by marked\n # it as \"exceptional\". We paper over the difference by checking the write\n # sockets for both conditions. (The stdlib selectors module does the same\n # thing.)\n fn = partial(select.select, rcheck, wcheck, wcheck)\n rready, wready, xready = _retry_on_intr(fn, timeout)\n return bool(rready or wready or xready)\n\n\ndef poll_wait_for_socket(sock, read=False, write=False, timeout=None):\n if not read and not write:\n raise RuntimeError(\"must specify at least one of read=True, write=True\")\n mask = 0\n if read:\n mask |= select.POLLIN\n if write:\n mask |= select.POLLOUT\n poll_obj = select.poll()\n poll_obj.register(sock, mask)\n\n # For some reason, poll() takes timeout in milliseconds\n def do_poll(t):\n if t is not None:\n t *= 1000\n return poll_obj.poll(t)\n\n return bool(_retry_on_intr(do_poll, timeout))\n\n\ndef null_wait_for_socket(*args, **kwargs):\n raise NoWayToWaitForSocketError(\"no select-equivalent available\")\n\n\ndef _have_working_poll():\n # Apparently some systems have a select.poll that fails as soon as you try\n # to use it, either due to strange configuration or broken monkeypatching\n # from libraries like eventlet/greenlet.\n try:\n poll_obj = select.poll()\n _retry_on_intr(poll_obj.poll, 0)\n except (AttributeError, OSError):\n return False\n else:\n return True\n\n\ndef wait_for_socket(*args, **kwargs):\n # We delay choosing which implementation to use until the first time we're\n # called. 
We could do it at import time, but then we might make the wrong\n # decision if someone goes wild with monkeypatching select.poll after\n # we're imported.\n global wait_for_socket\n if _have_working_poll():\n wait_for_socket = poll_wait_for_socket\n elif hasattr(select, \"select\"):\n wait_for_socket = select_wait_for_socket\n else: # Platform-specific: Appengine.\n wait_for_socket = null_wait_for_socket\n return wait_for_socket(*args, **kwargs)\n\n\ndef wait_for_read(sock, timeout=None):\n \"\"\" Waits for reading to be available on a given socket.\n Returns True if the socket is readable, or False if the timeout expired.\n \"\"\"\n return wait_for_socket(sock, read=True, timeout=timeout)\n\n\ndef wait_for_write(sock, timeout=None):\n \"\"\" Waits for writing to be available on a given socket.\n Returns True if the socket is readable, or False if the timeout expired.\n \"\"\"\n return wait_for_socket(sock, write=True, timeout=timeout)\n", "path": "urllib3/util/wait.py"}]}
2,753
181
gh_patches_debug_4386
rasdani/github-patches
git_diff
open-telemetry__opentelemetry-python-2414
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Did you mean to use f-string here? Did you mean to use f-string here? _Originally posted by @lonewolf3739 in https://github.com/open-telemetry/opentelemetry-python/pull/2405#discussion_r792096137_ --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py` Content: ``` 1 # Copyright The OpenTelemetry Authors 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 from atexit import register, unregister 16 from logging import getLogger 17 from threading import Lock 18 from typing import Optional, Sequence 19 20 from opentelemetry._metrics import Meter as APIMeter 21 from opentelemetry._metrics import MeterProvider as APIMeterProvider 22 from opentelemetry._metrics import NoOpMeter 23 from opentelemetry._metrics.instrument import Counter as APICounter 24 from opentelemetry._metrics.instrument import Histogram as APIHistogram 25 from opentelemetry._metrics.instrument import ( 26 ObservableCounter as APIObservableCounter, 27 ) 28 from opentelemetry._metrics.instrument import ( 29 ObservableGauge as APIObservableGauge, 30 ) 31 from opentelemetry._metrics.instrument import ( 32 ObservableUpDownCounter as APIObservableUpDownCounter, 33 ) 34 from opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter 35 from opentelemetry.sdk._metrics.instrument import ( 36 Counter, 37 Histogram, 38 ObservableCounter, 39 ObservableGauge, 40 ObservableUpDownCounter, 41 UpDownCounter, 42 ) 43 from opentelemetry.sdk._metrics.measurement_consumer import ( 44 MeasurementConsumer, 45 SynchronousMeasurementConsumer, 46 ) 47 from opentelemetry.sdk._metrics.metric_reader import MetricReader 48 from opentelemetry.sdk._metrics.sdk_configuration import SdkConfiguration 49 from opentelemetry.sdk.resources import Resource 50 from opentelemetry.sdk.util.instrumentation import InstrumentationInfo 51 52 _logger = getLogger(__name__) 53 54 55 class Meter(APIMeter): 56 def __init__( 57 self, 58 instrumentation_info: InstrumentationInfo, 59 measurement_consumer: MeasurementConsumer, 60 ): 61 super().__init__(instrumentation_info) 62 self._instrumentation_info = instrumentation_info 63 self._measurement_consumer = measurement_consumer 64 65 def create_counter(self, name, unit=None, description=None) -> APICounter: 66 return Counter( 67 name, 68 self._instrumentation_info, 69 self._measurement_consumer, 70 unit, 71 description, 72 ) 73 74 def create_up_down_counter( 75 self, name, unit=None, description=None 76 ) -> APIUpDownCounter: 77 return UpDownCounter( 78 name, 79 self._instrumentation_info, 80 self._measurement_consumer, 81 unit, 82 description, 83 ) 84 85 def create_observable_counter( 86 self, name, callback, unit=None, description=None 87 ) -> APIObservableCounter: 88 89 instrument = ObservableCounter( 90 name, 91 
self._instrumentation_info, 92 self._measurement_consumer, 93 callback, 94 unit, 95 description, 96 ) 97 98 self._measurement_consumer.register_asynchronous_instrument(instrument) 99 100 return instrument 101 102 def create_histogram( 103 self, name, unit=None, description=None 104 ) -> APIHistogram: 105 return Histogram( 106 name, 107 self._instrumentation_info, 108 self._measurement_consumer, 109 unit, 110 description, 111 ) 112 113 def create_observable_gauge( 114 self, name, callback, unit=None, description=None 115 ) -> APIObservableGauge: 116 117 instrument = ObservableGauge( 118 name, 119 self._instrumentation_info, 120 self._measurement_consumer, 121 callback, 122 unit, 123 description, 124 ) 125 126 self._measurement_consumer.register_asynchronous_instrument(instrument) 127 128 return instrument 129 130 def create_observable_up_down_counter( 131 self, name, callback, unit=None, description=None 132 ) -> APIObservableUpDownCounter: 133 134 instrument = ObservableUpDownCounter( 135 name, 136 self._instrumentation_info, 137 self._measurement_consumer, 138 callback, 139 unit, 140 description, 141 ) 142 143 self._measurement_consumer.register_asynchronous_instrument(instrument) 144 145 return instrument 146 147 148 class MeterProvider(APIMeterProvider): 149 """See `opentelemetry._metrics.MeterProvider`.""" 150 151 def __init__( 152 self, 153 metric_readers: Sequence[MetricReader] = (), 154 resource: Resource = Resource.create({}), 155 shutdown_on_exit: bool = True, 156 ): 157 self._lock = Lock() 158 self._meter_lock = Lock() 159 self._atexit_handler = None 160 self._sdk_config = SdkConfiguration( 161 resource=resource, metric_readers=metric_readers 162 ) 163 self._measurement_consumer = SynchronousMeasurementConsumer( 164 sdk_config=self._sdk_config 165 ) 166 167 if shutdown_on_exit: 168 self._atexit_handler = register(self.shutdown) 169 170 self._meters = {} 171 self._metric_readers = metric_readers 172 173 for metric_reader in self._sdk_config.metric_readers: 174 metric_reader._register_measurement_consumer(self) 175 176 self._shutdown = False 177 178 def force_flush(self) -> bool: 179 180 # FIXME implement a timeout 181 182 metric_reader_result = True 183 184 for metric_reader in self._sdk_config.metric_readers: 185 metric_reader_result = ( 186 metric_reader_result and metric_reader.force_flush() 187 ) 188 189 if not metric_reader_result: 190 _logger.warning("Unable to force flush all metric readers") 191 192 return metric_reader_result 193 194 def shutdown(self): 195 # FIXME implement a timeout 196 197 if self._shutdown: 198 _logger.warning("shutdown can only be called once") 199 return False 200 201 overall_result = True 202 203 for metric_reader in self._sdk_config.metric_readers: 204 metric_reader_result = metric_reader.shutdown() 205 206 if not metric_reader_result: 207 _logger.warning( 208 "MetricReader {metric_reader} failed to shutdown" 209 ) 210 211 overall_result = overall_result and metric_reader_result 212 213 self._shutdown = True 214 215 if self._atexit_handler is not None: 216 unregister(self._atexit_handler) 217 self._atexit_handler = None 218 219 return overall_result 220 221 def get_meter( 222 self, 223 name: str, 224 version: Optional[str] = None, 225 schema_url: Optional[str] = None, 226 ) -> Meter: 227 228 if self._shutdown: 229 _logger.warning( 230 "A shutdown `MeterProvider` can not provide a `Meter`" 231 ) 232 return NoOpMeter(name, version=version, schema_url=schema_url) 233 234 info = InstrumentationInfo(name, version, schema_url) 235 with self._meter_lock: 
236 if not self._meters.get(info): 237 self._meters[info] = Meter( 238 info, 239 self._measurement_consumer, 240 ) 241 return self._meters[info] 242 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py --- a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py +++ b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py @@ -205,7 +205,7 @@ if not metric_reader_result: _logger.warning( - "MetricReader {metric_reader} failed to shutdown" + "MetricReader %s failed to shutdown", metric_reader ) overall_result = overall_result and metric_reader_result
{"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n@@ -205,7 +205,7 @@\n \n if not metric_reader_result:\n _logger.warning(\n- \"MetricReader {metric_reader} failed to shutdown\"\n+ \"MetricReader %s failed to shutdown\", metric_reader\n )\n \n overall_result = overall_result and metric_reader_result\n", "issue": "Did you mean to use f-string here?\nDid you mean to use f-string here?\r\n\r\n_Originally posted by @lonewolf3739 in https://github.com/open-telemetry/opentelemetry-python/pull/2405#discussion_r792096137_\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom atexit import register, unregister\nfrom logging import getLogger\nfrom threading import Lock\nfrom typing import Optional, Sequence\n\nfrom opentelemetry._metrics import Meter as APIMeter\nfrom opentelemetry._metrics import MeterProvider as APIMeterProvider\nfrom opentelemetry._metrics import NoOpMeter\nfrom opentelemetry._metrics.instrument import Counter as APICounter\nfrom opentelemetry._metrics.instrument import Histogram as APIHistogram\nfrom opentelemetry._metrics.instrument import (\n ObservableCounter as APIObservableCounter,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableGauge as APIObservableGauge,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableUpDownCounter as APIObservableUpDownCounter,\n)\nfrom opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter\nfrom opentelemetry.sdk._metrics.instrument import (\n Counter,\n Histogram,\n ObservableCounter,\n ObservableGauge,\n ObservableUpDownCounter,\n UpDownCounter,\n)\nfrom opentelemetry.sdk._metrics.measurement_consumer import (\n MeasurementConsumer,\n SynchronousMeasurementConsumer,\n)\nfrom opentelemetry.sdk._metrics.metric_reader import MetricReader\nfrom opentelemetry.sdk._metrics.sdk_configuration import SdkConfiguration\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationInfo\n\n_logger = getLogger(__name__)\n\n\nclass Meter(APIMeter):\n def __init__(\n self,\n instrumentation_info: InstrumentationInfo,\n measurement_consumer: MeasurementConsumer,\n ):\n super().__init__(instrumentation_info)\n self._instrumentation_info = instrumentation_info\n self._measurement_consumer = measurement_consumer\n\n def create_counter(self, name, unit=None, description=None) -> APICounter:\n return Counter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_up_down_counter(\n self, name, unit=None, description=None\n ) -> APIUpDownCounter:\n return UpDownCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_observable_counter(\n self, 
name, callback, unit=None, description=None\n ) -> APIObservableCounter:\n\n instrument = ObservableCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n def create_histogram(\n self, name, unit=None, description=None\n ) -> APIHistogram:\n return Histogram(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_observable_gauge(\n self, name, callback, unit=None, description=None\n ) -> APIObservableGauge:\n\n instrument = ObservableGauge(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n def create_observable_up_down_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableUpDownCounter:\n\n instrument = ObservableUpDownCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n\nclass MeterProvider(APIMeterProvider):\n \"\"\"See `opentelemetry._metrics.MeterProvider`.\"\"\"\n\n def __init__(\n self,\n metric_readers: Sequence[MetricReader] = (),\n resource: Resource = Resource.create({}),\n shutdown_on_exit: bool = True,\n ):\n self._lock = Lock()\n self._meter_lock = Lock()\n self._atexit_handler = None\n self._sdk_config = SdkConfiguration(\n resource=resource, metric_readers=metric_readers\n )\n self._measurement_consumer = SynchronousMeasurementConsumer(\n sdk_config=self._sdk_config\n )\n\n if shutdown_on_exit:\n self._atexit_handler = register(self.shutdown)\n\n self._meters = {}\n self._metric_readers = metric_readers\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader._register_measurement_consumer(self)\n\n self._shutdown = False\n\n def force_flush(self) -> bool:\n\n # FIXME implement a timeout\n\n metric_reader_result = True\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader_result = (\n metric_reader_result and metric_reader.force_flush()\n )\n\n if not metric_reader_result:\n _logger.warning(\"Unable to force flush all metric readers\")\n\n return metric_reader_result\n\n def shutdown(self):\n # FIXME implement a timeout\n\n if self._shutdown:\n _logger.warning(\"shutdown can only be called once\")\n return False\n\n overall_result = True\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader_result = metric_reader.shutdown()\n\n if not metric_reader_result:\n _logger.warning(\n \"MetricReader {metric_reader} failed to shutdown\"\n )\n\n overall_result = overall_result and metric_reader_result\n\n self._shutdown = True\n\n if self._atexit_handler is not None:\n unregister(self._atexit_handler)\n self._atexit_handler = None\n\n return overall_result\n\n def get_meter(\n self,\n name: str,\n version: Optional[str] = None,\n schema_url: Optional[str] = None,\n ) -> Meter:\n\n if self._shutdown:\n _logger.warning(\n \"A shutdown `MeterProvider` can not provide a `Meter`\"\n )\n return NoOpMeter(name, version=version, schema_url=schema_url)\n\n info = InstrumentationInfo(name, version, schema_url)\n with self._meter_lock:\n if not self._meters.get(info):\n self._meters[info] = Meter(\n info,\n self._measurement_consumer,\n )\n return self._meters[info]\n", "path": 
"opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom atexit import register, unregister\nfrom logging import getLogger\nfrom threading import Lock\nfrom typing import Optional, Sequence\n\nfrom opentelemetry._metrics import Meter as APIMeter\nfrom opentelemetry._metrics import MeterProvider as APIMeterProvider\nfrom opentelemetry._metrics import NoOpMeter\nfrom opentelemetry._metrics.instrument import Counter as APICounter\nfrom opentelemetry._metrics.instrument import Histogram as APIHistogram\nfrom opentelemetry._metrics.instrument import (\n ObservableCounter as APIObservableCounter,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableGauge as APIObservableGauge,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableUpDownCounter as APIObservableUpDownCounter,\n)\nfrom opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter\nfrom opentelemetry.sdk._metrics.instrument import (\n Counter,\n Histogram,\n ObservableCounter,\n ObservableGauge,\n ObservableUpDownCounter,\n UpDownCounter,\n)\nfrom opentelemetry.sdk._metrics.measurement_consumer import (\n MeasurementConsumer,\n SynchronousMeasurementConsumer,\n)\nfrom opentelemetry.sdk._metrics.metric_reader import MetricReader\nfrom opentelemetry.sdk._metrics.sdk_configuration import SdkConfiguration\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationInfo\n\n_logger = getLogger(__name__)\n\n\nclass Meter(APIMeter):\n def __init__(\n self,\n instrumentation_info: InstrumentationInfo,\n measurement_consumer: MeasurementConsumer,\n ):\n super().__init__(instrumentation_info)\n self._instrumentation_info = instrumentation_info\n self._measurement_consumer = measurement_consumer\n\n def create_counter(self, name, unit=None, description=None) -> APICounter:\n return Counter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_up_down_counter(\n self, name, unit=None, description=None\n ) -> APIUpDownCounter:\n return UpDownCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_observable_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableCounter:\n\n instrument = ObservableCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n def create_histogram(\n self, name, unit=None, description=None\n ) -> APIHistogram:\n return Histogram(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n unit,\n description,\n )\n\n def create_observable_gauge(\n self, name, callback, unit=None, description=None\n ) -> APIObservableGauge:\n\n instrument = ObservableGauge(\n name,\n 
self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n def create_observable_up_down_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableUpDownCounter:\n\n instrument = ObservableUpDownCounter(\n name,\n self._instrumentation_info,\n self._measurement_consumer,\n callback,\n unit,\n description,\n )\n\n self._measurement_consumer.register_asynchronous_instrument(instrument)\n\n return instrument\n\n\nclass MeterProvider(APIMeterProvider):\n \"\"\"See `opentelemetry._metrics.MeterProvider`.\"\"\"\n\n def __init__(\n self,\n metric_readers: Sequence[MetricReader] = (),\n resource: Resource = Resource.create({}),\n shutdown_on_exit: bool = True,\n ):\n self._lock = Lock()\n self._meter_lock = Lock()\n self._atexit_handler = None\n self._sdk_config = SdkConfiguration(\n resource=resource, metric_readers=metric_readers\n )\n self._measurement_consumer = SynchronousMeasurementConsumer(\n sdk_config=self._sdk_config\n )\n\n if shutdown_on_exit:\n self._atexit_handler = register(self.shutdown)\n\n self._meters = {}\n self._metric_readers = metric_readers\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader._register_measurement_consumer(self)\n\n self._shutdown = False\n\n def force_flush(self) -> bool:\n\n # FIXME implement a timeout\n\n metric_reader_result = True\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader_result = (\n metric_reader_result and metric_reader.force_flush()\n )\n\n if not metric_reader_result:\n _logger.warning(\"Unable to force flush all metric readers\")\n\n return metric_reader_result\n\n def shutdown(self):\n # FIXME implement a timeout\n\n if self._shutdown:\n _logger.warning(\"shutdown can only be called once\")\n return False\n\n overall_result = True\n\n for metric_reader in self._sdk_config.metric_readers:\n metric_reader_result = metric_reader.shutdown()\n\n if not metric_reader_result:\n _logger.warning(\n \"MetricReader %s failed to shutdown\", metric_reader\n )\n\n overall_result = overall_result and metric_reader_result\n\n self._shutdown = True\n\n if self._atexit_handler is not None:\n unregister(self._atexit_handler)\n self._atexit_handler = None\n\n return overall_result\n\n def get_meter(\n self,\n name: str,\n version: Optional[str] = None,\n schema_url: Optional[str] = None,\n ) -> Meter:\n\n if self._shutdown:\n _logger.warning(\n \"A shutdown `MeterProvider` can not provide a `Meter`\"\n )\n return NoOpMeter(name, version=version, schema_url=schema_url)\n\n info = InstrumentationInfo(name, version, schema_url)\n with self._meter_lock:\n if not self._meters.get(info):\n self._meters[info] = Meter(\n info,\n self._measurement_consumer,\n )\n return self._meters[info]\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py"}]}
2,410
140
gh_patches_debug_33490
rasdani/github-patches
git_diff
apache__airflow-22536
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- PostgresToGoogleCloudStorageOperator - Custom schema mapping Version : 1.10.12 I used PostgresToGoogleCloudStorageOperator to export the data and the schema file as well. But I saw a column on Postgres was `TIMESTAMP without time zone` but in BigQuery the auto-create table (via `GoogleCloudStorageToBigQueryOperator`) used the JSON schema file and created the table. When I checked the BQ table the data type was `TIMESTAMP`. For without timezone data, **`DATETIME`** would be the right choice. So can we manually MAP the data types during the schema file export? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `airflow/providers/google/cloud/transfers/postgres_to_gcs.py` Content: ``` 1 # 2 # Licensed to the Apache Software Foundation (ASF) under one 3 # or more contributor license agreements. See the NOTICE file 4 # distributed with this work for additional information 5 # regarding copyright ownership. The ASF licenses this file 6 # to you under the Apache License, Version 2.0 (the 7 # "License"); you may not use this file except in compliance 8 # with the License. You may obtain a copy of the License at 9 # 10 # http://www.apache.org/licenses/LICENSE-2.0 11 # 12 # Unless required by applicable law or agreed to in writing, 13 # software distributed under the License is distributed on an 14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 # KIND, either express or implied. See the License for the 16 # specific language governing permissions and limitations 17 # under the License. 18 """PostgreSQL to GCS operator.""" 19 20 import datetime 21 import json 22 import time 23 import uuid 24 from decimal import Decimal 25 from typing import Dict 26 27 import pendulum 28 29 from airflow.providers.google.cloud.transfers.sql_to_gcs import BaseSQLToGCSOperator 30 from airflow.providers.postgres.hooks.postgres import PostgresHook 31 32 33 class _PostgresServerSideCursorDecorator: 34 """ 35 Inspired by `_PrestoToGCSPrestoCursorAdapter` to keep this consistent. 36 37 Decorator for allowing description to be available for postgres cursor in case server side 38 cursor is used. It doesn't provide other methods except those needed in BaseSQLToGCSOperator, 39 which is more of a safety feature. 40 """ 41 42 def __init__(self, cursor): 43 self.cursor = cursor 44 self.rows = [] 45 self.initialized = False 46 47 def __iter__(self): 48 return self 49 50 def __next__(self): 51 if self.rows: 52 return self.rows.pop() 53 else: 54 self.initialized = True 55 return next(self.cursor) 56 57 @property 58 def description(self): 59 """Fetch first row to initialize cursor description when using server side cursor.""" 60 if not self.initialized: 61 element = self.cursor.fetchone() 62 if element is not None: 63 self.rows.append(element) 64 self.initialized = True 65 return self.cursor.description 66 67 68 class PostgresToGCSOperator(BaseSQLToGCSOperator): 69 """ 70 Copy data from Postgres to Google Cloud Storage in JSON or CSV format. 71 72 :param postgres_conn_id: Reference to a specific Postgres hook. 73 :param use_server_side_cursor: If server-side cursor should be used for querying postgres. 74 For detailed info, check https://www.psycopg.org/docs/usage.html#server-side-cursors 75 :param cursor_itersize: How many records are fetched at a time in case of server-side cursor. 
76 """ 77 78 ui_color = '#a0e08c' 79 80 type_map = { 81 1114: 'TIMESTAMP', 82 1184: 'TIMESTAMP', 83 1082: 'TIMESTAMP', 84 1083: 'TIMESTAMP', 85 1005: 'INTEGER', 86 1007: 'INTEGER', 87 1016: 'INTEGER', 88 20: 'INTEGER', 89 21: 'INTEGER', 90 23: 'INTEGER', 91 16: 'BOOLEAN', 92 700: 'FLOAT', 93 701: 'FLOAT', 94 1700: 'FLOAT', 95 } 96 97 def __init__( 98 self, 99 *, 100 postgres_conn_id='postgres_default', 101 use_server_side_cursor=False, 102 cursor_itersize=2000, 103 **kwargs, 104 ): 105 super().__init__(**kwargs) 106 self.postgres_conn_id = postgres_conn_id 107 self.use_server_side_cursor = use_server_side_cursor 108 self.cursor_itersize = cursor_itersize 109 110 def _unique_name(self): 111 return f"{self.dag_id}__{self.task_id}__{uuid.uuid4()}" if self.use_server_side_cursor else None 112 113 def query(self): 114 """Queries Postgres and returns a cursor to the results.""" 115 hook = PostgresHook(postgres_conn_id=self.postgres_conn_id) 116 conn = hook.get_conn() 117 cursor = conn.cursor(name=self._unique_name()) 118 cursor.execute(self.sql, self.parameters) 119 if self.use_server_side_cursor: 120 cursor.itersize = self.cursor_itersize 121 return _PostgresServerSideCursorDecorator(cursor) 122 return cursor 123 124 def field_to_bigquery(self, field) -> Dict[str, str]: 125 return { 126 'name': field[0], 127 'type': self.type_map.get(field[1], "STRING"), 128 'mode': 'REPEATED' if field[1] in (1009, 1005, 1007, 1016) else 'NULLABLE', 129 } 130 131 def convert_type(self, value, schema_type): 132 """ 133 Takes a value from Postgres, and converts it to a value that's safe for 134 JSON/Google Cloud Storage/BigQuery. Dates are converted to UTC seconds. 135 Decimals are converted to floats. Times are converted to seconds. 136 """ 137 if isinstance(value, (datetime.datetime, datetime.date)): 138 return pendulum.parse(value.isoformat()).float_timestamp 139 if isinstance(value, datetime.time): 140 formatted_time = time.strptime(str(value), "%H:%M:%S") 141 return int( 142 datetime.timedelta( 143 hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec 144 ).total_seconds() 145 ) 146 if isinstance(value, dict): 147 return json.dumps(value) 148 if isinstance(value, Decimal): 149 return float(value) 150 return value 151 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/airflow/providers/google/cloud/transfers/postgres_to_gcs.py b/airflow/providers/google/cloud/transfers/postgres_to_gcs.py --- a/airflow/providers/google/cloud/transfers/postgres_to_gcs.py +++ b/airflow/providers/google/cloud/transfers/postgres_to_gcs.py @@ -78,10 +78,10 @@ ui_color = '#a0e08c' type_map = { - 1114: 'TIMESTAMP', + 1114: 'DATETIME', 1184: 'TIMESTAMP', - 1082: 'TIMESTAMP', - 1083: 'TIMESTAMP', + 1082: 'DATE', + 1083: 'TIME', 1005: 'INTEGER', 1007: 'INTEGER', 1016: 'INTEGER', @@ -131,18 +131,24 @@ def convert_type(self, value, schema_type): """ Takes a value from Postgres, and converts it to a value that's safe for - JSON/Google Cloud Storage/BigQuery. Dates are converted to UTC seconds. - Decimals are converted to floats. Times are converted to seconds. + JSON/Google Cloud Storage/BigQuery. + Timezone aware Datetime are converted to UTC seconds. + Unaware Datetime, Date and Time are converted to ISO formatted strings. + Decimals are converted to floats. """ - if isinstance(value, (datetime.datetime, datetime.date)): - return pendulum.parse(value.isoformat()).float_timestamp + if isinstance(value, datetime.datetime): + iso_format_value = value.isoformat() + if value.tzinfo is None: + return iso_format_value + return pendulum.parse(iso_format_value).float_timestamp + if isinstance(value, datetime.date): + return value.isoformat() if isinstance(value, datetime.time): formatted_time = time.strptime(str(value), "%H:%M:%S") - return int( - datetime.timedelta( - hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec - ).total_seconds() + time_delta = datetime.timedelta( + hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec ) + return str(time_delta) if isinstance(value, dict): return json.dumps(value) if isinstance(value, Decimal):
{"golden_diff": "diff --git a/airflow/providers/google/cloud/transfers/postgres_to_gcs.py b/airflow/providers/google/cloud/transfers/postgres_to_gcs.py\n--- a/airflow/providers/google/cloud/transfers/postgres_to_gcs.py\n+++ b/airflow/providers/google/cloud/transfers/postgres_to_gcs.py\n@@ -78,10 +78,10 @@\n ui_color = '#a0e08c'\n \n type_map = {\n- 1114: 'TIMESTAMP',\n+ 1114: 'DATETIME',\n 1184: 'TIMESTAMP',\n- 1082: 'TIMESTAMP',\n- 1083: 'TIMESTAMP',\n+ 1082: 'DATE',\n+ 1083: 'TIME',\n 1005: 'INTEGER',\n 1007: 'INTEGER',\n 1016: 'INTEGER',\n@@ -131,18 +131,24 @@\n def convert_type(self, value, schema_type):\n \"\"\"\n Takes a value from Postgres, and converts it to a value that's safe for\n- JSON/Google Cloud Storage/BigQuery. Dates are converted to UTC seconds.\n- Decimals are converted to floats. Times are converted to seconds.\n+ JSON/Google Cloud Storage/BigQuery.\n+ Timezone aware Datetime are converted to UTC seconds.\n+ Unaware Datetime, Date and Time are converted to ISO formatted strings.\n+ Decimals are converted to floats.\n \"\"\"\n- if isinstance(value, (datetime.datetime, datetime.date)):\n- return pendulum.parse(value.isoformat()).float_timestamp\n+ if isinstance(value, datetime.datetime):\n+ iso_format_value = value.isoformat()\n+ if value.tzinfo is None:\n+ return iso_format_value\n+ return pendulum.parse(iso_format_value).float_timestamp\n+ if isinstance(value, datetime.date):\n+ return value.isoformat()\n if isinstance(value, datetime.time):\n formatted_time = time.strptime(str(value), \"%H:%M:%S\")\n- return int(\n- datetime.timedelta(\n- hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec\n- ).total_seconds()\n+ time_delta = datetime.timedelta(\n+ hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec\n )\n+ return str(time_delta)\n if isinstance(value, dict):\n return json.dumps(value)\n if isinstance(value, Decimal):\n", "issue": "PostgresToGoogleCloudStorageOperator - Custom schema mapping\nVersion : 1.10.12\r\n\r\nI used PostgresToGoogleCloudStorageOperator to export the data and the schema file as well. But I saw a column on Postgres was `TIMESTAMP without time zone` but in BigQuery the auto-create table (via `GoogleCloudStorageToBigQueryOperator`) used the JSON schema file and created the table. When I checked the BQ table the data type was `TIMESTAMP`.\r\n\r\nFor without timezone data, **`DATETIME`** would be the right choice. So can we manually MAP the data types during the schema file export? \n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"PostgreSQL to GCS operator.\"\"\"\n\nimport datetime\nimport json\nimport time\nimport uuid\nfrom decimal import Decimal\nfrom typing import Dict\n\nimport pendulum\n\nfrom airflow.providers.google.cloud.transfers.sql_to_gcs import BaseSQLToGCSOperator\nfrom airflow.providers.postgres.hooks.postgres import PostgresHook\n\n\nclass _PostgresServerSideCursorDecorator:\n \"\"\"\n Inspired by `_PrestoToGCSPrestoCursorAdapter` to keep this consistent.\n\n Decorator for allowing description to be available for postgres cursor in case server side\n cursor is used. It doesn't provide other methods except those needed in BaseSQLToGCSOperator,\n which is more of a safety feature.\n \"\"\"\n\n def __init__(self, cursor):\n self.cursor = cursor\n self.rows = []\n self.initialized = False\n\n def __iter__(self):\n return self\n\n def __next__(self):\n if self.rows:\n return self.rows.pop()\n else:\n self.initialized = True\n return next(self.cursor)\n\n @property\n def description(self):\n \"\"\"Fetch first row to initialize cursor description when using server side cursor.\"\"\"\n if not self.initialized:\n element = self.cursor.fetchone()\n if element is not None:\n self.rows.append(element)\n self.initialized = True\n return self.cursor.description\n\n\nclass PostgresToGCSOperator(BaseSQLToGCSOperator):\n \"\"\"\n Copy data from Postgres to Google Cloud Storage in JSON or CSV format.\n\n :param postgres_conn_id: Reference to a specific Postgres hook.\n :param use_server_side_cursor: If server-side cursor should be used for querying postgres.\n For detailed info, check https://www.psycopg.org/docs/usage.html#server-side-cursors\n :param cursor_itersize: How many records are fetched at a time in case of server-side cursor.\n \"\"\"\n\n ui_color = '#a0e08c'\n\n type_map = {\n 1114: 'TIMESTAMP',\n 1184: 'TIMESTAMP',\n 1082: 'TIMESTAMP',\n 1083: 'TIMESTAMP',\n 1005: 'INTEGER',\n 1007: 'INTEGER',\n 1016: 'INTEGER',\n 20: 'INTEGER',\n 21: 'INTEGER',\n 23: 'INTEGER',\n 16: 'BOOLEAN',\n 700: 'FLOAT',\n 701: 'FLOAT',\n 1700: 'FLOAT',\n }\n\n def __init__(\n self,\n *,\n postgres_conn_id='postgres_default',\n use_server_side_cursor=False,\n cursor_itersize=2000,\n **kwargs,\n ):\n super().__init__(**kwargs)\n self.postgres_conn_id = postgres_conn_id\n self.use_server_side_cursor = use_server_side_cursor\n self.cursor_itersize = cursor_itersize\n\n def _unique_name(self):\n return f\"{self.dag_id}__{self.task_id}__{uuid.uuid4()}\" if self.use_server_side_cursor else None\n\n def query(self):\n \"\"\"Queries Postgres and returns a cursor to the results.\"\"\"\n hook = PostgresHook(postgres_conn_id=self.postgres_conn_id)\n conn = hook.get_conn()\n cursor = conn.cursor(name=self._unique_name())\n cursor.execute(self.sql, self.parameters)\n if self.use_server_side_cursor:\n cursor.itersize = self.cursor_itersize\n return _PostgresServerSideCursorDecorator(cursor)\n return cursor\n\n def field_to_bigquery(self, field) -> Dict[str, str]:\n return {\n 'name': field[0],\n 'type': self.type_map.get(field[1], \"STRING\"),\n 'mode': 'REPEATED' if field[1] in (1009, 1005, 1007, 1016) else 'NULLABLE',\n }\n\n def convert_type(self, value, schema_type):\n \"\"\"\n Takes a value from Postgres, and converts it to a value that's safe for\n JSON/Google Cloud Storage/BigQuery. Dates are converted to UTC seconds.\n Decimals are converted to floats. 
Times are converted to seconds.\n \"\"\"\n if isinstance(value, (datetime.datetime, datetime.date)):\n return pendulum.parse(value.isoformat()).float_timestamp\n if isinstance(value, datetime.time):\n formatted_time = time.strptime(str(value), \"%H:%M:%S\")\n return int(\n datetime.timedelta(\n hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec\n ).total_seconds()\n )\n if isinstance(value, dict):\n return json.dumps(value)\n if isinstance(value, Decimal):\n return float(value)\n return value\n", "path": "airflow/providers/google/cloud/transfers/postgres_to_gcs.py"}], "after_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"PostgreSQL to GCS operator.\"\"\"\n\nimport datetime\nimport json\nimport time\nimport uuid\nfrom decimal import Decimal\nfrom typing import Dict\n\nimport pendulum\n\nfrom airflow.providers.google.cloud.transfers.sql_to_gcs import BaseSQLToGCSOperator\nfrom airflow.providers.postgres.hooks.postgres import PostgresHook\n\n\nclass _PostgresServerSideCursorDecorator:\n \"\"\"\n Inspired by `_PrestoToGCSPrestoCursorAdapter` to keep this consistent.\n\n Decorator for allowing description to be available for postgres cursor in case server side\n cursor is used. 
It doesn't provide other methods except those needed in BaseSQLToGCSOperator,\n which is more of a safety feature.\n \"\"\"\n\n def __init__(self, cursor):\n self.cursor = cursor\n self.rows = []\n self.initialized = False\n\n def __iter__(self):\n return self\n\n def __next__(self):\n if self.rows:\n return self.rows.pop()\n else:\n self.initialized = True\n return next(self.cursor)\n\n @property\n def description(self):\n \"\"\"Fetch first row to initialize cursor description when using server side cursor.\"\"\"\n if not self.initialized:\n element = self.cursor.fetchone()\n if element is not None:\n self.rows.append(element)\n self.initialized = True\n return self.cursor.description\n\n\nclass PostgresToGCSOperator(BaseSQLToGCSOperator):\n \"\"\"\n Copy data from Postgres to Google Cloud Storage in JSON or CSV format.\n\n :param postgres_conn_id: Reference to a specific Postgres hook.\n :param use_server_side_cursor: If server-side cursor should be used for querying postgres.\n For detailed info, check https://www.psycopg.org/docs/usage.html#server-side-cursors\n :param cursor_itersize: How many records are fetched at a time in case of server-side cursor.\n \"\"\"\n\n ui_color = '#a0e08c'\n\n type_map = {\n 1114: 'DATETIME',\n 1184: 'TIMESTAMP',\n 1082: 'DATE',\n 1083: 'TIME',\n 1005: 'INTEGER',\n 1007: 'INTEGER',\n 1016: 'INTEGER',\n 20: 'INTEGER',\n 21: 'INTEGER',\n 23: 'INTEGER',\n 16: 'BOOLEAN',\n 700: 'FLOAT',\n 701: 'FLOAT',\n 1700: 'FLOAT',\n }\n\n def __init__(\n self,\n *,\n postgres_conn_id='postgres_default',\n use_server_side_cursor=False,\n cursor_itersize=2000,\n **kwargs,\n ):\n super().__init__(**kwargs)\n self.postgres_conn_id = postgres_conn_id\n self.use_server_side_cursor = use_server_side_cursor\n self.cursor_itersize = cursor_itersize\n\n def _unique_name(self):\n return f\"{self.dag_id}__{self.task_id}__{uuid.uuid4()}\" if self.use_server_side_cursor else None\n\n def query(self):\n \"\"\"Queries Postgres and returns a cursor to the results.\"\"\"\n hook = PostgresHook(postgres_conn_id=self.postgres_conn_id)\n conn = hook.get_conn()\n cursor = conn.cursor(name=self._unique_name())\n cursor.execute(self.sql, self.parameters)\n if self.use_server_side_cursor:\n cursor.itersize = self.cursor_itersize\n return _PostgresServerSideCursorDecorator(cursor)\n return cursor\n\n def field_to_bigquery(self, field) -> Dict[str, str]:\n return {\n 'name': field[0],\n 'type': self.type_map.get(field[1], \"STRING\"),\n 'mode': 'REPEATED' if field[1] in (1009, 1005, 1007, 1016) else 'NULLABLE',\n }\n\n def convert_type(self, value, schema_type):\n \"\"\"\n Takes a value from Postgres, and converts it to a value that's safe for\n JSON/Google Cloud Storage/BigQuery.\n Timezone aware Datetime are converted to UTC seconds.\n Unaware Datetime, Date and Time are converted to ISO formatted strings.\n Decimals are converted to floats.\n \"\"\"\n if isinstance(value, datetime.datetime):\n iso_format_value = value.isoformat()\n if value.tzinfo is None:\n return iso_format_value\n return pendulum.parse(iso_format_value).float_timestamp\n if isinstance(value, datetime.date):\n return value.isoformat()\n if isinstance(value, datetime.time):\n formatted_time = time.strptime(str(value), \"%H:%M:%S\")\n time_delta = datetime.timedelta(\n hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec\n )\n return str(time_delta)\n if isinstance(value, dict):\n return json.dumps(value)\n if isinstance(value, Decimal):\n return float(value)\n return value\n", "path": 
"airflow/providers/google/cloud/transfers/postgres_to_gcs.py"}]}
1,974
559
gh_patches_debug_21113
rasdani/github-patches
git_diff
sktime__sktime-6183
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [BUG] Unusual if statement in _lower_bounding_numba.py **Describe the bug** <!-- A clear and concise description of what the bug is. --> If statement with same code in both branches **To Reproduce** <!-- Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com --> See def create_shape_on_matrix, specifically lines 63-68 **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> In the else statement, I would expect ceil and floor to be exchanged **Versions** 0.27.0 <!-- Please run the following code snippet and paste the output here: from sktime import show_versions; show_versions() --> System: python: 3.12.0 (v3.12.0:0fb18b02c8, Oct 2 2023, 09:45:56) [Clang 13.0.0 (clang-1300.0.29.30)] executable: /path machine: macOS-14.4-arm64-arm-64bit Python dependencies: pip: 24.0 sktime: 0.27.0 sklearn: 1.4.1.post1 skbase: 0.7.5 numpy: 1.26.4 scipy: 1.12.0 pandas: 2.1.4 matplotlib: 3.8.3 joblib: 1.3.2 numba: 0.59.0 statsmodels: None pmdarima: None statsforecast: None tsfresh: None tslearn: None torch: None tensorflow: None tensorflow_probability: None Backend MacOSX is interactive backend. Turning interactive mode on. </details> <!-- Thanks for contributing! --> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `sktime/distances/_lower_bounding_numba.py` Content: ``` 1 """Isolated numba imports for lower_bounding.""" 2 3 __author__ = ["chrisholder", "TonyBagnall"] 4 5 import math 6 from typing import Union 7 8 import numpy as np 9 10 from sktime.utils.numba.njit import njit 11 12 13 @njit(cache=True) 14 def create_shape_on_matrix( 15 bounding_matrix: np.ndarray, 16 y_upper_line: np.ndarray, 17 y_lower_line: Union[np.ndarray, None] = None, 18 x_step_size: int = 1, 19 start_val: int = 0, 20 ) -> np.ndarray: 21 """Create a shape from a given upper line and lower line on a matrix. 22 23 Parameters 24 ---------- 25 bounding_matrix: np.ndarray (2d array) 26 Matrix of size mxn where m is len(x) and n is len(y). Values that 27 are inside the shape will be replaced with finite values (0.). 28 y_upper_line: np.ndarray (1d array) 29 Y points of the upper line. 30 y_lower_line: np.ndarray (1d array), defaults = None 31 Y points of the lower line. If no lower line specified, then y_upper_line 32 used as lower line. 33 x_step_size: int, defaults = 1 34 Step size each iteration will increase by 35 start_val: int, defaults = 0 36 Starting coordinate for x 37 38 Returns 39 ------- 40 np.ndarray (2d array) 41 Matrix with values of the shape set to 0. (finite), of the same shape 42 as the passed bounding_matrix. 
43 """ 44 y_size = bounding_matrix.shape[0] 45 46 if y_lower_line is None: 47 y_lower_line = y_upper_line 48 49 upper_line_y_values = y_upper_line.shape[0] 50 lower_line_y_values = y_lower_line.shape[0] 51 52 if upper_line_y_values != lower_line_y_values: 53 raise ValueError( 54 "The number of upper line values must equal the number of lower line " 55 "values" 56 ) 57 58 half_way = math.floor(upper_line_y_values / 2) 59 60 for i in range(start_val, upper_line_y_values): 61 x = i * x_step_size 62 63 if i > half_way: 64 upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i]))) 65 lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i]))) 66 else: 67 upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i]))) 68 lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i]))) 69 70 if upper_line_y_values == lower_line_y_values: 71 if upper_y == lower_y: 72 bounding_matrix[upper_y, x] = 0.0 73 else: 74 bounding_matrix[upper_y : (lower_y + 1), x] = 0.0 75 else: 76 bounding_matrix[upper_y, x] = 0.0 77 bounding_matrix[lower_y, x] = 0.0 78 79 return bounding_matrix 80 81 82 @njit(cache=True) 83 def _check_line_steps(line: np.ndarray) -> np.ndarray: 84 """Check the next 'step' is along the line. 85 86 Parameters 87 ---------- 88 line: np.ndarray 89 line to check steps. 90 91 Returns 92 ------- 93 np.ndarray 94 Line with updated indexes. 95 """ 96 prev = line[0] 97 for i in range(1, len(line)): 98 curr_val = line[i] 99 if curr_val > (prev + 1): 100 line[i] = prev + 1 101 elif curr_val < (prev - 1): 102 line[i] = prev - 1 103 prev = curr_val 104 return line 105 106 107 @njit(cache=True) 108 def no_bounding(x: np.ndarray, y: np.ndarray) -> np.ndarray: 109 """Create a matrix with no bounding. 110 111 Parameters 112 ---------- 113 x: np.ndarray (2d array) 114 First time series. 115 y: np.ndarray (2d array) 116 Second time series. 117 118 Returns 119 ------- 120 np.ndarray (2d of size mxn where m is len(x) and n is len(y)). 121 Bounding matrix where the values inside the bound are finite values (0s) and 122 outside the bounds are infinity (non finite). 123 """ 124 return np.zeros((x.shape[1], y.shape[1])) 125 126 127 @njit(cache=True) 128 def sakoe_chiba(x: np.ndarray, y: np.ndarray, window: float) -> np.ndarray: 129 """Create a sakoe chiba lower bounding window on a matrix. 130 131 Parameters 132 ---------- 133 x: np.ndarray (2d array) 134 First time series. 135 y: np.ndarray (2d array) 136 Second time series. 137 window: float 138 Float that is the size of the window. Must be between 0 and 1. 139 140 Returns 141 ------- 142 np.ndarray (2d of size mxn where m is len(x) and n is len(y)). 143 Sakoe Chiba bounding matrix where the values inside the bound are finite 144 values (0s) and outside the bounds are infinity (non finite). 145 146 Raises 147 ------ 148 ValueError 149 If the sakoe_chiba_window_radius is not an integer. 
150 """ 151 if window < 0 or window > 1: 152 raise ValueError("Window must between 0 and 1") 153 154 x_size = x.shape[1] 155 y_size = y.shape[1] 156 bounding_matrix = np.full((x_size, y_size), np.inf) 157 sakoe_chiba_window_radius = ((x_size / 100) * window) * 100 158 159 x_upper_line_values = np.interp( 160 list(range(x_size)), 161 [0, x_size - 1], 162 [0 - sakoe_chiba_window_radius, y_size - sakoe_chiba_window_radius - 1], 163 ) 164 x_lower_line_values = np.interp( 165 list(range(x_size)), 166 [0, x_size - 1], 167 [0 + sakoe_chiba_window_radius, y_size + sakoe_chiba_window_radius - 1], 168 ) 169 170 bounding_matrix = create_shape_on_matrix( 171 bounding_matrix, x_upper_line_values, x_lower_line_values 172 ) 173 174 return bounding_matrix 175 176 177 @njit(cache=True) 178 def itakura_parallelogram( 179 x: np.ndarray, y: np.ndarray, itakura_max_slope: float 180 ) -> np.ndarray: 181 """Create a itakura parallelogram bounding matrix. 182 183 Parameters 184 ---------- 185 x: np.ndarray (2d array) 186 First time series. 187 y: np.ndarray (2d array) 188 Second time series. 189 itakura_max_slope: float or int 190 Gradient of the slope must be between 0 and 1. 191 192 Returns 193 ------- 194 np.ndarray (2d of size mxn where m is len(x) and n is len(y)). 195 Sakoe Chiba bounding matrix where the values inside the bound are finite 196 values (0s) and outside the bounds are infinity (non finite). 197 198 Raises 199 ------ 200 ValueError 201 If the itakura_max_slope is not a float or int. 202 """ 203 if itakura_max_slope < 0 or itakura_max_slope > 1: 204 raise ValueError("Window must between 0 and 1") 205 x_size = x.shape[1] 206 y_size = y.shape[1] 207 bounding_matrix = np.full((y_size, x_size), np.inf) 208 itakura_max_slope = math.floor(((x_size / 100) * itakura_max_slope) * 100) / 2 209 210 middle_x_upper = math.ceil(x_size / 2) 211 middle_x_lower = math.floor(x_size / 2) 212 if middle_x_lower == middle_x_upper: 213 middle_x_lower = middle_x_lower - 1 214 middle_y = math.floor(y_size / 2) 215 216 difference_from_middle_y = abs((middle_x_lower * itakura_max_slope) - middle_y) 217 middle_y_lower = middle_y + difference_from_middle_y 218 middle_y_upper = middle_y - difference_from_middle_y 219 220 x_upper_line_values = np.interp( 221 list(range(x_size)), 222 [0, middle_x_lower, middle_x_upper, x_size - 1], 223 [0, middle_y_upper, middle_y_upper, y_size - 1], 224 ) 225 x_lower_line_values = np.interp( 226 list(range(x_size)), 227 [0, middle_x_lower, middle_x_upper, x_size - 1], 228 [0, middle_y_lower, middle_y_lower, y_size - 1], 229 ) 230 231 if np.array_equal(x_upper_line_values, x_lower_line_values): 232 x_upper_line_values = _check_line_steps(x_upper_line_values) 233 234 bounding_matrix = create_shape_on_matrix( 235 bounding_matrix, x_upper_line_values, x_lower_line_values 236 ) 237 238 return bounding_matrix 239 240 241 @njit(cache=True) 242 def numba_create_bounding_matrix( 243 x: np.ndarray, 244 y: np.ndarray, 245 window: float = -1.0, 246 itakura_max_slope: float = -1.0, 247 ) -> np.ndarray: 248 """Numba compiled way of creating bounding matrix. 249 250 Parameters 251 ---------- 252 x: np.ndarray (2d array) 253 First time series. 254 y: np.ndarray (2d array) 255 Second time series. 256 window: float, defaults = -1. 257 Float that is the % radius of the sakoe chiba window (if using Sakoe-Chiba 258 lower bounding). Must be between 0 and 1. 259 itakura_max_slope: float, defaults = -1. 260 Gradient of the slope for itakura parallelogram (if using Itakura 261 Parallelogram lower bounding). 
Must be between 0 and 1. 262 """ 263 if window != -1.0: 264 bounding_matrix = sakoe_chiba(x, y, window) 265 elif itakura_max_slope != -1.0: 266 bounding_matrix = itakura_parallelogram(x, y, itakura_max_slope) 267 else: 268 bounding_matrix = no_bounding(x, y) 269 270 return bounding_matrix 271 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/sktime/distances/_lower_bounding_numba.py b/sktime/distances/_lower_bounding_numba.py --- a/sktime/distances/_lower_bounding_numba.py +++ b/sktime/distances/_lower_bounding_numba.py @@ -55,26 +55,16 @@ "values" ) - half_way = math.floor(upper_line_y_values / 2) - for i in range(start_val, upper_line_y_values): x = i * x_step_size - if i > half_way: - upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i]))) - lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i]))) - else: - upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i]))) - lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i]))) - - if upper_line_y_values == lower_line_y_values: - if upper_y == lower_y: - bounding_matrix[upper_y, x] = 0.0 - else: - bounding_matrix[upper_y : (lower_y + 1), x] = 0.0 - else: + upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i]))) + lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i]))) + + if upper_y == lower_y: bounding_matrix[upper_y, x] = 0.0 - bounding_matrix[lower_y, x] = 0.0 + else: + bounding_matrix[upper_y : (lower_y + 1), x] = 0.0 return bounding_matrix
{"golden_diff": "diff --git a/sktime/distances/_lower_bounding_numba.py b/sktime/distances/_lower_bounding_numba.py\n--- a/sktime/distances/_lower_bounding_numba.py\n+++ b/sktime/distances/_lower_bounding_numba.py\n@@ -55,26 +55,16 @@\n \"values\"\n )\n \n- half_way = math.floor(upper_line_y_values / 2)\n-\n for i in range(start_val, upper_line_y_values):\n x = i * x_step_size\n \n- if i > half_way:\n- upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i])))\n- lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i])))\n- else:\n- upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i])))\n- lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i])))\n-\n- if upper_line_y_values == lower_line_y_values:\n- if upper_y == lower_y:\n- bounding_matrix[upper_y, x] = 0.0\n- else:\n- bounding_matrix[upper_y : (lower_y + 1), x] = 0.0\n- else:\n+ upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i])))\n+ lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i])))\n+\n+ if upper_y == lower_y:\n bounding_matrix[upper_y, x] = 0.0\n- bounding_matrix[lower_y, x] = 0.0\n+ else:\n+ bounding_matrix[upper_y : (lower_y + 1), x] = 0.0\n \n return bounding_matrix\n", "issue": "[BUG] Unusual if statement in _lower_bounding_numba.py\n**Describe the bug**\r\n<!--\r\nA clear and concise description of what the bug is.\r\n--> If statement with same code in both branches\r\n\r\n**To Reproduce**\r\n<!--\r\nAdd a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve\r\n\r\nIf the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com\r\n--> See def create_shape_on_matrix, specifically lines 63-68 \r\n\r\n\r\n**Expected behavior**\r\n<!--\r\nA clear and concise description of what you expected to happen.\r\n--> In the else statement, I would expect ceil and floor to be exchanged\r\n\r\n\r\n**Versions**\r\n0.27.0\r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n\r\nfrom sktime import show_versions; show_versions()\r\n-->\r\nSystem:\r\n python: 3.12.0 (v3.12.0:0fb18b02c8, Oct 2 2023, 09:45:56) [Clang 13.0.0 (clang-1300.0.29.30)]\r\nexecutable: /path\r\n machine: macOS-14.4-arm64-arm-64bit\r\n\r\nPython dependencies:\r\n pip: 24.0\r\n sktime: 0.27.0\r\n sklearn: 1.4.1.post1\r\n skbase: 0.7.5\r\n numpy: 1.26.4\r\n scipy: 1.12.0\r\n pandas: 2.1.4\r\n matplotlib: 3.8.3\r\n joblib: 1.3.2\r\n numba: 0.59.0\r\n statsmodels: None\r\n pmdarima: None\r\nstatsforecast: None\r\n tsfresh: None\r\n tslearn: None\r\n torch: None\r\n tensorflow: None\r\ntensorflow_probability: None\r\nBackend MacOSX is interactive backend. Turning interactive mode on.\r\n\r\n</details>\r\n\r\n<!-- Thanks for contributing! -->\r\n\n", "before_files": [{"content": "\"\"\"Isolated numba imports for lower_bounding.\"\"\"\n\n__author__ = [\"chrisholder\", \"TonyBagnall\"]\n\nimport math\nfrom typing import Union\n\nimport numpy as np\n\nfrom sktime.utils.numba.njit import njit\n\n\n@njit(cache=True)\ndef create_shape_on_matrix(\n bounding_matrix: np.ndarray,\n y_upper_line: np.ndarray,\n y_lower_line: Union[np.ndarray, None] = None,\n x_step_size: int = 1,\n start_val: int = 0,\n) -> np.ndarray:\n \"\"\"Create a shape from a given upper line and lower line on a matrix.\n\n Parameters\n ----------\n bounding_matrix: np.ndarray (2d array)\n Matrix of size mxn where m is len(x) and n is len(y). 
Values that\n are inside the shape will be replaced with finite values (0.).\n y_upper_line: np.ndarray (1d array)\n Y points of the upper line.\n y_lower_line: np.ndarray (1d array), defaults = None\n Y points of the lower line. If no lower line specified, then y_upper_line\n used as lower line.\n x_step_size: int, defaults = 1\n Step size each iteration will increase by\n start_val: int, defaults = 0\n Starting coordinate for x\n\n Returns\n -------\n np.ndarray (2d array)\n Matrix with values of the shape set to 0. (finite), of the same shape\n as the passed bounding_matrix.\n \"\"\"\n y_size = bounding_matrix.shape[0]\n\n if y_lower_line is None:\n y_lower_line = y_upper_line\n\n upper_line_y_values = y_upper_line.shape[0]\n lower_line_y_values = y_lower_line.shape[0]\n\n if upper_line_y_values != lower_line_y_values:\n raise ValueError(\n \"The number of upper line values must equal the number of lower line \"\n \"values\"\n )\n\n half_way = math.floor(upper_line_y_values / 2)\n\n for i in range(start_val, upper_line_y_values):\n x = i * x_step_size\n\n if i > half_way:\n upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i])))\n lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i])))\n else:\n upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i])))\n lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i])))\n\n if upper_line_y_values == lower_line_y_values:\n if upper_y == lower_y:\n bounding_matrix[upper_y, x] = 0.0\n else:\n bounding_matrix[upper_y : (lower_y + 1), x] = 0.0\n else:\n bounding_matrix[upper_y, x] = 0.0\n bounding_matrix[lower_y, x] = 0.0\n\n return bounding_matrix\n\n\n@njit(cache=True)\ndef _check_line_steps(line: np.ndarray) -> np.ndarray:\n \"\"\"Check the next 'step' is along the line.\n\n Parameters\n ----------\n line: np.ndarray\n line to check steps.\n\n Returns\n -------\n np.ndarray\n Line with updated indexes.\n \"\"\"\n prev = line[0]\n for i in range(1, len(line)):\n curr_val = line[i]\n if curr_val > (prev + 1):\n line[i] = prev + 1\n elif curr_val < (prev - 1):\n line[i] = prev - 1\n prev = curr_val\n return line\n\n\n@njit(cache=True)\ndef no_bounding(x: np.ndarray, y: np.ndarray) -> np.ndarray:\n \"\"\"Create a matrix with no bounding.\n\n Parameters\n ----------\n x: np.ndarray (2d array)\n First time series.\n y: np.ndarray (2d array)\n Second time series.\n\n Returns\n -------\n np.ndarray (2d of size mxn where m is len(x) and n is len(y)).\n Bounding matrix where the values inside the bound are finite values (0s) and\n outside the bounds are infinity (non finite).\n \"\"\"\n return np.zeros((x.shape[1], y.shape[1]))\n\n\n@njit(cache=True)\ndef sakoe_chiba(x: np.ndarray, y: np.ndarray, window: float) -> np.ndarray:\n \"\"\"Create a sakoe chiba lower bounding window on a matrix.\n\n Parameters\n ----------\n x: np.ndarray (2d array)\n First time series.\n y: np.ndarray (2d array)\n Second time series.\n window: float\n Float that is the size of the window. 
Must be between 0 and 1.\n\n Returns\n -------\n np.ndarray (2d of size mxn where m is len(x) and n is len(y)).\n Sakoe Chiba bounding matrix where the values inside the bound are finite\n values (0s) and outside the bounds are infinity (non finite).\n\n Raises\n ------\n ValueError\n If the sakoe_chiba_window_radius is not an integer.\n \"\"\"\n if window < 0 or window > 1:\n raise ValueError(\"Window must between 0 and 1\")\n\n x_size = x.shape[1]\n y_size = y.shape[1]\n bounding_matrix = np.full((x_size, y_size), np.inf)\n sakoe_chiba_window_radius = ((x_size / 100) * window) * 100\n\n x_upper_line_values = np.interp(\n list(range(x_size)),\n [0, x_size - 1],\n [0 - sakoe_chiba_window_radius, y_size - sakoe_chiba_window_radius - 1],\n )\n x_lower_line_values = np.interp(\n list(range(x_size)),\n [0, x_size - 1],\n [0 + sakoe_chiba_window_radius, y_size + sakoe_chiba_window_radius - 1],\n )\n\n bounding_matrix = create_shape_on_matrix(\n bounding_matrix, x_upper_line_values, x_lower_line_values\n )\n\n return bounding_matrix\n\n\n@njit(cache=True)\ndef itakura_parallelogram(\n x: np.ndarray, y: np.ndarray, itakura_max_slope: float\n) -> np.ndarray:\n \"\"\"Create a itakura parallelogram bounding matrix.\n\n Parameters\n ----------\n x: np.ndarray (2d array)\n First time series.\n y: np.ndarray (2d array)\n Second time series.\n itakura_max_slope: float or int\n Gradient of the slope must be between 0 and 1.\n\n Returns\n -------\n np.ndarray (2d of size mxn where m is len(x) and n is len(y)).\n Sakoe Chiba bounding matrix where the values inside the bound are finite\n values (0s) and outside the bounds are infinity (non finite).\n\n Raises\n ------\n ValueError\n If the itakura_max_slope is not a float or int.\n \"\"\"\n if itakura_max_slope < 0 or itakura_max_slope > 1:\n raise ValueError(\"Window must between 0 and 1\")\n x_size = x.shape[1]\n y_size = y.shape[1]\n bounding_matrix = np.full((y_size, x_size), np.inf)\n itakura_max_slope = math.floor(((x_size / 100) * itakura_max_slope) * 100) / 2\n\n middle_x_upper = math.ceil(x_size / 2)\n middle_x_lower = math.floor(x_size / 2)\n if middle_x_lower == middle_x_upper:\n middle_x_lower = middle_x_lower - 1\n middle_y = math.floor(y_size / 2)\n\n difference_from_middle_y = abs((middle_x_lower * itakura_max_slope) - middle_y)\n middle_y_lower = middle_y + difference_from_middle_y\n middle_y_upper = middle_y - difference_from_middle_y\n\n x_upper_line_values = np.interp(\n list(range(x_size)),\n [0, middle_x_lower, middle_x_upper, x_size - 1],\n [0, middle_y_upper, middle_y_upper, y_size - 1],\n )\n x_lower_line_values = np.interp(\n list(range(x_size)),\n [0, middle_x_lower, middle_x_upper, x_size - 1],\n [0, middle_y_lower, middle_y_lower, y_size - 1],\n )\n\n if np.array_equal(x_upper_line_values, x_lower_line_values):\n x_upper_line_values = _check_line_steps(x_upper_line_values)\n\n bounding_matrix = create_shape_on_matrix(\n bounding_matrix, x_upper_line_values, x_lower_line_values\n )\n\n return bounding_matrix\n\n\n@njit(cache=True)\ndef numba_create_bounding_matrix(\n x: np.ndarray,\n y: np.ndarray,\n window: float = -1.0,\n itakura_max_slope: float = -1.0,\n) -> np.ndarray:\n \"\"\"Numba compiled way of creating bounding matrix.\n\n Parameters\n ----------\n x: np.ndarray (2d array)\n First time series.\n y: np.ndarray (2d array)\n Second time series.\n window: float, defaults = -1.\n Float that is the % radius of the sakoe chiba window (if using Sakoe-Chiba\n lower bounding). 
Must be between 0 and 1.\n itakura_max_slope: float, defaults = -1.\n Gradient of the slope for itakura parallelogram (if using Itakura\n Parallelogram lower bounding). Must be between 0 and 1.\n \"\"\"\n if window != -1.0:\n bounding_matrix = sakoe_chiba(x, y, window)\n elif itakura_max_slope != -1.0:\n bounding_matrix = itakura_parallelogram(x, y, itakura_max_slope)\n else:\n bounding_matrix = no_bounding(x, y)\n\n return bounding_matrix\n", "path": "sktime/distances/_lower_bounding_numba.py"}], "after_files": [{"content": "\"\"\"Isolated numba imports for lower_bounding.\"\"\"\n\n__author__ = [\"chrisholder\", \"TonyBagnall\"]\n\nimport math\nfrom typing import Union\n\nimport numpy as np\n\nfrom sktime.utils.numba.njit import njit\n\n\n@njit(cache=True)\ndef create_shape_on_matrix(\n bounding_matrix: np.ndarray,\n y_upper_line: np.ndarray,\n y_lower_line: Union[np.ndarray, None] = None,\n x_step_size: int = 1,\n start_val: int = 0,\n) -> np.ndarray:\n \"\"\"Create a shape from a given upper line and lower line on a matrix.\n\n Parameters\n ----------\n bounding_matrix: np.ndarray (2d array)\n Matrix of size mxn where m is len(x) and n is len(y). Values that\n are inside the shape will be replaced with finite values (0.).\n y_upper_line: np.ndarray (1d array)\n Y points of the upper line.\n y_lower_line: np.ndarray (1d array), defaults = None\n Y points of the lower line. If no lower line specified, then y_upper_line\n used as lower line.\n x_step_size: int, defaults = 1\n Step size each iteration will increase by\n start_val: int, defaults = 0\n Starting coordinate for x\n\n Returns\n -------\n np.ndarray (2d array)\n Matrix with values of the shape set to 0. (finite), of the same shape\n as the passed bounding_matrix.\n \"\"\"\n y_size = bounding_matrix.shape[0]\n\n if y_lower_line is None:\n y_lower_line = y_upper_line\n\n upper_line_y_values = y_upper_line.shape[0]\n lower_line_y_values = y_lower_line.shape[0]\n\n if upper_line_y_values != lower_line_y_values:\n raise ValueError(\n \"The number of upper line values must equal the number of lower line \"\n \"values\"\n )\n\n for i in range(start_val, upper_line_y_values):\n x = i * x_step_size\n\n upper_y = max(0, min(y_size - 1, math.ceil(y_upper_line[i])))\n lower_y = max(0, min(y_size - 1, math.floor(y_lower_line[i])))\n\n if upper_y == lower_y:\n bounding_matrix[upper_y, x] = 0.0\n else:\n bounding_matrix[upper_y : (lower_y + 1), x] = 0.0\n\n return bounding_matrix\n\n\n@njit(cache=True)\ndef _check_line_steps(line: np.ndarray) -> np.ndarray:\n \"\"\"Check the next 'step' is along the line.\n\n Parameters\n ----------\n line: np.ndarray\n line to check steps.\n\n Returns\n -------\n np.ndarray\n Line with updated indexes.\n \"\"\"\n prev = line[0]\n for i in range(1, len(line)):\n curr_val = line[i]\n if curr_val > (prev + 1):\n line[i] = prev + 1\n elif curr_val < (prev - 1):\n line[i] = prev - 1\n prev = curr_val\n return line\n\n\n@njit(cache=True)\ndef no_bounding(x: np.ndarray, y: np.ndarray) -> np.ndarray:\n \"\"\"Create a matrix with no bounding.\n\n Parameters\n ----------\n x: np.ndarray (2d array)\n First time series.\n y: np.ndarray (2d array)\n Second time series.\n\n Returns\n -------\n np.ndarray (2d of size mxn where m is len(x) and n is len(y)).\n Bounding matrix where the values inside the bound are finite values (0s) and\n outside the bounds are infinity (non finite).\n \"\"\"\n return np.zeros((x.shape[1], y.shape[1]))\n\n\n@njit(cache=True)\ndef sakoe_chiba(x: np.ndarray, y: np.ndarray, window: float) -> 
np.ndarray:\n \"\"\"Create a sakoe chiba lower bounding window on a matrix.\n\n Parameters\n ----------\n x: np.ndarray (2d array)\n First time series.\n y: np.ndarray (2d array)\n Second time series.\n window: float\n Float that is the size of the window. Must be between 0 and 1.\n\n Returns\n -------\n np.ndarray (2d of size mxn where m is len(x) and n is len(y)).\n Sakoe Chiba bounding matrix where the values inside the bound are finite\n values (0s) and outside the bounds are infinity (non finite).\n\n Raises\n ------\n ValueError\n If the sakoe_chiba_window_radius is not an integer.\n \"\"\"\n if window < 0 or window > 1:\n raise ValueError(\"Window must between 0 and 1\")\n\n x_size = x.shape[1]\n y_size = y.shape[1]\n bounding_matrix = np.full((x_size, y_size), np.inf)\n sakoe_chiba_window_radius = ((x_size / 100) * window) * 100\n\n x_upper_line_values = np.interp(\n list(range(x_size)),\n [0, x_size - 1],\n [0 - sakoe_chiba_window_radius, y_size - sakoe_chiba_window_radius - 1],\n )\n x_lower_line_values = np.interp(\n list(range(x_size)),\n [0, x_size - 1],\n [0 + sakoe_chiba_window_radius, y_size + sakoe_chiba_window_radius - 1],\n )\n\n bounding_matrix = create_shape_on_matrix(\n bounding_matrix, x_upper_line_values, x_lower_line_values\n )\n\n return bounding_matrix\n\n\n@njit(cache=True)\ndef itakura_parallelogram(\n x: np.ndarray, y: np.ndarray, itakura_max_slope: float\n) -> np.ndarray:\n \"\"\"Create a itakura parallelogram bounding matrix.\n\n Parameters\n ----------\n x: np.ndarray (2d array)\n First time series.\n y: np.ndarray (2d array)\n Second time series.\n itakura_max_slope: float or int\n Gradient of the slope must be between 0 and 1.\n\n Returns\n -------\n np.ndarray (2d of size mxn where m is len(x) and n is len(y)).\n Sakoe Chiba bounding matrix where the values inside the bound are finite\n values (0s) and outside the bounds are infinity (non finite).\n\n Raises\n ------\n ValueError\n If the itakura_max_slope is not a float or int.\n \"\"\"\n if itakura_max_slope < 0 or itakura_max_slope > 1:\n raise ValueError(\"Window must between 0 and 1\")\n x_size = x.shape[1]\n y_size = y.shape[1]\n bounding_matrix = np.full((y_size, x_size), np.inf)\n itakura_max_slope = math.floor(((x_size / 100) * itakura_max_slope) * 100) / 2\n\n middle_x_upper = math.ceil(x_size / 2)\n middle_x_lower = math.floor(x_size / 2)\n if middle_x_lower == middle_x_upper:\n middle_x_lower = middle_x_lower - 1\n middle_y = math.floor(y_size / 2)\n\n difference_from_middle_y = abs((middle_x_lower * itakura_max_slope) - middle_y)\n middle_y_lower = middle_y + difference_from_middle_y\n middle_y_upper = middle_y - difference_from_middle_y\n\n x_upper_line_values = np.interp(\n list(range(x_size)),\n [0, middle_x_lower, middle_x_upper, x_size - 1],\n [0, middle_y_upper, middle_y_upper, y_size - 1],\n )\n x_lower_line_values = np.interp(\n list(range(x_size)),\n [0, middle_x_lower, middle_x_upper, x_size - 1],\n [0, middle_y_lower, middle_y_lower, y_size - 1],\n )\n\n if np.array_equal(x_upper_line_values, x_lower_line_values):\n x_upper_line_values = _check_line_steps(x_upper_line_values)\n\n bounding_matrix = create_shape_on_matrix(\n bounding_matrix, x_upper_line_values, x_lower_line_values\n )\n\n return bounding_matrix\n\n\n@njit(cache=True)\ndef numba_create_bounding_matrix(\n x: np.ndarray,\n y: np.ndarray,\n window: float = -1.0,\n itakura_max_slope: float = -1.0,\n) -> np.ndarray:\n \"\"\"Numba compiled way of creating bounding matrix.\n\n Parameters\n ----------\n x: np.ndarray (2d 
array)\n First time series.\n y: np.ndarray (2d array)\n Second time series.\n window: float, defaults = -1.\n Float that is the % radius of the sakoe chiba window (if using Sakoe-Chiba\n lower bounding). Must be between 0 and 1.\n itakura_max_slope: float, defaults = -1.\n Gradient of the slope for itakura parallelogram (if using Itakura\n Parallelogram lower bounding). Must be between 0 and 1.\n \"\"\"\n if window != -1.0:\n bounding_matrix = sakoe_chiba(x, y, window)\n elif itakura_max_slope != -1.0:\n bounding_matrix = itakura_parallelogram(x, y, itakura_max_slope)\n else:\n bounding_matrix = no_bounding(x, y)\n\n return bounding_matrix\n", "path": "sktime/distances/_lower_bounding_numba.py"}]}
3,684
413