| Column | Type | Length / range |
|---|---|---|
| problem_id | string | 18–22 chars |
| source | string | 1 distinct value |
| task_type | string | 1 distinct value |
| in_source_id | string | 13–58 chars |
| prompt | string | 1.1k–10.2k chars |
| golden_diff | string | 151–4.94k chars |
| verification_info | string | 582–21k chars |
| num_tokens | int64 | 271–2.05k |
| num_tokens_diff | int64 | 47–1.02k |
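Each row below is a preview of one record. The `prompt` column already bundles the issue text, the candidate files, and the output-format instructions, so it can be passed to a model as-is, while `verification_info` carries the reference patch plus the file contents before and after it. A minimal sketch of loading and inspecting records with this schema via the `datasets` library follows; the hub path and split name are assumptions taken from the `source` column, not a confirmed dataset ID.

```python
# Minimal usage sketch. Assumptions: the hub path is taken from the `source`
# column and the split name is guessed; the actual ID of this derived dataset
# may differ.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")  # hypothetical path/split

row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens"], row["num_tokens_diff"])

# `verification_info` is stored as a JSON string holding the golden diff, the
# issue text, and the file contents before and after the patch.
info = json.loads(row["verification_info"])
print(sorted(info))               # e.g. ['after_files', 'before_files', 'golden_diff', 'issue']
print(info["golden_diff"][:200])  # first part of the reference patch
```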
gh_patches_debug_9340 | rasdani/github-patches | git_diff | spack__spack-15918 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Automake version mismatch in cryptsetup
### Spack version
Pulled e1238bae9053d3c283c1721f85af78f3539f473d (2020-03-27) without any additional configuration.
```console
$ spack --version
fatal: No names found, cannot describe anything.
0.14.1
```
### Steps to reproduce the issue
```console
$ spack install cryptsetup
...
219 ==> 32048: cryptsetup: Executing phase: 'build'
220 ==> [2020-03-27-11:07:25.493154, 7479] 'make' '-j2'
221 cd . && /bin/bash /tmp/ubuntu/spack-stage/spack-stage-cryptsetup-2.2.1-rsdkky2pjkptxrmbuckroajv
tcz4qkl5/spack-src/missing automake-1.16 --gnu Makefile
>> 222 configure.ac:19: error: version mismatch. This is Automake 1.16.2,
223 configure.ac:19: but the definition used by this AM_INIT_AUTOMAKE
224 configure.ac:19: comes from Automake 1.16.1. You should recreate
225 configure.ac:19: aclocal.m4 with aclocal and run automake again.
...
```
### Platform and user environment
```commandline
$ uname -a
Linux roman-spack-debug 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -d
Description: Ubuntu 18.04.4 LTS
```
I do not have any experience with automake/autoconf. However, the [documentation](https://spack.readthedocs.io/en/latest/build_systems/autotoolspackage.html) states that the `autoreconf` phase includes running `aclocal`. The debug file shows that `autoreconf` did run but `aclocal.m4` is not updated (its timestamp is 2019-09-06). I cannot tell whether it is a bug in `autoreconf` phase or `aclocal` only runs when `aclocal.m4` is not present. If I add `force_autoreconf = True` to the recipe, `aclocal.m4` is updated and installation is successful.
If the `force_autoreconf` setting is indeed required, I can create a pull request to fix the issue.
### General information
- [x] I have run `spack --version` and reported the version of Spack
- [x] I have searched the issues of this repo and believe this is not a duplicate
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/cryptsetup/package.py`
Content:
```
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class Cryptsetup(AutotoolsPackage):
10 """Cryptsetup and LUKS - open-source disk encryption."""
11
12 homepage = "https://gitlab.com/cryptsetup/cryptsetup"
13 url = "https://www.kernel.org/pub/linux/utils/cryptsetup/v2.2/cryptsetup-2.2.1.tar.xz"
14 list_url = "https://www.kernel.org/pub/linux/utils/cryptsetup/"
15 list_depth = 1
16
17 # If you're adding newer versions, check whether the patch below
18 # still needs to be applied.
19 version('2.3.1', sha256='92aba4d559a2cf7043faed92e0f22c5addea36bd63f8c039ba5a8f3a159fe7d2')
20 version('2.2.3', sha256='2af0ec9551ab9c870074cae9d3f68d82cab004f4095fa89db0e4413713424a46')
21 version('2.2.2', sha256='2af0ec9551ab9c870074cae9d3f68d82cab004f4095fa89db0e4413713424a46')
22 version('2.2.1', sha256='94e79a31ed38bdb0acd9af7ccca1605a2ac62ca850ed640202876b1ee11c1c61')
23
24 depends_on('libuuid', type=('build', 'link'))
25 depends_on('lvm2', type=('build', 'link'))
26 depends_on('popt', type=('build', 'link'))
27 depends_on('json-c', type=('build', 'link'))
28 depends_on('util-linux~libuuid', type=('build', 'link'))
29 depends_on('gettext', type=('build', 'link'))
30
31 depends_on('pkgconfig', type='build')
32 depends_on('autoconf', type='build')
33 depends_on('automake', type='build')
34 depends_on('libtool', type='build')
35 depends_on('m4', type='build')
36
37 # Upstream includes support for discovering the location of the libintl
38 # library but is missing the bit in the Makefile.ac that includes it in
39 # the LDFLAGS. See https://gitlab.com/cryptsetup/cryptsetup/issues/479
40 # This *should* be unnecessary starting with release 2.2.2, see
41 # https://gitlab.com/cryptsetup/cryptsetup/issues/479#note_227617031
42 patch('autotools-libintl.patch', when='@:2.2.1')
43
44 def url_for_version(self, version):
45 url = "https://www.kernel.org/pub/linux/utils/cryptsetup/v{0}/cryptsetup-{1}.tar.xz"
46 return url.format(version.up_to(2), version)
47
48 def configure_args(self):
49 args = [
50 'systemd_tmpfilesdir={0}/tmpfiles.d'.format(self.prefix)
51 ]
52 return args
53
54 def setup_dependent_build_environment(self, env, dependent_spec):
55 """Prepend the sbin directory to PATH."""
56 env.prepend_path('PATH', self.prefix.sbin)
57
58 def setup_dependent_run_environment(self, env, dependent_spec):
59 """Prepend the sbin directory to PATH."""
60 env.prepend_path('PATH', self.prefix.sbin)
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/var/spack/repos/builtin/packages/cryptsetup/package.py b/var/spack/repos/builtin/packages/cryptsetup/package.py
--- a/var/spack/repos/builtin/packages/cryptsetup/package.py
+++ b/var/spack/repos/builtin/packages/cryptsetup/package.py
@@ -34,6 +34,8 @@
depends_on('libtool', type='build')
depends_on('m4', type='build')
+ depends_on('automake@:1.16.1', when='@2.2.1', type='build')
+
# Upstream includes support for discovering the location of the libintl
# library but is missing the bit in the Makefile.ac that includes it in
# the LDFLAGS. See https://gitlab.com/cryptsetup/cryptsetup/issues/479
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/cryptsetup/package.py b/var/spack/repos/builtin/packages/cryptsetup/package.py\n--- a/var/spack/repos/builtin/packages/cryptsetup/package.py\n+++ b/var/spack/repos/builtin/packages/cryptsetup/package.py\n@@ -34,6 +34,8 @@\n depends_on('libtool', type='build')\n depends_on('m4', type='build')\n \n+ depends_on('automake@:1.16.1', when='@2.2.1', type='build')\n+\n # Upstream includes support for discovering the location of the libintl\n # library but is missing the bit in the Makefile.ac that includes it in\n # the LDFLAGS. See https://gitlab.com/cryptsetup/cryptsetup/issues/479\n", "issue": "Automake version mismatch in cryptsetup\n### Spack version\r\nPulled e1238bae9053d3c283c1721f85af78f3539f473d (2020-03-27) without any additional configuration. \r\n```console\r\n$ spack --version\r\nfatal: No names found, cannot describe anything.\r\n0.14.1\r\n```\r\n\r\n### Steps to reproduce the issue\r\n\r\n```console\r\n$ spack install cryptsetup\r\n...\r\n 219 ==> 32048: cryptsetup: Executing phase: 'build'\r\n 220 ==> [2020-03-27-11:07:25.493154, 7479] 'make' '-j2'\r\n 221 cd . && /bin/bash /tmp/ubuntu/spack-stage/spack-stage-cryptsetup-2.2.1-rsdkky2pjkptxrmbuckroajv\r\n tcz4qkl5/spack-src/missing automake-1.16 --gnu Makefile\r\n >> 222 configure.ac:19: error: version mismatch. This is Automake 1.16.2,\r\n 223 configure.ac:19: but the definition used by this AM_INIT_AUTOMAKE\r\n 224 configure.ac:19: comes from Automake 1.16.1. You should recreate\r\n 225 configure.ac:19: aclocal.m4 with aclocal and run automake again.\r\n...\r\n```\r\n\r\n### Platform and user environment\r\n```commandline\r\n$ uname -a \r\nLinux roman-spack-debug 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux\r\n$ lsb_release -d\r\nDescription: Ubuntu 18.04.4 LTS\r\n``` \r\n\r\nI do not have any experience with automake/autoconf. However, the [documentation](https://spack.readthedocs.io/en/latest/build_systems/autotoolspackage.html) states that the `autoreconf` phase includes running `aclocal`. The debug file shows that `autoreconf` did run but `aclocal.m4` is not updated (its timestamp is 2019-09-06). I cannot tell whether it is a bug in `autoreconf` phase or `aclocal` only runs when `aclocal.m4` is not present. If I add `force_autoreconf = True` to the recipe, `aclocal.m4` is updated and installation is successful.\r\n\r\nIf the `force_autoreconf` setting is indeed required, I can create a pull request to fix the issue.\r\n\r\n### General information\r\n\r\n- [x] I have run `spack --version` and reported the version of Spack\r\n- [x] I have searched the issues of this repo and believe this is not a duplicate\r\n\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Cryptsetup(AutotoolsPackage):\n \"\"\"Cryptsetup and LUKS - open-source disk encryption.\"\"\"\n\n homepage = \"https://gitlab.com/cryptsetup/cryptsetup\"\n url = \"https://www.kernel.org/pub/linux/utils/cryptsetup/v2.2/cryptsetup-2.2.1.tar.xz\"\n list_url = \"https://www.kernel.org/pub/linux/utils/cryptsetup/\"\n list_depth = 1\n\n # If you're adding newer versions, check whether the patch below\n # still needs to be applied.\n version('2.3.1', sha256='92aba4d559a2cf7043faed92e0f22c5addea36bd63f8c039ba5a8f3a159fe7d2')\n version('2.2.3', sha256='2af0ec9551ab9c870074cae9d3f68d82cab004f4095fa89db0e4413713424a46')\n version('2.2.2', sha256='2af0ec9551ab9c870074cae9d3f68d82cab004f4095fa89db0e4413713424a46')\n version('2.2.1', sha256='94e79a31ed38bdb0acd9af7ccca1605a2ac62ca850ed640202876b1ee11c1c61')\n\n depends_on('libuuid', type=('build', 'link'))\n depends_on('lvm2', type=('build', 'link'))\n depends_on('popt', type=('build', 'link'))\n depends_on('json-c', type=('build', 'link'))\n depends_on('util-linux~libuuid', type=('build', 'link'))\n depends_on('gettext', type=('build', 'link'))\n\n depends_on('pkgconfig', type='build')\n depends_on('autoconf', type='build')\n depends_on('automake', type='build')\n depends_on('libtool', type='build')\n depends_on('m4', type='build')\n\n # Upstream includes support for discovering the location of the libintl\n # library but is missing the bit in the Makefile.ac that includes it in\n # the LDFLAGS. See https://gitlab.com/cryptsetup/cryptsetup/issues/479\n # This *should* be unnecessary starting with release 2.2.2, see\n # https://gitlab.com/cryptsetup/cryptsetup/issues/479#note_227617031\n patch('autotools-libintl.patch', when='@:2.2.1')\n\n def url_for_version(self, version):\n url = \"https://www.kernel.org/pub/linux/utils/cryptsetup/v{0}/cryptsetup-{1}.tar.xz\"\n return url.format(version.up_to(2), version)\n\n def configure_args(self):\n args = [\n 'systemd_tmpfilesdir={0}/tmpfiles.d'.format(self.prefix)\n ]\n return args\n\n def setup_dependent_build_environment(self, env, dependent_spec):\n \"\"\"Prepend the sbin directory to PATH.\"\"\"\n env.prepend_path('PATH', self.prefix.sbin)\n\n def setup_dependent_run_environment(self, env, dependent_spec):\n \"\"\"Prepend the sbin directory to PATH.\"\"\"\n env.prepend_path('PATH', self.prefix.sbin)\n", "path": "var/spack/repos/builtin/packages/cryptsetup/package.py"}], "after_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Cryptsetup(AutotoolsPackage):\n \"\"\"Cryptsetup and LUKS - open-source disk encryption.\"\"\"\n\n homepage = \"https://gitlab.com/cryptsetup/cryptsetup\"\n url = \"https://www.kernel.org/pub/linux/utils/cryptsetup/v2.2/cryptsetup-2.2.1.tar.xz\"\n list_url = \"https://www.kernel.org/pub/linux/utils/cryptsetup/\"\n list_depth = 1\n\n # If you're adding newer versions, check whether the patch below\n # still needs to be applied.\n version('2.3.1', sha256='92aba4d559a2cf7043faed92e0f22c5addea36bd63f8c039ba5a8f3a159fe7d2')\n version('2.2.3', sha256='2af0ec9551ab9c870074cae9d3f68d82cab004f4095fa89db0e4413713424a46')\n version('2.2.2', sha256='2af0ec9551ab9c870074cae9d3f68d82cab004f4095fa89db0e4413713424a46')\n version('2.2.1', sha256='94e79a31ed38bdb0acd9af7ccca1605a2ac62ca850ed640202876b1ee11c1c61')\n\n depends_on('libuuid', type=('build', 'link'))\n depends_on('lvm2', type=('build', 'link'))\n depends_on('popt', type=('build', 'link'))\n depends_on('json-c', type=('build', 'link'))\n depends_on('util-linux~libuuid', type=('build', 'link'))\n depends_on('gettext', type=('build', 'link'))\n\n depends_on('pkgconfig', type='build')\n depends_on('autoconf', type='build')\n depends_on('automake', type='build')\n depends_on('libtool', type='build')\n depends_on('m4', type='build')\n\n depends_on('automake@:1.16.1', when='@2.2.1', type='build')\n\n # Upstream includes support for discovering the location of the libintl\n # library but is missing the bit in the Makefile.ac that includes it in\n # the LDFLAGS. See https://gitlab.com/cryptsetup/cryptsetup/issues/479\n # This *should* be unnecessary starting with release 2.2.2, see\n # https://gitlab.com/cryptsetup/cryptsetup/issues/479#note_227617031\n patch('autotools-libintl.patch', when='@:2.2.1')\n\n def url_for_version(self, version):\n url = \"https://www.kernel.org/pub/linux/utils/cryptsetup/v{0}/cryptsetup-{1}.tar.xz\"\n return url.format(version.up_to(2), version)\n\n def configure_args(self):\n args = [\n 'systemd_tmpfilesdir={0}/tmpfiles.d'.format(self.prefix)\n ]\n return args\n\n def setup_dependent_build_environment(self, env, dependent_spec):\n \"\"\"Prepend the sbin directory to PATH.\"\"\"\n env.prepend_path('PATH', self.prefix.sbin)\n\n def setup_dependent_run_environment(self, env, dependent_spec):\n \"\"\"Prepend the sbin directory to PATH.\"\"\"\n env.prepend_path('PATH', self.prefix.sbin)\n", "path": "var/spack/repos/builtin/packages/cryptsetup/package.py"}]}
| 1,966 | 186 |
gh_patches_debug_13861 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1011 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FileExistsError : [WinError 183] Cannot create a file when that file already exists
version : 1.15.2
OS : Windows 10
```console
$ pre-commit install
An unexpected error has occurred: FileExistsError: [WinError 183] Cannot create a file when that file already exists: '.git\\hooks\\pre-commit' -> '.git\\hooks\\pre-commit.legacy'
```
when i tried second times this was the error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/commands/install_uninstall.py`
Content:
```
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import io
5 import itertools
6 import logging
7 import os.path
8 import sys
9
10 from pre_commit import git
11 from pre_commit import output
12 from pre_commit.clientlib import load_config
13 from pre_commit.repository import all_hooks
14 from pre_commit.repository import install_hook_envs
15 from pre_commit.util import cmd_output
16 from pre_commit.util import make_executable
17 from pre_commit.util import mkdirp
18 from pre_commit.util import resource_text
19
20
21 logger = logging.getLogger(__name__)
22
23 # This is used to identify the hook file we install
24 PRIOR_HASHES = (
25 '4d9958c90bc262f47553e2c073f14cfe',
26 'd8ee923c46731b42cd95cc869add4062',
27 '49fd668cb42069aa1b6048464be5d395',
28 '79f09a650522a87b0da915d0d983b2de',
29 'e358c9dae00eac5d06b38dfdb1e33a8c',
30 )
31 CURRENT_HASH = '138fd403232d2ddd5efb44317e38bf03'
32 TEMPLATE_START = '# start templated\n'
33 TEMPLATE_END = '# end templated\n'
34
35
36 def _hook_paths(hook_type):
37 pth = os.path.join(git.get_git_dir(), 'hooks', hook_type)
38 return pth, '{}.legacy'.format(pth)
39
40
41 def is_our_script(filename):
42 if not os.path.exists(filename): # pragma: windows no cover (symlink)
43 return False
44 with io.open(filename) as f:
45 contents = f.read()
46 return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)
47
48
49 def shebang():
50 if sys.platform == 'win32':
51 py = 'python'
52 else:
53 # Homebrew/homebrew-core#35825: be more timid about appropriate `PATH`
54 path_choices = [p for p in os.defpath.split(os.pathsep) if p]
55 exe_choices = [
56 'python{}'.format('.'.join(str(v) for v in sys.version_info[:i]))
57 for i in range(3)
58 ]
59 for path, exe in itertools.product(path_choices, exe_choices):
60 if os.path.exists(os.path.join(path, exe)):
61 py = exe
62 break
63 else:
64 py = 'python'
65 return '#!/usr/bin/env {}'.format(py)
66
67
68 def install(
69 config_file, store,
70 overwrite=False, hooks=False, hook_type='pre-commit',
71 skip_on_missing_conf=False,
72 ):
73 """Install the pre-commit hooks."""
74 if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():
75 logger.error(
76 'Cowardly refusing to install hooks with `core.hooksPath` set.\n'
77 'hint: `git config --unset-all core.hooksPath`',
78 )
79 return 1
80
81 hook_path, legacy_path = _hook_paths(hook_type)
82
83 mkdirp(os.path.dirname(hook_path))
84
85 # If we have an existing hook, move it to pre-commit.legacy
86 if os.path.lexists(hook_path) and not is_our_script(hook_path):
87 os.rename(hook_path, legacy_path)
88
89 # If we specify overwrite, we simply delete the legacy file
90 if overwrite and os.path.exists(legacy_path):
91 os.remove(legacy_path)
92 elif os.path.exists(legacy_path):
93 output.write_line(
94 'Running in migration mode with existing hooks at {}\n'
95 'Use -f to use only pre-commit.'.format(legacy_path),
96 )
97
98 params = {
99 'CONFIG': config_file,
100 'HOOK_TYPE': hook_type,
101 'INSTALL_PYTHON': sys.executable,
102 'SKIP_ON_MISSING_CONFIG': skip_on_missing_conf,
103 }
104
105 with io.open(hook_path, 'w') as hook_file:
106 contents = resource_text('hook-tmpl')
107 before, rest = contents.split(TEMPLATE_START)
108 to_template, after = rest.split(TEMPLATE_END)
109
110 before = before.replace('#!/usr/bin/env python3', shebang())
111
112 hook_file.write(before + TEMPLATE_START)
113 for line in to_template.splitlines():
114 var = line.split()[0]
115 hook_file.write('{} = {!r}\n'.format(var, params[var]))
116 hook_file.write(TEMPLATE_END + after)
117 make_executable(hook_path)
118
119 output.write_line('pre-commit installed at {}'.format(hook_path))
120
121 # If they requested we install all of the hooks, do so.
122 if hooks:
123 install_hooks(config_file, store)
124
125 return 0
126
127
128 def install_hooks(config_file, store):
129 install_hook_envs(all_hooks(load_config(config_file), store), store)
130
131
132 def uninstall(hook_type='pre-commit'):
133 """Uninstall the pre-commit hooks."""
134 hook_path, legacy_path = _hook_paths(hook_type)
135
136 # If our file doesn't exist or it isn't ours, gtfo.
137 if not os.path.exists(hook_path) or not is_our_script(hook_path):
138 return 0
139
140 os.remove(hook_path)
141 output.write_line('{} uninstalled'.format(hook_type))
142
143 if os.path.exists(legacy_path):
144 os.rename(legacy_path, hook_path)
145 output.write_line('Restored previous hooks to {}'.format(hook_path))
146
147 return 0
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -5,6 +5,7 @@
import itertools
import logging
import os.path
+import shutil
import sys
from pre_commit import git
@@ -84,7 +85,7 @@
# If we have an existing hook, move it to pre-commit.legacy
if os.path.lexists(hook_path) and not is_our_script(hook_path):
- os.rename(hook_path, legacy_path)
+ shutil.move(hook_path, legacy_path)
# If we specify overwrite, we simply delete the legacy file
if overwrite and os.path.exists(legacy_path):
|
{"golden_diff": "diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py\n--- a/pre_commit/commands/install_uninstall.py\n+++ b/pre_commit/commands/install_uninstall.py\n@@ -5,6 +5,7 @@\n import itertools\n import logging\n import os.path\n+import shutil\n import sys\n \n from pre_commit import git\n@@ -84,7 +85,7 @@\n \n # If we have an existing hook, move it to pre-commit.legacy\n if os.path.lexists(hook_path) and not is_our_script(hook_path):\n- os.rename(hook_path, legacy_path)\n+ shutil.move(hook_path, legacy_path)\n \n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n", "issue": "FileExistsError : [WinError 183] Cannot create a file when that file already exists\nversion : 1.15.2\r\nOS : Windows 10\r\n\r\n```console\r\n$ pre-commit install\r\n\r\nAn unexpected error has occurred: FileExistsError: [WinError 183] Cannot create a file when that file already exists: '.git\\\\hooks\\\\pre-commit' -> '.git\\\\hooks\\\\pre-commit.legacy'\r\n```\r\n\r\nwhen i tried second times this was the error.\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport itertools\nimport logging\nimport os.path\nimport sys\n\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import make_executable\nfrom pre_commit.util import mkdirp\nfrom pre_commit.util import resource_text\n\n\nlogger = logging.getLogger(__name__)\n\n# This is used to identify the hook file we install\nPRIOR_HASHES = (\n '4d9958c90bc262f47553e2c073f14cfe',\n 'd8ee923c46731b42cd95cc869add4062',\n '49fd668cb42069aa1b6048464be5d395',\n '79f09a650522a87b0da915d0d983b2de',\n 'e358c9dae00eac5d06b38dfdb1e33a8c',\n)\nCURRENT_HASH = '138fd403232d2ddd5efb44317e38bf03'\nTEMPLATE_START = '# start templated\\n'\nTEMPLATE_END = '# end templated\\n'\n\n\ndef _hook_paths(hook_type):\n pth = os.path.join(git.get_git_dir(), 'hooks', hook_type)\n return pth, '{}.legacy'.format(pth)\n\n\ndef is_our_script(filename):\n if not os.path.exists(filename): # pragma: windows no cover (symlink)\n return False\n with io.open(filename) as f:\n contents = f.read()\n return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)\n\n\ndef shebang():\n if sys.platform == 'win32':\n py = 'python'\n else:\n # Homebrew/homebrew-core#35825: be more timid about appropriate `PATH`\n path_choices = [p for p in os.defpath.split(os.pathsep) if p]\n exe_choices = [\n 'python{}'.format('.'.join(str(v) for v in sys.version_info[:i]))\n for i in range(3)\n ]\n for path, exe in itertools.product(path_choices, exe_choices):\n if os.path.exists(os.path.join(path, exe)):\n py = exe\n break\n else:\n py = 'python'\n return '#!/usr/bin/env {}'.format(py)\n\n\ndef install(\n config_file, store,\n overwrite=False, hooks=False, hook_type='pre-commit',\n skip_on_missing_conf=False,\n):\n \"\"\"Install the pre-commit hooks.\"\"\"\n if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():\n logger.error(\n 'Cowardly refusing to install hooks with `core.hooksPath` set.\\n'\n 'hint: `git config --unset-all core.hooksPath`',\n )\n return 1\n\n hook_path, legacy_path = _hook_paths(hook_type)\n\n mkdirp(os.path.dirname(hook_path))\n\n # If we have an existing hook, move it to pre-commit.legacy\n 
if os.path.lexists(hook_path) and not is_our_script(hook_path):\n os.rename(hook_path, legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n elif os.path.exists(legacy_path):\n output.write_line(\n 'Running in migration mode with existing hooks at {}\\n'\n 'Use -f to use only pre-commit.'.format(legacy_path),\n )\n\n params = {\n 'CONFIG': config_file,\n 'HOOK_TYPE': hook_type,\n 'INSTALL_PYTHON': sys.executable,\n 'SKIP_ON_MISSING_CONFIG': skip_on_missing_conf,\n }\n\n with io.open(hook_path, 'w') as hook_file:\n contents = resource_text('hook-tmpl')\n before, rest = contents.split(TEMPLATE_START)\n to_template, after = rest.split(TEMPLATE_END)\n\n before = before.replace('#!/usr/bin/env python3', shebang())\n\n hook_file.write(before + TEMPLATE_START)\n for line in to_template.splitlines():\n var = line.split()[0]\n hook_file.write('{} = {!r}\\n'.format(var, params[var]))\n hook_file.write(TEMPLATE_END + after)\n make_executable(hook_path)\n\n output.write_line('pre-commit installed at {}'.format(hook_path))\n\n # If they requested we install all of the hooks, do so.\n if hooks:\n install_hooks(config_file, store)\n\n return 0\n\n\ndef install_hooks(config_file, store):\n install_hook_envs(all_hooks(load_config(config_file), store), store)\n\n\ndef uninstall(hook_type='pre-commit'):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n hook_path, legacy_path = _hook_paths(hook_type)\n\n # If our file doesn't exist or it isn't ours, gtfo.\n if not os.path.exists(hook_path) or not is_our_script(hook_path):\n return 0\n\n os.remove(hook_path)\n output.write_line('{} uninstalled'.format(hook_type))\n\n if os.path.exists(legacy_path):\n os.rename(legacy_path, hook_path)\n output.write_line('Restored previous hooks to {}'.format(hook_path))\n\n return 0\n", "path": "pre_commit/commands/install_uninstall.py"}], "after_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport itertools\nimport logging\nimport os.path\nimport shutil\nimport sys\n\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import make_executable\nfrom pre_commit.util import mkdirp\nfrom pre_commit.util import resource_text\n\n\nlogger = logging.getLogger(__name__)\n\n# This is used to identify the hook file we install\nPRIOR_HASHES = (\n '4d9958c90bc262f47553e2c073f14cfe',\n 'd8ee923c46731b42cd95cc869add4062',\n '49fd668cb42069aa1b6048464be5d395',\n '79f09a650522a87b0da915d0d983b2de',\n 'e358c9dae00eac5d06b38dfdb1e33a8c',\n)\nCURRENT_HASH = '138fd403232d2ddd5efb44317e38bf03'\nTEMPLATE_START = '# start templated\\n'\nTEMPLATE_END = '# end templated\\n'\n\n\ndef _hook_paths(hook_type):\n pth = os.path.join(git.get_git_dir(), 'hooks', hook_type)\n return pth, '{}.legacy'.format(pth)\n\n\ndef is_our_script(filename):\n if not os.path.exists(filename): # pragma: windows no cover (symlink)\n return False\n with io.open(filename) as f:\n contents = f.read()\n return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)\n\n\ndef shebang():\n if sys.platform == 'win32':\n py = 'python'\n else:\n # Homebrew/homebrew-core#35825: be more timid about appropriate `PATH`\n path_choices = [p for p in os.defpath.split(os.pathsep) if p]\n exe_choices = [\n 
'python{}'.format('.'.join(str(v) for v in sys.version_info[:i]))\n for i in range(3)\n ]\n for path, exe in itertools.product(path_choices, exe_choices):\n if os.path.exists(os.path.join(path, exe)):\n py = exe\n break\n else:\n py = 'python'\n return '#!/usr/bin/env {}'.format(py)\n\n\ndef install(\n config_file, store,\n overwrite=False, hooks=False, hook_type='pre-commit',\n skip_on_missing_conf=False,\n):\n \"\"\"Install the pre-commit hooks.\"\"\"\n if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():\n logger.error(\n 'Cowardly refusing to install hooks with `core.hooksPath` set.\\n'\n 'hint: `git config --unset-all core.hooksPath`',\n )\n return 1\n\n hook_path, legacy_path = _hook_paths(hook_type)\n\n mkdirp(os.path.dirname(hook_path))\n\n # If we have an existing hook, move it to pre-commit.legacy\n if os.path.lexists(hook_path) and not is_our_script(hook_path):\n shutil.move(hook_path, legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n elif os.path.exists(legacy_path):\n output.write_line(\n 'Running in migration mode with existing hooks at {}\\n'\n 'Use -f to use only pre-commit.'.format(legacy_path),\n )\n\n params = {\n 'CONFIG': config_file,\n 'HOOK_TYPE': hook_type,\n 'INSTALL_PYTHON': sys.executable,\n 'SKIP_ON_MISSING_CONFIG': skip_on_missing_conf,\n }\n\n with io.open(hook_path, 'w') as hook_file:\n contents = resource_text('hook-tmpl')\n before, rest = contents.split(TEMPLATE_START)\n to_template, after = rest.split(TEMPLATE_END)\n\n before = before.replace('#!/usr/bin/env python3', shebang())\n\n hook_file.write(before + TEMPLATE_START)\n for line in to_template.splitlines():\n var = line.split()[0]\n hook_file.write('{} = {!r}\\n'.format(var, params[var]))\n hook_file.write(TEMPLATE_END + after)\n make_executable(hook_path)\n\n output.write_line('pre-commit installed at {}'.format(hook_path))\n\n # If they requested we install all of the hooks, do so.\n if hooks:\n install_hooks(config_file, store)\n\n return 0\n\n\ndef install_hooks(config_file, store):\n install_hook_envs(all_hooks(load_config(config_file), store), store)\n\n\ndef uninstall(hook_type='pre-commit'):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n hook_path, legacy_path = _hook_paths(hook_type)\n\n # If our file doesn't exist or it isn't ours, gtfo.\n if not os.path.exists(hook_path) or not is_our_script(hook_path):\n return 0\n\n os.remove(hook_path)\n output.write_line('{} uninstalled'.format(hook_type))\n\n if os.path.exists(legacy_path):\n os.rename(legacy_path, hook_path)\n output.write_line('Restored previous hooks to {}'.format(hook_path))\n\n return 0\n", "path": "pre_commit/commands/install_uninstall.py"}]}
| 1,961 | 175 |
gh_patches_debug_34787 | rasdani/github-patches | git_diff | netket__netket-394 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rename `netket.stats.Stats.R` to `Rhat`?
As we have seen in the discussion on #365, the `stats.R` parameter is ambiguously named; it is sometimes called "Gelman-Rubin" statistic or "potential scale reduction", but I think the most common way to refer to it is $\hat R$ in LaTeX math or "Rhat" in words (this name is used, e.g., by the [stan](https://mc-stan.org/rstan/reference/Rhat.html) framework). So I propose we also call it `Rhat` starting in 3.0.
What do you think?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netket/stats/mc_stats.py`
Content:
```
1 from numba import jit
2 import numpy as _np
3 from . import mean as _mean
4 from . import var as _var
5 from . import total_size as _total_size
6
7
8 def _format_decimal(value, std):
9 decimals = max(int(_np.ceil(-_np.log10(std))), 0)
10 return (
11 "{0:.{1}f}".format(value, decimals),
12 "{0:.{1}f}".format(std, decimals + 1),
13 )
14
15
16 class Stats:
17 """A dict-compatible class containing the result of the statistics function."""
18
19 _NaN = float("NaN")
20
21 def __init__(
22 self, mean=_NaN, error_of_mean=_NaN, variance=_NaN, tau_corr=_NaN, R=_NaN
23 ):
24 self.mean = mean
25 self.error_of_mean = error_of_mean
26 self.variance = variance
27 self.tau_corr = tau_corr
28 self.R = R
29
30 def to_json(self):
31 jsd = {}
32 jsd["Mean"] = self.mean.real
33 jsd["Variance"] = self.variance
34 jsd["Sigma"] = self.error_of_mean
35 jsd["R"] = self.R
36 jsd["TauCorr"] = self.tau_corr
37 return jsd
38
39 def __repr__(self):
40 mean, err = _format_decimal(self.mean, self.error_of_mean)
41 return "{} ± {} [var={:.1e}, R={:.4f}]".format(
42 mean, err, self.variance, self.R
43 )
44
45 def __getitem__(self, name):
46 if name is "mean" or "Mean":
47 return self.mean
48 elif name is "variance" or "Variance":
49 return self.variance
50 elif name is "error_of_mean" or "Sigma":
51 return self.error_of_mean
52 elif name is "R":
53 return self.R
54 elif name is "tau_corr" or "TauCorr":
55 return self.tau_corr
56
57
58 @jit(nopython=True)
59 def _get_blocks(data, l):
60 n_blocks = int(_np.floor(data.shape[1] / float(l)))
61 blocks = _np.empty(data.shape[0] * n_blocks, dtype=data.dtype)
62 k = 0
63 for i in range(data.shape[0]):
64 for b in range(n_blocks):
65 blocks[k] = data[i, b * l : (b + 1) * l].mean()
66 k += 1
67 return blocks
68
69
70 def _block_variance(data, l):
71 blocks = _get_blocks(data, l)
72 ts = _total_size(blocks)
73 if ts > 0:
74 return _var(blocks) / float(ts), ts
75 else:
76 return _np.nan, 0
77
78
79 def _batch_variance(data):
80 b_means = _np.mean(data, axis=1)
81 ts = _total_size(b_means)
82 return _var(b_means), ts
83
84
85 def statistics(data):
86 r"""
87 Returns statistics of a given array (or matrix, see below) containing a stream of data.
88 This is particularly useful to analyze Markov Chain data, but it can be used
89 also for other type of time series.
90
91 Args:
92 data (vector or matrix): The input data. It can be real or complex valued.
93 * if a vector, it is assumed that this is a time
94 series of data (not necessarily independent).
95 * if a matrix, it is assumed that that rows data[i]
96 contain independent time series.
97
98 Returns:
99 Stats: A dictionary-compatible class containing the average (mean),
100 the variance (variance),
101 the error of the mean (error_of_mean), and an estimate of the
102 autocorrelation time (tau_corr). In addition to accessing the elements with the standard
103 dict sintax (e.g. res['mean']), one can also access them directly with the dot operator
104 (e.g. res.mean).
105 """
106
107 stats = Stats()
108 data = _np.atleast_1d(data)
109 if data.ndim == 1:
110 data = data.reshape((1, -1))
111
112 if data.ndim > 2:
113 raise NotImplementedError("Statistics are implemented only for ndim<=2")
114
115 stats.mean = _mean(data)
116 stats.variance = _var(data)
117
118 ts = _total_size(data)
119
120 bare_var = stats.variance / ts
121
122 batch_var, n_batches = _batch_variance(data)
123
124 b_s = 32
125 l_block = max(1, data.shape[1] // b_s)
126
127 block_var, n_blocks = _block_variance(data, l_block)
128
129 tau_batch = (batch_var / bare_var - 1) * 0.5
130 tau_block = (block_var / bare_var - 1) * 0.5
131
132 block_good = tau_block < 6 * l_block and n_blocks >= b_s
133 batch_good = tau_batch < 6 * data.shape[1] and n_batches >= b_s
134
135 if batch_good:
136 stats.error_of_mean = _np.sqrt(batch_var / ts)
137 stats.tau_corr = max(0, tau_batch)
138 elif block_good:
139 stats.error_of_mean = _np.sqrt(block_var)
140 stats.tau_corr = max(0, tau_block)
141 else:
142 stats.error_of_mean = _np.nan
143 stats.tau_corr = _np.nan
144
145 if n_batches > 1:
146 N = data.shape[-1]
147 stats.R = _np.sqrt((N - 1) / N + batch_var / stats.variance)
148
149 return stats
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/netket/stats/mc_stats.py b/netket/stats/mc_stats.py
--- a/netket/stats/mc_stats.py
+++ b/netket/stats/mc_stats.py
@@ -19,27 +19,32 @@
_NaN = float("NaN")
def __init__(
- self, mean=_NaN, error_of_mean=_NaN, variance=_NaN, tau_corr=_NaN, R=_NaN
+ self,
+ mean=_NaN,
+ error_of_mean=_NaN,
+ variance=_NaN,
+ tau_corr=_NaN,
+ R_hat=_NaN,
):
self.mean = mean
self.error_of_mean = error_of_mean
self.variance = variance
self.tau_corr = tau_corr
- self.R = R
+ self.R_hat = R_hat
def to_json(self):
jsd = {}
jsd["Mean"] = self.mean.real
jsd["Variance"] = self.variance
jsd["Sigma"] = self.error_of_mean
- jsd["R"] = self.R
+ jsd["R_hat"] = self.R_hat
jsd["TauCorr"] = self.tau_corr
return jsd
def __repr__(self):
mean, err = _format_decimal(self.mean, self.error_of_mean)
- return "{} ± {} [var={:.1e}, R={:.4f}]".format(
- mean, err, self.variance, self.R
+ return "{} ± {} [var={:.1e}, R_hat={:.4f}]".format(
+ mean, err, self.variance, self.R_hat
)
def __getitem__(self, name):
@@ -49,8 +54,8 @@
return self.variance
elif name is "error_of_mean" or "Sigma":
return self.error_of_mean
- elif name is "R":
- return self.R
+ elif name is "R_hat":
+ return self.R_hat
elif name is "tau_corr" or "TauCorr":
return self.tau_corr
@@ -144,6 +149,13 @@
if n_batches > 1:
N = data.shape[-1]
- stats.R = _np.sqrt((N - 1) / N + batch_var / stats.variance)
+
+ # V_loc = _np.var(data, axis=-1, ddof=0)
+ # W_loc = _np.mean(V_loc)
+ # W = _mean(W_loc)
+ # # This approximation seems to hold well enough for larger n_samples
+ W = stats.variance
+
+ stats.R_hat = _np.sqrt((N - 1) / N + batch_var / W)
return stats
|
{"golden_diff": "diff --git a/netket/stats/mc_stats.py b/netket/stats/mc_stats.py\n--- a/netket/stats/mc_stats.py\n+++ b/netket/stats/mc_stats.py\n@@ -19,27 +19,32 @@\n _NaN = float(\"NaN\")\n \n def __init__(\n- self, mean=_NaN, error_of_mean=_NaN, variance=_NaN, tau_corr=_NaN, R=_NaN\n+ self,\n+ mean=_NaN,\n+ error_of_mean=_NaN,\n+ variance=_NaN,\n+ tau_corr=_NaN,\n+ R_hat=_NaN,\n ):\n self.mean = mean\n self.error_of_mean = error_of_mean\n self.variance = variance\n self.tau_corr = tau_corr\n- self.R = R\n+ self.R_hat = R_hat\n \n def to_json(self):\n jsd = {}\n jsd[\"Mean\"] = self.mean.real\n jsd[\"Variance\"] = self.variance\n jsd[\"Sigma\"] = self.error_of_mean\n- jsd[\"R\"] = self.R\n+ jsd[\"R_hat\"] = self.R_hat\n jsd[\"TauCorr\"] = self.tau_corr\n return jsd\n \n def __repr__(self):\n mean, err = _format_decimal(self.mean, self.error_of_mean)\n- return \"{} \u00b1 {} [var={:.1e}, R={:.4f}]\".format(\n- mean, err, self.variance, self.R\n+ return \"{} \u00b1 {} [var={:.1e}, R_hat={:.4f}]\".format(\n+ mean, err, self.variance, self.R_hat\n )\n \n def __getitem__(self, name):\n@@ -49,8 +54,8 @@\n return self.variance\n elif name is \"error_of_mean\" or \"Sigma\":\n return self.error_of_mean\n- elif name is \"R\":\n- return self.R\n+ elif name is \"R_hat\":\n+ return self.R_hat\n elif name is \"tau_corr\" or \"TauCorr\":\n return self.tau_corr\n \n@@ -144,6 +149,13 @@\n \n if n_batches > 1:\n N = data.shape[-1]\n- stats.R = _np.sqrt((N - 1) / N + batch_var / stats.variance)\n+\n+ # V_loc = _np.var(data, axis=-1, ddof=0)\n+ # W_loc = _np.mean(V_loc)\n+ # W = _mean(W_loc)\n+ # # This approximation seems to hold well enough for larger n_samples\n+ W = stats.variance\n+\n+ stats.R_hat = _np.sqrt((N - 1) / N + batch_var / W)\n \n return stats\n", "issue": "Rename `netket.stats.Stats.R` to `Rhat`?\nAs we have seen in the discussion on #365, the `stats.R` parameter is ambiguously named; it is sometimes called \"Gelman-Rubin\" statistic or \"potential scale reduction\", but I think the most common way to refer to it is $\\hat R$ in LaTeX math or \"Rhat\" in words (this name is used, e.g., by the [stan](https://mc-stan.org/rstan/reference/Rhat.html) framework). So I propose we also call it `Rhat` starting in 3.0.\r\n\r\nWhat do you think?\n", "before_files": [{"content": "from numba import jit\nimport numpy as _np\nfrom . import mean as _mean\nfrom . import var as _var\nfrom . 
import total_size as _total_size\n\n\ndef _format_decimal(value, std):\n decimals = max(int(_np.ceil(-_np.log10(std))), 0)\n return (\n \"{0:.{1}f}\".format(value, decimals),\n \"{0:.{1}f}\".format(std, decimals + 1),\n )\n\n\nclass Stats:\n \"\"\"A dict-compatible class containing the result of the statistics function.\"\"\"\n\n _NaN = float(\"NaN\")\n\n def __init__(\n self, mean=_NaN, error_of_mean=_NaN, variance=_NaN, tau_corr=_NaN, R=_NaN\n ):\n self.mean = mean\n self.error_of_mean = error_of_mean\n self.variance = variance\n self.tau_corr = tau_corr\n self.R = R\n\n def to_json(self):\n jsd = {}\n jsd[\"Mean\"] = self.mean.real\n jsd[\"Variance\"] = self.variance\n jsd[\"Sigma\"] = self.error_of_mean\n jsd[\"R\"] = self.R\n jsd[\"TauCorr\"] = self.tau_corr\n return jsd\n\n def __repr__(self):\n mean, err = _format_decimal(self.mean, self.error_of_mean)\n return \"{} \u00b1 {} [var={:.1e}, R={:.4f}]\".format(\n mean, err, self.variance, self.R\n )\n\n def __getitem__(self, name):\n if name is \"mean\" or \"Mean\":\n return self.mean\n elif name is \"variance\" or \"Variance\":\n return self.variance\n elif name is \"error_of_mean\" or \"Sigma\":\n return self.error_of_mean\n elif name is \"R\":\n return self.R\n elif name is \"tau_corr\" or \"TauCorr\":\n return self.tau_corr\n\n\n@jit(nopython=True)\ndef _get_blocks(data, l):\n n_blocks = int(_np.floor(data.shape[1] / float(l)))\n blocks = _np.empty(data.shape[0] * n_blocks, dtype=data.dtype)\n k = 0\n for i in range(data.shape[0]):\n for b in range(n_blocks):\n blocks[k] = data[i, b * l : (b + 1) * l].mean()\n k += 1\n return blocks\n\n\ndef _block_variance(data, l):\n blocks = _get_blocks(data, l)\n ts = _total_size(blocks)\n if ts > 0:\n return _var(blocks) / float(ts), ts\n else:\n return _np.nan, 0\n\n\ndef _batch_variance(data):\n b_means = _np.mean(data, axis=1)\n ts = _total_size(b_means)\n return _var(b_means), ts\n\n\ndef statistics(data):\n r\"\"\"\n Returns statistics of a given array (or matrix, see below) containing a stream of data.\n This is particularly useful to analyze Markov Chain data, but it can be used\n also for other type of time series.\n\n Args:\n data (vector or matrix): The input data. It can be real or complex valued.\n * if a vector, it is assumed that this is a time\n series of data (not necessarily independent).\n * if a matrix, it is assumed that that rows data[i]\n contain independent time series.\n\n Returns:\n Stats: A dictionary-compatible class containing the average (mean),\n the variance (variance),\n the error of the mean (error_of_mean), and an estimate of the\n autocorrelation time (tau_corr). In addition to accessing the elements with the standard\n dict sintax (e.g. res['mean']), one can also access them directly with the dot operator\n (e.g. 
res.mean).\n \"\"\"\n\n stats = Stats()\n data = _np.atleast_1d(data)\n if data.ndim == 1:\n data = data.reshape((1, -1))\n\n if data.ndim > 2:\n raise NotImplementedError(\"Statistics are implemented only for ndim<=2\")\n\n stats.mean = _mean(data)\n stats.variance = _var(data)\n\n ts = _total_size(data)\n\n bare_var = stats.variance / ts\n\n batch_var, n_batches = _batch_variance(data)\n\n b_s = 32\n l_block = max(1, data.shape[1] // b_s)\n\n block_var, n_blocks = _block_variance(data, l_block)\n\n tau_batch = (batch_var / bare_var - 1) * 0.5\n tau_block = (block_var / bare_var - 1) * 0.5\n\n block_good = tau_block < 6 * l_block and n_blocks >= b_s\n batch_good = tau_batch < 6 * data.shape[1] and n_batches >= b_s\n\n if batch_good:\n stats.error_of_mean = _np.sqrt(batch_var / ts)\n stats.tau_corr = max(0, tau_batch)\n elif block_good:\n stats.error_of_mean = _np.sqrt(block_var)\n stats.tau_corr = max(0, tau_block)\n else:\n stats.error_of_mean = _np.nan\n stats.tau_corr = _np.nan\n\n if n_batches > 1:\n N = data.shape[-1]\n stats.R = _np.sqrt((N - 1) / N + batch_var / stats.variance)\n\n return stats\n", "path": "netket/stats/mc_stats.py"}], "after_files": [{"content": "from numba import jit\nimport numpy as _np\nfrom . import mean as _mean\nfrom . import var as _var\nfrom . import total_size as _total_size\n\n\ndef _format_decimal(value, std):\n decimals = max(int(_np.ceil(-_np.log10(std))), 0)\n return (\n \"{0:.{1}f}\".format(value, decimals),\n \"{0:.{1}f}\".format(std, decimals + 1),\n )\n\n\nclass Stats:\n \"\"\"A dict-compatible class containing the result of the statistics function.\"\"\"\n\n _NaN = float(\"NaN\")\n\n def __init__(\n self,\n mean=_NaN,\n error_of_mean=_NaN,\n variance=_NaN,\n tau_corr=_NaN,\n R_hat=_NaN,\n ):\n self.mean = mean\n self.error_of_mean = error_of_mean\n self.variance = variance\n self.tau_corr = tau_corr\n self.R_hat = R_hat\n\n def to_json(self):\n jsd = {}\n jsd[\"Mean\"] = self.mean.real\n jsd[\"Variance\"] = self.variance\n jsd[\"Sigma\"] = self.error_of_mean\n jsd[\"R_hat\"] = self.R_hat\n jsd[\"TauCorr\"] = self.tau_corr\n return jsd\n\n def __repr__(self):\n mean, err = _format_decimal(self.mean, self.error_of_mean)\n return \"{} \u00b1 {} [var={:.1e}, R_hat={:.4f}]\".format(\n mean, err, self.variance, self.R_hat\n )\n\n def __getitem__(self, name):\n if name is \"mean\" or \"Mean\":\n return self.mean\n elif name is \"variance\" or \"Variance\":\n return self.variance\n elif name is \"error_of_mean\" or \"Sigma\":\n return self.error_of_mean\n elif name is \"R_hat\":\n return self.R_hat\n elif name is \"tau_corr\" or \"TauCorr\":\n return self.tau_corr\n\n\n@jit(nopython=True)\ndef _get_blocks(data, l):\n n_blocks = int(_np.floor(data.shape[1] / float(l)))\n blocks = _np.empty(data.shape[0] * n_blocks, dtype=data.dtype)\n k = 0\n for i in range(data.shape[0]):\n for b in range(n_blocks):\n blocks[k] = data[i, b * l : (b + 1) * l].mean()\n k += 1\n return blocks\n\n\ndef _block_variance(data, l):\n blocks = _get_blocks(data, l)\n ts = _total_size(blocks)\n if ts > 0:\n return _var(blocks) / float(ts), ts\n else:\n return _np.nan, 0\n\n\ndef _batch_variance(data):\n b_means = _np.mean(data, axis=1)\n ts = _total_size(b_means)\n return _var(b_means), ts\n\n\ndef statistics(data):\n r\"\"\"\n Returns statistics of a given array (or matrix, see below) containing a stream of data.\n This is particularly useful to analyze Markov Chain data, but it can be used\n also for other type of time series.\n\n Args:\n data (vector or matrix): The input data. 
It can be real or complex valued.\n * if a vector, it is assumed that this is a time\n series of data (not necessarily independent).\n * if a matrix, it is assumed that that rows data[i]\n contain independent time series.\n\n Returns:\n Stats: A dictionary-compatible class containing the average (mean),\n the variance (variance),\n the error of the mean (error_of_mean), and an estimate of the\n autocorrelation time (tau_corr). In addition to accessing the elements with the standard\n dict sintax (e.g. res['mean']), one can also access them directly with the dot operator\n (e.g. res.mean).\n \"\"\"\n\n stats = Stats()\n data = _np.atleast_1d(data)\n if data.ndim == 1:\n data = data.reshape((1, -1))\n\n if data.ndim > 2:\n raise NotImplementedError(\"Statistics are implemented only for ndim<=2\")\n\n stats.mean = _mean(data)\n stats.variance = _var(data)\n\n ts = _total_size(data)\n\n bare_var = stats.variance / ts\n\n batch_var, n_batches = _batch_variance(data)\n\n b_s = 32\n l_block = max(1, data.shape[1] // b_s)\n\n block_var, n_blocks = _block_variance(data, l_block)\n\n tau_batch = (batch_var / bare_var - 1) * 0.5\n tau_block = (block_var / bare_var - 1) * 0.5\n\n block_good = tau_block < 6 * l_block and n_blocks >= b_s\n batch_good = tau_batch < 6 * data.shape[1] and n_batches >= b_s\n\n if batch_good:\n stats.error_of_mean = _np.sqrt(batch_var / ts)\n stats.tau_corr = max(0, tau_batch)\n elif block_good:\n stats.error_of_mean = _np.sqrt(block_var)\n stats.tau_corr = max(0, tau_block)\n else:\n stats.error_of_mean = _np.nan\n stats.tau_corr = _np.nan\n\n if n_batches > 1:\n N = data.shape[-1]\n\n # V_loc = _np.var(data, axis=-1, ddof=0)\n # W_loc = _np.mean(V_loc)\n # W = _mean(W_loc)\n # # This approximation seems to hold well enough for larger n_samples\n W = stats.variance\n\n stats.R_hat = _np.sqrt((N - 1) / N + batch_var / W)\n\n return stats\n", "path": "netket/stats/mc_stats.py"}]}
| 1,966 | 620 |
gh_patches_debug_2052 | rasdani/github-patches | git_diff | ansible__molecule-649 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ansible-lint called from Molecule fails when no Ansible-installed-with-pip is present
# Issue Type
- Bug report
# Molecule and Ansible details
```
# ansible --version
ansible 2.2.1.0 (stable-2.2 acad2ba246) last updated 2016/12/11 20:27:02 (GMT +900)
lib/ansible/modules/core: (detached HEAD 8139278530) last updated 2016/12/11 20:30:10 (GMT +900)
lib/ansible/modules/extras: (detached HEAD f5f1fc934a) last updated 2016/12/11 20:30:10 (GMT +900)
config file =
configured module search path = Default w/o overrides
# molecule --version
molecule, version 1.16.1
```
- Molecule installation method: pip
- Ansible installation method: source
# Desired Behaviour
```
# molecule verify
--> Executing ansible-lint...
[ANSIBLE0002] Trailing whitespace
playbook.yml:7
- ansible-unix-python-environment
```
# Actual Behaviour (Bug report only)
```
# pip uninstall ansible
((( cut )))
Successfully uninstalled ansible-2.2.0.0
# . /usr/local/src/ansible/hacking/env-setup
((( cut )))
PYTHONPATH=/usr/local/src/ansible/lib:
((( cut )))
# ansible --version
ansible 2.2.1.0 (stable-2.2 acad2ba246) last updated 2016/12/11 20:27:02 (GMT +900)
lib/ansible/modules/core: (detached HEAD 8139278530) last updated 2016/12/11 20:30:10 (GMT +900)
lib/ansible/modules/extras: (detached HEAD f5f1fc934a) last updated 2016/12/11 20:30:10 (GMT +900)
config file =
configured module search path = Default w/o overrides
# molecule --debug verify
--> Executing ansible-lint...
DEBUG: COMMAND
/usr/local/bin/ansible-lint playbook.yml --exclude .git --exclude .vagrant --exclude .molecule
Traceback (most recent call last):
File "/usr/local/bin/ansible-lint", line 30, in <module>
import ansiblelint
File "/usr/local/lib/python2.7/site-packages/ansiblelint/__init__.py", line 26, in <module>
import ansiblelint.utils
File "/usr/local/lib/python2.7/site-packages/ansiblelint/utils.py", line 25, in <module>
import ansible.constants as C
ImportError: No module named ansible.constants
# /usr/local/bin/ansible-lint playbook.yml --exclude .git --exclude .vagrant --exclude .molecule
[ANSIBLE0002] Trailing whitespace
playbook.yml:7
- ansible-unix-python-environment
```
# Further tests
With Ansible 2.2.0 installed with `pip` (regardless if the one from source configured or not; configured in the example below):
```
# pip install ansible
((( cut )))
Successfully installed ansible-2.2.0.0
# . /usr/local/src/ansible/hacking/env-setup
((( cut )))
# ansible --version
ansible 2.2.1.0 (stable-2.2 acad2ba246) last updated 2016/12/11 20:27:02 (GMT +900)
lib/ansible/modules/core: (detached HEAD 8139278530) last updated 2016/12/11 20:30:10 (GMT +900)
lib/ansible/modules/extras: (detached HEAD f5f1fc934a) last updated 2016/12/11 20:30:10 (GMT +900)
config file =
configured module search path = Default w/o overrides
# molecule verify
--> Executing ansible-lint...
[ANSIBLE0002] Trailing whitespace
playbook.yml:7
- ansible-unix-python-environment
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `molecule/verifier/ansible_lint.py`
Content:
```
1 # Copyright (c) 2015-2016 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import os
22
23 import sh
24
25 from molecule import util
26 from molecule.verifier import base
27
28
29 class AnsibleLint(base.Base):
30 """
31 This is likely to be the source of issues. The class was implemented to
32 bring standardization to roles managed by molecule. How we further refine
33 this class, and its usage is up for discussion.
34 """
35
36 def __init__(self, molecule):
37 super(AnsibleLint, self).__init__(molecule)
38 self._playbook = molecule.config.config['ansible']['playbook']
39 self._ignore_paths = molecule.config.config['molecule']['ignore_paths']
40 self._debug = molecule.args.get('debug')
41
42 def execute(self):
43 """
44 Executes ansible-lint against the configured playbook and returns
45 None.
46
47 :return: None
48 """
49 env = {
50 'ANSIBLE_CONFIG':
51 self._molecule.config.config['ansible']['config_file'],
52 'HOME': os.environ.get('HOME')
53 }
54
55 if 'ansible_lint' not in self._molecule.disabled:
56 msg = 'Executing ansible-lint...'
57 util.print_info(msg)
58 args = [self._playbook]
59 [args.extend(["--exclude", path]) for path in self._ignore_paths]
60 cmd = sh.ansible_lint.bake(
61 *args,
62 _env=env,
63 _out=util.callback_info,
64 _err=util.callback_error)
65 util.run_command(cmd, debug=self._debug)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/molecule/verifier/ansible_lint.py b/molecule/verifier/ansible_lint.py
--- a/molecule/verifier/ansible_lint.py
+++ b/molecule/verifier/ansible_lint.py
@@ -49,6 +49,7 @@
env = {
'ANSIBLE_CONFIG':
self._molecule.config.config['ansible']['config_file'],
+ 'PYTHONPATH': os.environ.get('PYTHONPATH'),
'HOME': os.environ.get('HOME')
}
|
{"golden_diff": "diff --git a/molecule/verifier/ansible_lint.py b/molecule/verifier/ansible_lint.py\n--- a/molecule/verifier/ansible_lint.py\n+++ b/molecule/verifier/ansible_lint.py\n@@ -49,6 +49,7 @@\n env = {\n 'ANSIBLE_CONFIG':\n self._molecule.config.config['ansible']['config_file'],\n+ 'PYTHONPATH': os.environ.get('PYTHONPATH'),\n 'HOME': os.environ.get('HOME')\n }\n", "issue": "ansible-lint called from Molecule fails when no Ansible-installed-with-pip is present\n# Issue Type\r\n\r\n- Bug report\r\n\r\n# Molecule and Ansible details\r\n\r\n```\r\n# ansible --version\r\nansible 2.2.1.0 (stable-2.2 acad2ba246) last updated 2016/12/11 20:27:02 (GMT +900)\r\n lib/ansible/modules/core: (detached HEAD 8139278530) last updated 2016/12/11 20:30:10 (GMT +900)\r\n lib/ansible/modules/extras: (detached HEAD f5f1fc934a) last updated 2016/12/11 20:30:10 (GMT +900)\r\n config file =\r\n configured module search path = Default w/o overrides\r\n# molecule --version\r\nmolecule, version 1.16.1\r\n```\r\n\r\n- Molecule installation method: pip\r\n- Ansible installation method: source\r\n\r\n# Desired Behaviour\r\n\r\n```\r\n# molecule verify\r\n--> Executing ansible-lint...\r\n[ANSIBLE0002] Trailing whitespace\r\nplaybook.yml:7\r\n - ansible-unix-python-environment\r\n```\r\n\r\n# Actual Behaviour (Bug report only)\r\n\r\n```\r\n# pip uninstall ansible\r\n((( cut )))\r\n Successfully uninstalled ansible-2.2.0.0\r\n\r\n# . /usr/local/src/ansible/hacking/env-setup\r\n((( cut )))\r\nPYTHONPATH=/usr/local/src/ansible/lib:\r\n((( cut )))\r\n\r\n# ansible --version\r\nansible 2.2.1.0 (stable-2.2 acad2ba246) last updated 2016/12/11 20:27:02 (GMT +900)\r\n lib/ansible/modules/core: (detached HEAD 8139278530) last updated 2016/12/11 20:30:10 (GMT +900)\r\n lib/ansible/modules/extras: (detached HEAD f5f1fc934a) last updated 2016/12/11 20:30:10 (GMT +900)\r\n config file =\r\n configured module search path = Default w/o overrides\r\n\r\n# molecule --debug verify\r\n--> Executing ansible-lint...\r\nDEBUG: COMMAND\r\n/usr/local/bin/ansible-lint playbook.yml --exclude .git --exclude .vagrant --exclude .molecule\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/ansible-lint\", line 30, in <module>\r\n import ansiblelint\r\n File \"/usr/local/lib/python2.7/site-packages/ansiblelint/__init__.py\", line 26, in <module>\r\n import ansiblelint.utils\r\n File \"/usr/local/lib/python2.7/site-packages/ansiblelint/utils.py\", line 25, in <module>\r\n import ansible.constants as C\r\nImportError: No module named ansible.constants\r\n\r\n# /usr/local/bin/ansible-lint playbook.yml --exclude .git --exclude .vagrant --exclude .molecule\r\n[ANSIBLE0002] Trailing whitespace\r\nplaybook.yml:7\r\n - ansible-unix-python-environment\r\n```\r\n\r\n# Further tests\r\n\r\nWith Ansible 2.2.0 installed with `pip` (regardless if the one from source configured or not; configured in the example below):\r\n\r\n```\r\n# pip install ansible\r\n((( cut )))\r\nSuccessfully installed ansible-2.2.0.0\r\n\r\n# . 
/usr/local/src/ansible/hacking/env-setup\r\n((( cut )))\r\n\r\n# ansible --version\r\nansible 2.2.1.0 (stable-2.2 acad2ba246) last updated 2016/12/11 20:27:02 (GMT +900)\r\n lib/ansible/modules/core: (detached HEAD 8139278530) last updated 2016/12/11 20:30:10 (GMT +900)\r\n lib/ansible/modules/extras: (detached HEAD f5f1fc934a) last updated 2016/12/11 20:30:10 (GMT +900)\r\n config file =\r\n configured module search path = Default w/o overrides\r\n\r\n# molecule verify\r\n--> Executing ansible-lint...\r\n[ANSIBLE0002] Trailing whitespace\r\nplaybook.yml:7\r\n - ansible-unix-python-environment\r\n```\n", "before_files": [{"content": "# Copyright (c) 2015-2016 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport os\n\nimport sh\n\nfrom molecule import util\nfrom molecule.verifier import base\n\n\nclass AnsibleLint(base.Base):\n \"\"\"\n This is likely to be the source of issues. The class was implemented to\n bring standardization to roles managed by molecule. 
How we further refine\n this class, and its usage is up for discussion.\n \"\"\"\n\n def __init__(self, molecule):\n super(AnsibleLint, self).__init__(molecule)\n self._playbook = molecule.config.config['ansible']['playbook']\n self._ignore_paths = molecule.config.config['molecule']['ignore_paths']\n self._debug = molecule.args.get('debug')\n\n def execute(self):\n \"\"\"\n Executes ansible-lint against the configured playbook and returns\n None.\n\n :return: None\n \"\"\"\n env = {\n 'ANSIBLE_CONFIG':\n self._molecule.config.config['ansible']['config_file'],\n 'HOME': os.environ.get('HOME')\n }\n\n if 'ansible_lint' not in self._molecule.disabled:\n msg = 'Executing ansible-lint...'\n util.print_info(msg)\n args = [self._playbook]\n [args.extend([\"--exclude\", path]) for path in self._ignore_paths]\n cmd = sh.ansible_lint.bake(\n *args,\n _env=env,\n _out=util.callback_info,\n _err=util.callback_error)\n util.run_command(cmd, debug=self._debug)\n", "path": "molecule/verifier/ansible_lint.py"}], "after_files": [{"content": "# Copyright (c) 2015-2016 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport os\n\nimport sh\n\nfrom molecule import util\nfrom molecule.verifier import base\n\n\nclass AnsibleLint(base.Base):\n \"\"\"\n This is likely to be the source of issues. The class was implemented to\n bring standardization to roles managed by molecule. How we further refine\n this class, and its usage is up for discussion.\n \"\"\"\n\n def __init__(self, molecule):\n super(AnsibleLint, self).__init__(molecule)\n self._playbook = molecule.config.config['ansible']['playbook']\n self._ignore_paths = molecule.config.config['molecule']['ignore_paths']\n self._debug = molecule.args.get('debug')\n\n def execute(self):\n \"\"\"\n Executes ansible-lint against the configured playbook and returns\n None.\n\n :return: None\n \"\"\"\n env = {\n 'ANSIBLE_CONFIG':\n self._molecule.config.config['ansible']['config_file'],\n 'PYTHONPATH': os.environ.get('PYTHONPATH'),\n 'HOME': os.environ.get('HOME')\n }\n\n if 'ansible_lint' not in self._molecule.disabled:\n msg = 'Executing ansible-lint...'\n util.print_info(msg)\n args = [self._playbook]\n [args.extend([\"--exclude\", path]) for path in self._ignore_paths]\n cmd = sh.ansible_lint.bake(\n *args,\n _env=env,\n _out=util.callback_info,\n _err=util.callback_error)\n util.run_command(cmd, debug=self._debug)\n", "path": "molecule/verifier/ansible_lint.py"}]}
| 1,991 | 112 |
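The fix above works because Molecule launches `ansible-lint` through `sh` with an explicitly constructed environment, so any variable not copied into that dict (such as a `PYTHONPATH` pointing at a source checkout of Ansible) never reaches the child process. Below is a minimal sketch of the same pattern outside Molecule, using only the standard library; the command invocation and function name are illustrative, not Molecule's actual code.

```python
import os
import subprocess


def run_ansible_lint(playbook, ansible_config):
    # Build a minimal environment rather than inheriting everything,
    # but forward the variables the child process genuinely needs.
    env = {
        'ANSIBLE_CONFIG': ansible_config,
        # Without this, an Ansible set up via hacking/env-setup (reachable
        # only through PYTHONPATH) is invisible to ansible-lint and
        # `import ansible.constants` fails.
        'PYTHONPATH': os.environ.get('PYTHONPATH', ''),
        'HOME': os.environ.get('HOME', ''),
    }
    return subprocess.run(['ansible-lint', playbook], env=env, check=False)
```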
gh_patches_debug_40054
|
rasdani/github-patches
|
git_diff
|
opendatacube__datacube-core-435
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multiple issues with write_geotiff() method
### Expected behaviour
I should be able to write a small GeoTiff file using the `datacube.helpers.write_geotiff()` method.
I should also not have to specify a `time_index`.
### Actual behaviour
For a 50x50 raster it fails with:
```python
ValueError: blockxsize exceeds raster width
```
And if `time_index` is not specified it fails with:
```pytb
Exception ignored in: 'pandas._libs.lib.is_bool_array'
TypeError: len() of unsized object
Traceback (most recent call last):
File "testlanwei.py", line 30, in <module>
write_geotiff(filename='H8_test_1.tif', dataset=data, profile_override={'blockxsize':16, 'blockysize':16})
File "/home/547/dra547/github/datacube-core/datacube/helpers.py", line 48, in write_geotiff
dest.write(data.isel(time=time_index).data, bandnum)
File "/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/dataarray.py", line 679, in isel
ds = self._to_temp_dataset().isel(drop=drop, **indexers)
File "/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/dataset.py", line 1143, in isel
new_var = var.isel(**var_indexers)
File "/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/variable.py", line 570, in isel
return self[tuple(key)]
File "/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/variable.py", line 1225, in __getitem__
values = self._indexable_data[key]
File "/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/indexing.py", line 545, in __getitem__
result = self.array[key]
File "/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 274, in __getitem__
if com.is_bool_indexer(key):
File "/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/pandas/core/common.py", line 189, in is_bool_indexer
raise ValueError('cannot index with vector containing '
ValueError: cannot index with vector containing NA / NaN values
```
The function should be changed to take more useful parameters, and should have unit test coverage.
### Steps to reproduce the behaviour
Run the following script [broken_geotiff_write.txt](https://github.com/opendatacube/datacube-core/files/1204141/broken_geotiff_write.txt).
### Environment information
* Which ``datacube --version`` are you using?
`1.5.1`
* What datacube deployment/environment are you running against?
`agdc-py3-prod/1.5.1`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datacube/helpers.py`
Content:
```
1 """
2 Useful functions for Datacube users
3
4 Not used internally, those should go in `utils.py`
5 """
6 from __future__ import absolute_import
7 import rasterio
8 import numpy as np
9
10 DEFAULT_PROFILE = {
11 'blockxsize': 256,
12 'blockysize': 256,
13 'compress': 'lzw',
14 'driver': 'GTiff',
15 'interleave': 'band',
16 'nodata': 0.0,
17 'photometric': 'RGBA',
18 'tiled': True}
19
20
21 def write_geotiff(filename, dataset, time_index=None, profile_override=None):
22 """
23 Write an xarray dataset to a geotiff
24
25 :param filename: Output filename
26 :attr dataset: xarray dataset containing multiple bands to write to file
27 :attr time_index: time index to write to file
28 :attr profile_override: option dict, overrides rasterio file creation options.
29 """
30 profile_override = profile_override or {}
31
32 try:
33 dtypes = {val.dtype for val in dataset.data_vars.values()}
34 assert len(dtypes) == 1 # Check for multiple dtypes
35 except AttributeError:
36 dtypes = [dataset.dtype]
37
38 profile = DEFAULT_PROFILE.copy()
39 profile.update({
40 'width': dataset.dims[dataset.crs.dimensions[1]],
41 'height': dataset.dims[dataset.crs.dimensions[0]],
42 'affine': dataset.affine,
43 'crs': dataset.crs.crs_str,
44 'count': len(dataset.data_vars),
45 'dtype': str(dtypes.pop())
46 })
47 profile.update(profile_override)
48
49 with rasterio.open(str(filename), 'w', **profile) as dest:
50 if hasattr(dataset, 'data_vars'):
51 for bandnum, data in enumerate(dataset.data_vars.values(), start=1):
52 dest.write(data.isel(time=time_index).data, bandnum)
53 else: # Assume that we have a DataArray
54 dest.write(dataset.isel(time=time_index).data, 0)
55
56
57 def ga_pq_fuser(dest, src):
58 """
59 Fuse two Geoscience Australia Pixel Quality ndarrays
60
61 To be used as a `fuse_func` when loaded `grouped` data, for example when grouping
62 by solar day to avoid duplicate data from scene overlaps.
63 """
64 valid_bit = 8
65 valid_val = (1 << valid_bit)
66
67 no_data_dest_mask = ~(dest & valid_val).astype(bool)
68 np.copyto(dest, src, where=no_data_dest_mask)
69
70 both_data_mask = (valid_val & dest & src).astype(bool)
71 np.copyto(dest, src & dest, where=both_data_mask)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/datacube/helpers.py b/datacube/helpers.py
--- a/datacube/helpers.py
+++ b/datacube/helpers.py
@@ -4,8 +4,9 @@
Not used internally, those should go in `utils.py`
"""
from __future__ import absolute_import
-import rasterio
+
import numpy as np
+import rasterio
DEFAULT_PROFILE = {
'blockxsize': 256,
@@ -18,17 +19,22 @@
'tiled': True}
-def write_geotiff(filename, dataset, time_index=None, profile_override=None):
+def write_geotiff(filename, dataset, profile_override=None, time_index=None):
"""
- Write an xarray dataset to a geotiff
+ Write an ODC style xarray.Dataset to a GeoTIFF file.
:param filename: Output filename
- :attr dataset: xarray dataset containing multiple bands to write to file
- :attr time_index: time index to write to file
- :attr profile_override: option dict, overrides rasterio file creation options.
+ :param dataset: xarray dataset containing one or more bands to write to a file.
+ :param profile_override: option dict, overrides rasterio file creation options.
+ :param time_index: DEPRECATED
"""
profile_override = profile_override or {}
+ if time_index is not None:
+ raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.
+ The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing
+ in your dataset. It was removed because it made the function much less useful for more advanced cases.''')
+
try:
dtypes = {val.dtype for val in dataset.data_vars.values()}
assert len(dtypes) == 1 # Check for multiple dtypes
@@ -39,19 +45,35 @@
profile.update({
'width': dataset.dims[dataset.crs.dimensions[1]],
'height': dataset.dims[dataset.crs.dimensions[0]],
- 'affine': dataset.affine,
+ 'transform': dataset.affine,
'crs': dataset.crs.crs_str,
'count': len(dataset.data_vars),
'dtype': str(dtypes.pop())
})
profile.update(profile_override)
+ _calculate_blocksize(profile)
+
with rasterio.open(str(filename), 'w', **profile) as dest:
if hasattr(dataset, 'data_vars'):
for bandnum, data in enumerate(dataset.data_vars.values(), start=1):
- dest.write(data.isel(time=time_index).data, bandnum)
- else: # Assume that we have a DataArray
- dest.write(dataset.isel(time=time_index).data, 0)
+ dest.write(data.data, bandnum)
+
+
+def _calculate_blocksize(profile):
+ # Block size must be smaller than the image size, and for geotiffs must be divisible by 16
+ # Fix for small images.
+ if profile['blockxsize'] > profile['width']:
+ if profile['width'] % 16 == 0 or profile['width'] < 16:
+ profile['blockxsize'] = profile['width']
+ else:
+ profile['blockxsize'] = 16
+
+ if profile['blockysize'] > profile['height']:
+ if profile['height'] % 16 == 0 or profile['height'] < 16:
+ profile['blockysize'] = profile['height']
+ else:
+ profile['blockysize'] = 16
def ga_pq_fuser(dest, src):
|
{"golden_diff": "diff --git a/datacube/helpers.py b/datacube/helpers.py\n--- a/datacube/helpers.py\n+++ b/datacube/helpers.py\n@@ -4,8 +4,9 @@\n Not used internally, those should go in `utils.py`\n \"\"\"\n from __future__ import absolute_import\n-import rasterio\n+\n import numpy as np\n+import rasterio\n \n DEFAULT_PROFILE = {\n 'blockxsize': 256,\n@@ -18,17 +19,22 @@\n 'tiled': True}\n \n \n-def write_geotiff(filename, dataset, time_index=None, profile_override=None):\n+def write_geotiff(filename, dataset, profile_override=None, time_index=None):\n \"\"\"\n- Write an xarray dataset to a geotiff\n+ Write an ODC style xarray.Dataset to a GeoTIFF file.\n \n :param filename: Output filename\n- :attr dataset: xarray dataset containing multiple bands to write to file\n- :attr time_index: time index to write to file\n- :attr profile_override: option dict, overrides rasterio file creation options.\n+ :param dataset: xarray dataset containing one or more bands to write to a file.\n+ :param profile_override: option dict, overrides rasterio file creation options.\n+ :param time_index: DEPRECATED\n \"\"\"\n profile_override = profile_override or {}\n \n+ if time_index is not None:\n+ raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.\n+ The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing\n+ in your dataset. It was removed because it made the function much less useful for more advanced cases.''')\n+\n try:\n dtypes = {val.dtype for val in dataset.data_vars.values()}\n assert len(dtypes) == 1 # Check for multiple dtypes\n@@ -39,19 +45,35 @@\n profile.update({\n 'width': dataset.dims[dataset.crs.dimensions[1]],\n 'height': dataset.dims[dataset.crs.dimensions[0]],\n- 'affine': dataset.affine,\n+ 'transform': dataset.affine,\n 'crs': dataset.crs.crs_str,\n 'count': len(dataset.data_vars),\n 'dtype': str(dtypes.pop())\n })\n profile.update(profile_override)\n \n+ _calculate_blocksize(profile)\n+\n with rasterio.open(str(filename), 'w', **profile) as dest:\n if hasattr(dataset, 'data_vars'):\n for bandnum, data in enumerate(dataset.data_vars.values(), start=1):\n- dest.write(data.isel(time=time_index).data, bandnum)\n- else: # Assume that we have a DataArray\n- dest.write(dataset.isel(time=time_index).data, 0)\n+ dest.write(data.data, bandnum)\n+\n+\n+def _calculate_blocksize(profile):\n+ # Block size must be smaller than the image size, and for geotiffs must be divisible by 16\n+ # Fix for small images.\n+ if profile['blockxsize'] > profile['width']:\n+ if profile['width'] % 16 == 0 or profile['width'] < 16:\n+ profile['blockxsize'] = profile['width']\n+ else:\n+ profile['blockxsize'] = 16\n+\n+ if profile['blockysize'] > profile['height']:\n+ if profile['height'] % 16 == 0 or profile['height'] < 16:\n+ profile['blockysize'] = profile['height']\n+ else:\n+ profile['blockysize'] = 16\n \n \n def ga_pq_fuser(dest, src):\n", "issue": "Multiple issues with write_geotiff() method\n### Expected behaviour\r\nI should be able to write a small GeoTiff file using the `datacube.helpers.write_geotiff()` method.\r\n\r\nI should also not have to specify a `time_index`.\r\n\r\n\r\n### Actual behaviour\r\n\r\n For a 50x50 raster it fails with:\r\n```python\r\n ValueError: blockxsize exceeds raster width\r\n```\r\n\r\nAnd if `time_index` is not specified it fails with:\r\n```pytb\r\nException ignored in: 'pandas._libs.lib.is_bool_array'\r\nTypeError: len() of unsized object\r\nTraceback (most recent call last):\r\n File \"testlanwei.py\", 
line 30, in <module>\r\n write_geotiff(filename='H8_test_1.tif', dataset=data, profile_override={'blockxsize':16, 'blockysize':16})\r\n File \"/home/547/dra547/github/datacube-core/datacube/helpers.py\", line 48, in write_geotiff\r\n dest.write(data.isel(time=time_index).data, bandnum)\r\n File \"/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/dataarray.py\", line 679, in isel\r\n ds = self._to_temp_dataset().isel(drop=drop, **indexers)\r\n File \"/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/dataset.py\", line 1143, in isel\r\n new_var = var.isel(**var_indexers)\r\n File \"/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/variable.py\", line 570, in isel\r\n return self[tuple(key)]\r\n File \"/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/variable.py\", line 1225, in __getitem__\r\n values = self._indexable_data[key]\r\n File \"/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/xarray/core/indexing.py\", line 545, in __getitem__\r\n result = self.array[key]\r\n File \"/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py\", line 274, in __getitem__\r\n if com.is_bool_indexer(key):\r\n File \"/g/data/v10/public/modules/agdc-py3-env/20170713/envs/agdc/lib/python3.6/site-packages/pandas/core/common.py\", line 189, in is_bool_indexer\r\n raise ValueError('cannot index with vector containing '\r\nValueError: cannot index with vector containing NA / NaN values\r\n```\r\n\r\n\r\nThe function should be changed to take more useful parameters, and should have unit test coverage. 
\r\n\r\n\r\n### Steps to reproduce the behaviour\r\n\r\nRun the following script [broken_geotiff_write.txt](https://github.com/opendatacube/datacube-core/files/1204141/broken_geotiff_write.txt).\r\n\r\n\r\n\r\n### Environment information\r\n\r\n* Which ``datacube --version`` are you using?\r\n`1.5.1`\r\n* What datacube deployment/enviornment are you running against?\r\n`agdc-py3-prod/1.5.1`\n", "before_files": [{"content": "\"\"\"\nUseful functions for Datacube users\n\nNot used internally, those should go in `utils.py`\n\"\"\"\nfrom __future__ import absolute_import\nimport rasterio\nimport numpy as np\n\nDEFAULT_PROFILE = {\n 'blockxsize': 256,\n 'blockysize': 256,\n 'compress': 'lzw',\n 'driver': 'GTiff',\n 'interleave': 'band',\n 'nodata': 0.0,\n 'photometric': 'RGBA',\n 'tiled': True}\n\n\ndef write_geotiff(filename, dataset, time_index=None, profile_override=None):\n \"\"\"\n Write an xarray dataset to a geotiff\n\n :param filename: Output filename\n :attr dataset: xarray dataset containing multiple bands to write to file\n :attr time_index: time index to write to file\n :attr profile_override: option dict, overrides rasterio file creation options.\n \"\"\"\n profile_override = profile_override or {}\n\n try:\n dtypes = {val.dtype for val in dataset.data_vars.values()}\n assert len(dtypes) == 1 # Check for multiple dtypes\n except AttributeError:\n dtypes = [dataset.dtype]\n\n profile = DEFAULT_PROFILE.copy()\n profile.update({\n 'width': dataset.dims[dataset.crs.dimensions[1]],\n 'height': dataset.dims[dataset.crs.dimensions[0]],\n 'affine': dataset.affine,\n 'crs': dataset.crs.crs_str,\n 'count': len(dataset.data_vars),\n 'dtype': str(dtypes.pop())\n })\n profile.update(profile_override)\n\n with rasterio.open(str(filename), 'w', **profile) as dest:\n if hasattr(dataset, 'data_vars'):\n for bandnum, data in enumerate(dataset.data_vars.values(), start=1):\n dest.write(data.isel(time=time_index).data, bandnum)\n else: # Assume that we have a DataArray\n dest.write(dataset.isel(time=time_index).data, 0)\n\n\ndef ga_pq_fuser(dest, src):\n \"\"\"\n Fuse two Geoscience Australia Pixel Quality ndarrays\n\n To be used as a `fuse_func` when loaded `grouped` data, for example when grouping\n by solar day to avoid duplicate data from scene overlaps.\n \"\"\"\n valid_bit = 8\n valid_val = (1 << valid_bit)\n\n no_data_dest_mask = ~(dest & valid_val).astype(bool)\n np.copyto(dest, src, where=no_data_dest_mask)\n\n both_data_mask = (valid_val & dest & src).astype(bool)\n np.copyto(dest, src & dest, where=both_data_mask)\n", "path": "datacube/helpers.py"}], "after_files": [{"content": "\"\"\"\nUseful functions for Datacube users\n\nNot used internally, those should go in `utils.py`\n\"\"\"\nfrom __future__ import absolute_import\n\nimport numpy as np\nimport rasterio\n\nDEFAULT_PROFILE = {\n 'blockxsize': 256,\n 'blockysize': 256,\n 'compress': 'lzw',\n 'driver': 'GTiff',\n 'interleave': 'band',\n 'nodata': 0.0,\n 'photometric': 'RGBA',\n 'tiled': True}\n\n\ndef write_geotiff(filename, dataset, profile_override=None, time_index=None):\n \"\"\"\n Write an ODC style xarray.Dataset to a GeoTIFF file.\n\n :param filename: Output filename\n :param dataset: xarray dataset containing one or more bands to write to a file.\n :param profile_override: option dict, overrides rasterio file creation options.\n :param time_index: DEPRECATED\n \"\"\"\n profile_override = profile_override or {}\n\n if time_index is not None:\n raise ValueError('''The write_geotiff function no longer supports passing in 
`time_index`.\n The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing\n in your dataset. It was removed because it made the function much less useful for more advanced cases.''')\n\n try:\n dtypes = {val.dtype for val in dataset.data_vars.values()}\n assert len(dtypes) == 1 # Check for multiple dtypes\n except AttributeError:\n dtypes = [dataset.dtype]\n\n profile = DEFAULT_PROFILE.copy()\n profile.update({\n 'width': dataset.dims[dataset.crs.dimensions[1]],\n 'height': dataset.dims[dataset.crs.dimensions[0]],\n 'transform': dataset.affine,\n 'crs': dataset.crs.crs_str,\n 'count': len(dataset.data_vars),\n 'dtype': str(dtypes.pop())\n })\n profile.update(profile_override)\n\n _calculate_blocksize(profile)\n\n with rasterio.open(str(filename), 'w', **profile) as dest:\n if hasattr(dataset, 'data_vars'):\n for bandnum, data in enumerate(dataset.data_vars.values(), start=1):\n dest.write(data.data, bandnum)\n\n\ndef _calculate_blocksize(profile):\n # Block size must be smaller than the image size, and for geotiffs must be divisible by 16\n # Fix for small images.\n if profile['blockxsize'] > profile['width']:\n if profile['width'] % 16 == 0 or profile['width'] < 16:\n profile['blockxsize'] = profile['width']\n else:\n profile['blockxsize'] = 16\n\n if profile['blockysize'] > profile['height']:\n if profile['height'] % 16 == 0 or profile['height'] < 16:\n profile['blockysize'] = profile['height']\n else:\n profile['blockysize'] = 16\n\n\ndef ga_pq_fuser(dest, src):\n \"\"\"\n Fuse two Geoscience Australia Pixel Quality ndarrays\n\n To be used as a `fuse_func` when loaded `grouped` data, for example when grouping\n by solar day to avoid duplicate data from scene overlaps.\n \"\"\"\n valid_bit = 8\n valid_val = (1 << valid_bit)\n\n no_data_dest_mask = ~(dest & valid_val).astype(bool)\n np.copyto(dest, src, where=no_data_dest_mask)\n\n both_data_mask = (valid_val & dest & src).astype(bool)\n np.copyto(dest, src & dest, where=both_data_mask)\n", "path": "datacube/helpers.py"}]}
| 1,790 | 809 |
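The reworked helper above removes the `time_index` argument and clamps GeoTIFF block sizes so that small rasters no longer trip rasterio's "blockxsize exceeds raster width" check. A rough sketch of the new calling convention follows, assuming an ODC-style `xarray.Dataset` named `data` with a `time` dimension; the variable and file names are illustrative, not taken from the issue's script.

```python
from datacube.helpers import write_geotiff

# Select a single time slice first; write_geotiff no longer accepts time_index.
scene = data.isel(time=0)

# For a 50x50 raster the helper now shrinks the default 256x256 block size
# automatically, so profile_override is only needed for real overrides,
# e.g. a different compression codec.
write_geotiff(filename='scene.tif', dataset=scene,
              profile_override={'compress': 'deflate'})
```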
gh_patches_debug_40259
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-3809
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add query to fetch draft orders
In the Dashboard 2.0 we want to have separate links to view draft orders and confirmed ones. For that reason we need to add a separate resolver to return only draft orders. The existing `orders` resolver should be changed to return confirmed orders.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/order/resolvers.py`
Content:
```
1 import graphene
2 import graphene_django_optimizer as gql_optimizer
3
4 from ...order import OrderEvents, OrderStatus, models
5 from ...order.utils import sum_order_totals
6 from ..utils import filter_by_period, filter_by_query_param
7 from .enums import OrderStatusFilter
8 from .types import Order
9 from .utils import applicable_shipping_methods
10
11 ORDER_SEARCH_FIELDS = (
12 'id', 'discount_name', 'token', 'user_email', 'user__email')
13
14
15 def resolve_orders(info, created, status, query):
16 user = info.context.user
17 if user.has_perm('order.manage_orders'):
18 qs = models.Order.objects.all()
19 else:
20 qs = user.orders.confirmed()
21 qs = filter_by_query_param(qs, query, ORDER_SEARCH_FIELDS)
22
23 # filter orders by status
24 if status is not None:
25 if status == OrderStatusFilter.READY_TO_FULFILL:
26 qs = qs.ready_to_fulfill()
27 elif status == OrderStatusFilter.READY_TO_CAPTURE:
28 qs = qs.ready_to_capture()
29
30 # filter orders by creation date
31 if created is not None:
32 qs = filter_by_period(qs, created, 'created')
33
34 return gql_optimizer.query(qs, info)
35
36
37 def resolve_orders_total(info, period):
38 qs = models.Order.objects.confirmed().exclude(status=OrderStatus.CANCELED)
39 qs = filter_by_period(qs, period, 'created')
40 return sum_order_totals(qs)
41
42
43 def resolve_order(info, id):
44 """Return order only for user assigned to it or proper staff user."""
45 user = info.context.user
46 order = graphene.Node.get_node_from_global_id(info, id, Order)
47 if user.has_perm('order.manage_orders') or order.user == user:
48 return order
49 return None
50
51
52 def resolve_shipping_methods(obj, info, price):
53 return applicable_shipping_methods(obj, info, price)
54
55
56 def resolve_homepage_events(info):
57 # Filter only selected events to be displayed on homepage.
58 types = [
59 OrderEvents.PLACED.value, OrderEvents.PLACED_FROM_DRAFT.value,
60 OrderEvents.ORDER_FULLY_PAID.value]
61 return models.OrderEvent.objects.filter(type__in=types)
62
63
64 def resolve_order_by_token(info, token):
65 return models.Order.objects.filter(token=token).first()
66
```
Path: `saleor/graphql/order/schema.py`
Content:
```
1 from textwrap import dedent
2
3 import graphene
4 from graphql_jwt.decorators import login_required, permission_required
5
6 from ..core.enums import ReportingPeriod
7 from ..core.fields import PrefetchingConnectionField
8 from ..core.types import TaxedMoney
9 from ..descriptions import DESCRIPTIONS
10 from .bulk_mutations.draft_orders import DraftOrderBulkDelete
11 from .enums import OrderStatusFilter
12 from .mutations.draft_orders import (
13 DraftOrderComplete, DraftOrderCreate, DraftOrderDelete,
14 DraftOrderLineDelete, DraftOrderLinesCreate, DraftOrderLineUpdate,
15 DraftOrderUpdate)
16 from .mutations.fulfillments import (
17 FulfillmentCancel, FulfillmentCreate, FulfillmentUpdateTracking)
18 from .mutations.orders import (
19 OrderAddNote, OrderCancel, OrderCapture, OrderMarkAsPaid, OrderRefund,
20 OrderUpdate, OrderUpdateShipping, OrderVoid)
21 from .resolvers import (
22 resolve_homepage_events, resolve_order, resolve_order_by_token,
23 resolve_orders, resolve_orders_total)
24 from .types import Order, OrderEvent
25
26
27 class OrderQueries(graphene.ObjectType):
28 homepage_events = PrefetchingConnectionField(
29 OrderEvent, description=dedent('''List of activity events to display on
30 homepage (at the moment it only contains order-events).'''))
31 order = graphene.Field(
32 Order, description='Lookup an order by ID.',
33 id=graphene.Argument(graphene.ID, required=True))
34 orders = PrefetchingConnectionField(
35 Order,
36 query=graphene.String(description=DESCRIPTIONS['order']),
37 created=graphene.Argument(
38 ReportingPeriod,
39 description='Filter orders from a selected timespan.'),
40 status=graphene.Argument(
41 OrderStatusFilter, description='Filter order by status'),
42 description='List of the shop\'s orders.')
43 orders_total = graphene.Field(
44 TaxedMoney, description='Total sales.',
45 period=graphene.Argument(
46 ReportingPeriod,
47 description='Get total sales for selected span of time.'))
48 order_by_token = graphene.Field(
49 Order, description='Lookup an order by token.',
50 token=graphene.Argument(graphene.String, required=True))
51
52 @permission_required('order.manage_orders')
53 def resolve_homepage_events(self, info, **kwargs):
54 return resolve_homepage_events(info)
55
56 @login_required
57 def resolve_order(self, info, id):
58 return resolve_order(info, id)
59
60 @login_required
61 def resolve_orders(
62 self, info, created=None, status=None, query=None, **kwargs):
63 return resolve_orders(info, created, status, query)
64
65 @permission_required('order.manage_orders')
66 def resolve_orders_total(self, info, period, **kwargs):
67 return resolve_orders_total(info, period)
68
69 def resolve_order_by_token(self, info, token):
70 return resolve_order_by_token(info, token)
71
72
73 class OrderMutations(graphene.ObjectType):
74 draft_order_complete = DraftOrderComplete.Field()
75 draft_order_create = DraftOrderCreate.Field()
76 draft_order_delete = DraftOrderDelete.Field()
77 draft_order_bulk_delete = DraftOrderBulkDelete.Field()
78 draft_order_lines_create = DraftOrderLinesCreate.Field()
79 draft_order_line_delete = DraftOrderLineDelete.Field()
80 draft_order_line_update = DraftOrderLineUpdate.Field()
81 draft_order_update = DraftOrderUpdate.Field()
82
83 order_add_note = OrderAddNote.Field()
84 order_cancel = OrderCancel.Field()
85 order_capture = OrderCapture.Field()
86 order_fulfillment_cancel = FulfillmentCancel.Field()
87 order_fulfillment_create = FulfillmentCreate.Field()
88 order_fulfillment_update_tracking = FulfillmentUpdateTracking.Field()
89 order_mark_as_paid = OrderMarkAsPaid.Field()
90 order_refund = OrderRefund.Field()
91 order_update = OrderUpdate.Field()
92 order_update_shipping = OrderUpdateShipping.Field()
93 order_void = OrderVoid.Field()
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/graphql/order/resolvers.py b/saleor/graphql/order/resolvers.py
--- a/saleor/graphql/order/resolvers.py
+++ b/saleor/graphql/order/resolvers.py
@@ -12,12 +12,7 @@
'id', 'discount_name', 'token', 'user_email', 'user__email')
-def resolve_orders(info, created, status, query):
- user = info.context.user
- if user.has_perm('order.manage_orders'):
- qs = models.Order.objects.all()
- else:
- qs = user.orders.confirmed()
+def filter_orders(qs, info, created, status, query):
qs = filter_by_query_param(qs, query, ORDER_SEARCH_FIELDS)
# filter orders by status
@@ -34,6 +29,20 @@
return gql_optimizer.query(qs, info)
+def resolve_orders(info, created, status, query):
+ user = info.context.user
+ if user.has_perm('order.manage_orders'):
+ qs = models.Order.objects.confirmed()
+ else:
+ qs = user.orders.confirmed()
+ return filter_orders(qs, info, created, status, query)
+
+
+def resolve_draft_orders(info, created, query):
+ qs = models.Order.objects.drafts()
+ return filter_orders(qs, info, created, None, query)
+
+
def resolve_orders_total(info, period):
qs = models.Order.objects.confirmed().exclude(status=OrderStatus.CANCELED)
qs = filter_by_period(qs, period, 'created')
diff --git a/saleor/graphql/order/schema.py b/saleor/graphql/order/schema.py
--- a/saleor/graphql/order/schema.py
+++ b/saleor/graphql/order/schema.py
@@ -19,8 +19,8 @@
OrderAddNote, OrderCancel, OrderCapture, OrderMarkAsPaid, OrderRefund,
OrderUpdate, OrderUpdateShipping, OrderVoid)
from .resolvers import (
- resolve_homepage_events, resolve_order, resolve_order_by_token,
- resolve_orders, resolve_orders_total)
+ resolve_draft_orders, resolve_homepage_events, resolve_order,
+ resolve_order_by_token, resolve_orders, resolve_orders_total)
from .types import Order, OrderEvent
@@ -40,6 +40,13 @@
status=graphene.Argument(
OrderStatusFilter, description='Filter order by status'),
description='List of the shop\'s orders.')
+ draft_orders = PrefetchingConnectionField(
+ Order,
+ query=graphene.String(description=DESCRIPTIONS['order']),
+ created=graphene.Argument(
+ ReportingPeriod,
+ description='Filter draft orders from a selected timespan.'),
+ description='List of the shop\'s draft orders.')
orders_total = graphene.Field(
TaxedMoney, description='Total sales.',
period=graphene.Argument(
@@ -62,6 +69,10 @@
self, info, created=None, status=None, query=None, **kwargs):
return resolve_orders(info, created, status, query)
+ @permission_required('order.manage_orders')
+ def resolve_draft_orders(self, info, created=None, query=None, **kwargs):
+ return resolve_draft_orders(info, created, query)
+
@permission_required('order.manage_orders')
def resolve_orders_total(self, info, period, **kwargs):
return resolve_orders_total(info, period)
|
{"golden_diff": "diff --git a/saleor/graphql/order/resolvers.py b/saleor/graphql/order/resolvers.py\n--- a/saleor/graphql/order/resolvers.py\n+++ b/saleor/graphql/order/resolvers.py\n@@ -12,12 +12,7 @@\n 'id', 'discount_name', 'token', 'user_email', 'user__email')\n \n \n-def resolve_orders(info, created, status, query):\n- user = info.context.user\n- if user.has_perm('order.manage_orders'):\n- qs = models.Order.objects.all()\n- else:\n- qs = user.orders.confirmed()\n+def filter_orders(qs, info, created, status, query):\n qs = filter_by_query_param(qs, query, ORDER_SEARCH_FIELDS)\n \n # filter orders by status\n@@ -34,6 +29,20 @@\n return gql_optimizer.query(qs, info)\n \n \n+def resolve_orders(info, created, status, query):\n+ user = info.context.user\n+ if user.has_perm('order.manage_orders'):\n+ qs = models.Order.objects.confirmed()\n+ else:\n+ qs = user.orders.confirmed()\n+ return filter_orders(qs, info, created, status, query)\n+\n+\n+def resolve_draft_orders(info, created, query):\n+ qs = models.Order.objects.drafts()\n+ return filter_orders(qs, info, created, None, query)\n+\n+\n def resolve_orders_total(info, period):\n qs = models.Order.objects.confirmed().exclude(status=OrderStatus.CANCELED)\n qs = filter_by_period(qs, period, 'created')\ndiff --git a/saleor/graphql/order/schema.py b/saleor/graphql/order/schema.py\n--- a/saleor/graphql/order/schema.py\n+++ b/saleor/graphql/order/schema.py\n@@ -19,8 +19,8 @@\n OrderAddNote, OrderCancel, OrderCapture, OrderMarkAsPaid, OrderRefund,\n OrderUpdate, OrderUpdateShipping, OrderVoid)\n from .resolvers import (\n- resolve_homepage_events, resolve_order, resolve_order_by_token,\n- resolve_orders, resolve_orders_total)\n+ resolve_draft_orders, resolve_homepage_events, resolve_order,\n+ resolve_order_by_token, resolve_orders, resolve_orders_total)\n from .types import Order, OrderEvent\n \n \n@@ -40,6 +40,13 @@\n status=graphene.Argument(\n OrderStatusFilter, description='Filter order by status'),\n description='List of the shop\\'s orders.')\n+ draft_orders = PrefetchingConnectionField(\n+ Order,\n+ query=graphene.String(description=DESCRIPTIONS['order']),\n+ created=graphene.Argument(\n+ ReportingPeriod,\n+ description='Filter draft orders from a selected timespan.'),\n+ description='List of the shop\\'s draft orders.')\n orders_total = graphene.Field(\n TaxedMoney, description='Total sales.',\n period=graphene.Argument(\n@@ -62,6 +69,10 @@\n self, info, created=None, status=None, query=None, **kwargs):\n return resolve_orders(info, created, status, query)\n \n+ @permission_required('order.manage_orders')\n+ def resolve_draft_orders(self, info, created=None, query=None, **kwargs):\n+ return resolve_draft_orders(info, created, query)\n+\n @permission_required('order.manage_orders')\n def resolve_orders_total(self, info, period, **kwargs):\n return resolve_orders_total(info, period)\n", "issue": "Add query to fetch draft orders\nIn the Dashboard 2.0 we want to have separate links to view draft orders and confirmed ones. For that reason we need to add a separate resolver to return only draft orders. 
Exiting `orders` resolver should be changed to return confirmed orders.\n", "before_files": [{"content": "import graphene\nimport graphene_django_optimizer as gql_optimizer\n\nfrom ...order import OrderEvents, OrderStatus, models\nfrom ...order.utils import sum_order_totals\nfrom ..utils import filter_by_period, filter_by_query_param\nfrom .enums import OrderStatusFilter\nfrom .types import Order\nfrom .utils import applicable_shipping_methods\n\nORDER_SEARCH_FIELDS = (\n 'id', 'discount_name', 'token', 'user_email', 'user__email')\n\n\ndef resolve_orders(info, created, status, query):\n user = info.context.user\n if user.has_perm('order.manage_orders'):\n qs = models.Order.objects.all()\n else:\n qs = user.orders.confirmed()\n qs = filter_by_query_param(qs, query, ORDER_SEARCH_FIELDS)\n\n # filter orders by status\n if status is not None:\n if status == OrderStatusFilter.READY_TO_FULFILL:\n qs = qs.ready_to_fulfill()\n elif status == OrderStatusFilter.READY_TO_CAPTURE:\n qs = qs.ready_to_capture()\n\n # filter orders by creation date\n if created is not None:\n qs = filter_by_period(qs, created, 'created')\n\n return gql_optimizer.query(qs, info)\n\n\ndef resolve_orders_total(info, period):\n qs = models.Order.objects.confirmed().exclude(status=OrderStatus.CANCELED)\n qs = filter_by_period(qs, period, 'created')\n return sum_order_totals(qs)\n\n\ndef resolve_order(info, id):\n \"\"\"Return order only for user assigned to it or proper staff user.\"\"\"\n user = info.context.user\n order = graphene.Node.get_node_from_global_id(info, id, Order)\n if user.has_perm('order.manage_orders') or order.user == user:\n return order\n return None\n\n\ndef resolve_shipping_methods(obj, info, price):\n return applicable_shipping_methods(obj, info, price)\n\n\ndef resolve_homepage_events(info):\n # Filter only selected events to be displayed on homepage.\n types = [\n OrderEvents.PLACED.value, OrderEvents.PLACED_FROM_DRAFT.value,\n OrderEvents.ORDER_FULLY_PAID.value]\n return models.OrderEvent.objects.filter(type__in=types)\n\n\ndef resolve_order_by_token(info, token):\n return models.Order.objects.filter(token=token).first()\n", "path": "saleor/graphql/order/resolvers.py"}, {"content": "from textwrap import dedent\n\nimport graphene\nfrom graphql_jwt.decorators import login_required, permission_required\n\nfrom ..core.enums import ReportingPeriod\nfrom ..core.fields import PrefetchingConnectionField\nfrom ..core.types import TaxedMoney\nfrom ..descriptions import DESCRIPTIONS\nfrom .bulk_mutations.draft_orders import DraftOrderBulkDelete\nfrom .enums import OrderStatusFilter\nfrom .mutations.draft_orders import (\n DraftOrderComplete, DraftOrderCreate, DraftOrderDelete,\n DraftOrderLineDelete, DraftOrderLinesCreate, DraftOrderLineUpdate,\n DraftOrderUpdate)\nfrom .mutations.fulfillments import (\n FulfillmentCancel, FulfillmentCreate, FulfillmentUpdateTracking)\nfrom .mutations.orders import (\n OrderAddNote, OrderCancel, OrderCapture, OrderMarkAsPaid, OrderRefund,\n OrderUpdate, OrderUpdateShipping, OrderVoid)\nfrom .resolvers import (\n resolve_homepage_events, resolve_order, resolve_order_by_token,\n resolve_orders, resolve_orders_total)\nfrom .types import Order, OrderEvent\n\n\nclass OrderQueries(graphene.ObjectType):\n homepage_events = PrefetchingConnectionField(\n OrderEvent, description=dedent('''List of activity events to display on\n homepage (at the moment it only contains order-events).'''))\n order = graphene.Field(\n Order, description='Lookup an order by ID.',\n 
id=graphene.Argument(graphene.ID, required=True))\n orders = PrefetchingConnectionField(\n Order,\n query=graphene.String(description=DESCRIPTIONS['order']),\n created=graphene.Argument(\n ReportingPeriod,\n description='Filter orders from a selected timespan.'),\n status=graphene.Argument(\n OrderStatusFilter, description='Filter order by status'),\n description='List of the shop\\'s orders.')\n orders_total = graphene.Field(\n TaxedMoney, description='Total sales.',\n period=graphene.Argument(\n ReportingPeriod,\n description='Get total sales for selected span of time.'))\n order_by_token = graphene.Field(\n Order, description='Lookup an order by token.',\n token=graphene.Argument(graphene.String, required=True))\n\n @permission_required('order.manage_orders')\n def resolve_homepage_events(self, info, **kwargs):\n return resolve_homepage_events(info)\n\n @login_required\n def resolve_order(self, info, id):\n return resolve_order(info, id)\n\n @login_required\n def resolve_orders(\n self, info, created=None, status=None, query=None, **kwargs):\n return resolve_orders(info, created, status, query)\n\n @permission_required('order.manage_orders')\n def resolve_orders_total(self, info, period, **kwargs):\n return resolve_orders_total(info, period)\n\n def resolve_order_by_token(self, info, token):\n return resolve_order_by_token(info, token)\n\n\nclass OrderMutations(graphene.ObjectType):\n draft_order_complete = DraftOrderComplete.Field()\n draft_order_create = DraftOrderCreate.Field()\n draft_order_delete = DraftOrderDelete.Field()\n draft_order_bulk_delete = DraftOrderBulkDelete.Field()\n draft_order_lines_create = DraftOrderLinesCreate.Field()\n draft_order_line_delete = DraftOrderLineDelete.Field()\n draft_order_line_update = DraftOrderLineUpdate.Field()\n draft_order_update = DraftOrderUpdate.Field()\n\n order_add_note = OrderAddNote.Field()\n order_cancel = OrderCancel.Field()\n order_capture = OrderCapture.Field()\n order_fulfillment_cancel = FulfillmentCancel.Field()\n order_fulfillment_create = FulfillmentCreate.Field()\n order_fulfillment_update_tracking = FulfillmentUpdateTracking.Field()\n order_mark_as_paid = OrderMarkAsPaid.Field()\n order_refund = OrderRefund.Field()\n order_update = OrderUpdate.Field()\n order_update_shipping = OrderUpdateShipping.Field()\n order_void = OrderVoid.Field()\n", "path": "saleor/graphql/order/schema.py"}], "after_files": [{"content": "import graphene\nimport graphene_django_optimizer as gql_optimizer\n\nfrom ...order import OrderEvents, OrderStatus, models\nfrom ...order.utils import sum_order_totals\nfrom ..utils import filter_by_period, filter_by_query_param\nfrom .enums import OrderStatusFilter\nfrom .types import Order\nfrom .utils import applicable_shipping_methods\n\nORDER_SEARCH_FIELDS = (\n 'id', 'discount_name', 'token', 'user_email', 'user__email')\n\n\ndef filter_orders(qs, info, created, status, query):\n qs = filter_by_query_param(qs, query, ORDER_SEARCH_FIELDS)\n\n # filter orders by status\n if status is not None:\n if status == OrderStatusFilter.READY_TO_FULFILL:\n qs = qs.ready_to_fulfill()\n elif status == OrderStatusFilter.READY_TO_CAPTURE:\n qs = qs.ready_to_capture()\n\n # filter orders by creation date\n if created is not None:\n qs = filter_by_period(qs, created, 'created')\n\n return gql_optimizer.query(qs, info)\n\n\ndef resolve_orders(info, created, status, query):\n user = info.context.user\n if user.has_perm('order.manage_orders'):\n qs = models.Order.objects.confirmed()\n else:\n qs = user.orders.confirmed()\n return 
filter_orders(qs, info, created, status, query)\n\n\ndef resolve_draft_orders(info, created, query):\n qs = models.Order.objects.drafts()\n return filter_orders(qs, info, created, None, query)\n\n\ndef resolve_orders_total(info, period):\n qs = models.Order.objects.confirmed().exclude(status=OrderStatus.CANCELED)\n qs = filter_by_period(qs, period, 'created')\n return sum_order_totals(qs)\n\n\ndef resolve_order(info, id):\n \"\"\"Return order only for user assigned to it or proper staff user.\"\"\"\n user = info.context.user\n order = graphene.Node.get_node_from_global_id(info, id, Order)\n if user.has_perm('order.manage_orders') or order.user == user:\n return order\n return None\n\n\ndef resolve_shipping_methods(obj, info, price):\n return applicable_shipping_methods(obj, info, price)\n\n\ndef resolve_homepage_events(info):\n # Filter only selected events to be displayed on homepage.\n types = [\n OrderEvents.PLACED.value, OrderEvents.PLACED_FROM_DRAFT.value,\n OrderEvents.ORDER_FULLY_PAID.value]\n return models.OrderEvent.objects.filter(type__in=types)\n\n\ndef resolve_order_by_token(info, token):\n return models.Order.objects.filter(token=token).first()\n", "path": "saleor/graphql/order/resolvers.py"}, {"content": "from textwrap import dedent\n\nimport graphene\nfrom graphql_jwt.decorators import login_required, permission_required\n\nfrom ..core.enums import ReportingPeriod\nfrom ..core.fields import PrefetchingConnectionField\nfrom ..core.types import TaxedMoney\nfrom ..descriptions import DESCRIPTIONS\nfrom .bulk_mutations.draft_orders import DraftOrderBulkDelete\nfrom .enums import OrderStatusFilter\nfrom .mutations.draft_orders import (\n DraftOrderComplete, DraftOrderCreate, DraftOrderDelete,\n DraftOrderLineDelete, DraftOrderLinesCreate, DraftOrderLineUpdate,\n DraftOrderUpdate)\nfrom .mutations.fulfillments import (\n FulfillmentCancel, FulfillmentCreate, FulfillmentUpdateTracking)\nfrom .mutations.orders import (\n OrderAddNote, OrderCancel, OrderCapture, OrderMarkAsPaid, OrderRefund,\n OrderUpdate, OrderUpdateShipping, OrderVoid)\nfrom .resolvers import (\n resolve_draft_orders, resolve_homepage_events, resolve_order,\n resolve_order_by_token, resolve_orders, resolve_orders_total)\nfrom .types import Order, OrderEvent\n\n\nclass OrderQueries(graphene.ObjectType):\n homepage_events = PrefetchingConnectionField(\n OrderEvent, description=dedent('''List of activity events to display on\n homepage (at the moment it only contains order-events).'''))\n order = graphene.Field(\n Order, description='Lookup an order by ID.',\n id=graphene.Argument(graphene.ID, required=True))\n orders = PrefetchingConnectionField(\n Order,\n query=graphene.String(description=DESCRIPTIONS['order']),\n created=graphene.Argument(\n ReportingPeriod,\n description='Filter orders from a selected timespan.'),\n status=graphene.Argument(\n OrderStatusFilter, description='Filter order by status'),\n description='List of the shop\\'s orders.')\n draft_orders = PrefetchingConnectionField(\n Order,\n query=graphene.String(description=DESCRIPTIONS['order']),\n created=graphene.Argument(\n ReportingPeriod,\n description='Filter draft orders from a selected timespan.'),\n description='List of the shop\\'s draft orders.')\n orders_total = graphene.Field(\n TaxedMoney, description='Total sales.',\n period=graphene.Argument(\n ReportingPeriod,\n description='Get total sales for selected span of time.'))\n order_by_token = graphene.Field(\n Order, description='Lookup an order by token.',\n 
token=graphene.Argument(graphene.String, required=True))\n\n @permission_required('order.manage_orders')\n def resolve_homepage_events(self, info, **kwargs):\n return resolve_homepage_events(info)\n\n @login_required\n def resolve_order(self, info, id):\n return resolve_order(info, id)\n\n @login_required\n def resolve_orders(\n self, info, created=None, status=None, query=None, **kwargs):\n return resolve_orders(info, created, status, query)\n\n @permission_required('order.manage_orders')\n def resolve_draft_orders(self, info, created=None, query=None, **kwargs):\n return resolve_draft_orders(info, created, query)\n\n @permission_required('order.manage_orders')\n def resolve_orders_total(self, info, period, **kwargs):\n return resolve_orders_total(info, period)\n\n def resolve_order_by_token(self, info, token):\n return resolve_order_by_token(info, token)\n\n\nclass OrderMutations(graphene.ObjectType):\n draft_order_complete = DraftOrderComplete.Field()\n draft_order_create = DraftOrderCreate.Field()\n draft_order_delete = DraftOrderDelete.Field()\n draft_order_bulk_delete = DraftOrderBulkDelete.Field()\n draft_order_lines_create = DraftOrderLinesCreate.Field()\n draft_order_line_delete = DraftOrderLineDelete.Field()\n draft_order_line_update = DraftOrderLineUpdate.Field()\n draft_order_update = DraftOrderUpdate.Field()\n\n order_add_note = OrderAddNote.Field()\n order_cancel = OrderCancel.Field()\n order_capture = OrderCapture.Field()\n order_fulfillment_cancel = FulfillmentCancel.Field()\n order_fulfillment_create = FulfillmentCreate.Field()\n order_fulfillment_update_tracking = FulfillmentUpdateTracking.Field()\n order_mark_as_paid = OrderMarkAsPaid.Field()\n order_refund = OrderRefund.Field()\n order_update = OrderUpdate.Field()\n order_update_shipping = OrderUpdateShipping.Field()\n order_void = OrderVoid.Field()\n", "path": "saleor/graphql/order/schema.py"}]}
| 1,948 | 747 |
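With the schema change above, Dashboard 2.0 can list drafts through a dedicated `draft_orders` field, guarded by the `order.manage_orders` permission, while `orders` now returns only confirmed orders. The snippet below is a hypothetical client-side query against the new field; graphene exposes snake_case fields as camelCase by default, and the pagination argument and selection set are illustrative rather than taken from Saleor's documented schema.

```python
# Illustrative only: how a dashboard client might exercise the new field.
DRAFT_ORDERS_QUERY = """
query {
  draftOrders(first: 20, query: "john") {
    edges {
      node {
        id
      }
    }
  }
}
"""
```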
gh_patches_debug_28007
|
rasdani/github-patches
|
git_diff
|
PlasmaPy__PlasmaPy-228
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use Dawson function for dispersion
`plasma_dispersion_func` under `mathematics.py` currently uses `erf()` along with some other terms. This can be simplified to the Dawson function, [dawsn](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.dawsn.html), and may even offer some minor speedups if scipy implements it in C code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/mathematics/mathematics.py`
Content:
```
1 """Functions related to the plasma dispersion function"""
2
3 import numpy as np
4 from scipy import special
5 from astropy import units as u
6
7
8 def plasma_dispersion_func(zeta):
9 r"""
10 Calculate the plasma dispersion function
11
12 Parameters
13 ----------
14 zeta : complex, int, float, ndarray, or Quantity
15 Argument of plasma dispersion function.
16
17 Returns
18 -------
19 Z : complex, float, or ndarray
20 Value of plasma dispersion function.
21
22 Raises
23 ------
24 TypeError
25 If the argument is invalid.
26 UnitsError
27 If the argument is a Quantity but is not dimensionless
28 ValueError
29 If the argument is not entirely finite
30
31 See also
32 --------
33 plasma_dispersion_func_deriv
34
35 Notes
36 -----
37 The plasma dispersion function is defined as:
38
39 .. math::
40 Z(\zeta) = \pi^{-0.5} \int_{-\infty}^{+\infty} \frac{e^{-x^2}}{x-\zeta} dx
41
42 where the argument is a complex number [fried.conte-1961]_.
43
44 In plasma wave theory, the plasma dispersion function appears
45 frequently when the background medium has a Maxwellian
46 distribution function. The argument of this function then refers
47 to the ratio of a wave's phase velocity to a thermal velocity.
48
49 References
50 ----------
51 .. [fried.conte-1961] Fried, Burton D. and Samuel D. Conte. 1961.
52 The Plasma Dispersion Function: The Hilbert Transformation of the
53 Gaussian. Academic Press (New York and London).
54
55 Examples
56 --------
57 >>> plasma_dispersion_func(0)
58 1.7724538509055159j
59 >>> plasma_dispersion_func(1j)
60 0.7578721561413119j
61 >>> plasma_dispersion_func(-1.52+0.47j)
62 (0.6088888957234255+0.3349458388287403j)
63
64 """
65
66 if not isinstance(zeta, (int, float, complex, np.ndarray, u.Quantity)):
67 raise TypeError("The argument to plasma_dispersion_function "
68 "must be one of the following types: complex, float, "
69 "int, ndarray, or Quantity.")
70
71 if isinstance(zeta, u.Quantity):
72 if zeta.unit == u.dimensionless_unscaled:
73 zeta = zeta.value
74 else:
75 raise u.UnitsError("The argument to plasma_dispersion_function "
76 "must be dimensionless if it is a Quantity")
77
78 if not np.all(np.isfinite(zeta)):
79 raise ValueError("The argument to plasma_dispersion_function is "
80 "not finite.")
81
82 Z = 1j * np.sqrt(np.pi) * np.exp(-zeta**2) * (1.0 + special.erf(1j * zeta))
83
84 return Z
85
86
87 def plasma_dispersion_func_deriv(zeta):
88 r"""Calculate the derivative of the plasma dispersion function
89
90 Parameters
91 ----------
92 zeta : complex, int, float, ndarray, or Quantity
93 Argument of plasma dispersion function.
94
95 Returns
96 -------
97 Zprime : complex, int, float, or ndarray
98 First derivative of plasma dispersion function.
99
100 Raises
101 ------
102 TypeError
103 If the argument is invalid.
104 UnitsError
105 If the argument is a Quantity but is not dimensionless
106 ValueError
107 If the argument is not entirely finite
108
109 See also
110 --------
111 plasma_dispersion_func
112
113 Notes
114 -----
115 The derivative of the plasma dispersion function is defined as:
116
117 .. math::
118 Z'(\zeta) = \pi^{-0.5} \int_{-\infty}^{+\infty} \frac{e^{-x^2}}{(x-\zeta)^2} dx
119
120 where the argument is a complex number [fried.conte-1961]_.
121
122 Examples
123 --------
124 >>> plasma_dispersion_func_deriv(0)
125 (-2+0j)
126 >>> plasma_dispersion_func_deriv(1j)
127 (-0.48425568771737626+0j)
128 >>> plasma_dispersion_func_deriv(-1.52+0.47j)
129 (0.1658713314982294+0.4458797880593507j)
130
131 """
132
133 if not isinstance(zeta, (int, float, complex, np.ndarray, u.Quantity)):
134 raise TypeError("The argument to plasma_dispersion_function_deriv "
135 "must be one of the following types: complex, float, "
136 "int, ndarray, or Quantity.")
137
138 if isinstance(zeta, u.Quantity):
139 if zeta.unit == u.dimensionless_unscaled:
140 zeta = zeta.value
141 else:
142 raise u.UnitsError("The argument to "
143 "plasma_dispersion_function_deriv "
144 "must be dimensionless if it is a Quantity")
145
146 if not np.all(np.isfinite(zeta)):
147 raise ValueError("The argument to plasma_dispersion_function_deriv is "
148 "not finite.")
149
150 Zprime = -2 * (1 + zeta * plasma_dispersion_func(zeta))
151
152 return Zprime
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plasmapy/mathematics/mathematics.py b/plasmapy/mathematics/mathematics.py
--- a/plasmapy/mathematics/mathematics.py
+++ b/plasmapy/mathematics/mathematics.py
@@ -3,6 +3,7 @@
import numpy as np
from scipy import special
from astropy import units as u
+from scipy.special import wofz as Faddeeva_function
def plasma_dispersion_func(zeta):
@@ -57,9 +58,9 @@
>>> plasma_dispersion_func(0)
1.7724538509055159j
>>> plasma_dispersion_func(1j)
- 0.7578721561413119j
+ 0.757872156141312j
>>> plasma_dispersion_func(-1.52+0.47j)
- (0.6088888957234255+0.3349458388287403j)
+ (0.6088888957234254+0.33494583882874024j)
"""
@@ -79,7 +80,7 @@
raise ValueError("The argument to plasma_dispersion_function is "
"not finite.")
- Z = 1j * np.sqrt(np.pi) * np.exp(-zeta**2) * (1.0 + special.erf(1j * zeta))
+ Z = 1j * np.sqrt(np.pi) * Faddeeva_function(zeta)
return Z
@@ -124,9 +125,9 @@
>>> plasma_dispersion_func_deriv(0)
(-2+0j)
>>> plasma_dispersion_func_deriv(1j)
- (-0.48425568771737626+0j)
+ (-0.48425568771737604+0j)
>>> plasma_dispersion_func_deriv(-1.52+0.47j)
- (0.1658713314982294+0.4458797880593507j)
+ (0.16587133149822897+0.44587978805935047j)
"""
|
{"golden_diff": "diff --git a/plasmapy/mathematics/mathematics.py b/plasmapy/mathematics/mathematics.py\n--- a/plasmapy/mathematics/mathematics.py\n+++ b/plasmapy/mathematics/mathematics.py\n@@ -3,6 +3,7 @@\n import numpy as np\n from scipy import special\n from astropy import units as u\n+from scipy.special import wofz as Faddeeva_function\n \n \n def plasma_dispersion_func(zeta):\n@@ -57,9 +58,9 @@\n >>> plasma_dispersion_func(0)\n 1.7724538509055159j\n >>> plasma_dispersion_func(1j)\n- 0.7578721561413119j\n+ 0.757872156141312j\n >>> plasma_dispersion_func(-1.52+0.47j)\n- (0.6088888957234255+0.3349458388287403j)\n+ (0.6088888957234254+0.33494583882874024j)\n \n \"\"\"\n \n@@ -79,7 +80,7 @@\n raise ValueError(\"The argument to plasma_dispersion_function is \"\n \"not finite.\")\n \n- Z = 1j * np.sqrt(np.pi) * np.exp(-zeta**2) * (1.0 + special.erf(1j * zeta))\n+ Z = 1j * np.sqrt(np.pi) * Faddeeva_function(zeta)\n \n return Z\n \n@@ -124,9 +125,9 @@\n >>> plasma_dispersion_func_deriv(0)\n (-2+0j)\n >>> plasma_dispersion_func_deriv(1j)\n- (-0.48425568771737626+0j)\n+ (-0.48425568771737604+0j)\n >>> plasma_dispersion_func_deriv(-1.52+0.47j)\n- (0.1658713314982294+0.4458797880593507j)\n+ (0.16587133149822897+0.44587978805935047j)\n \n \"\"\"\n", "issue": "Use Dawson function for dispersion\n`plasma_dispersion_func` under `mathematics.py` currently uses `erf()` along with some other terms. This can be simplified to Dawson function, [dawsn](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.dawsn.html), and may even offer some minor speedups if scipy implements it in C code.\n", "before_files": [{"content": "\"\"\"Functions related to the plasma dispersion function\"\"\"\n\nimport numpy as np\nfrom scipy import special\nfrom astropy import units as u\n\n\ndef plasma_dispersion_func(zeta):\n r\"\"\"\n Calculate the plasma dispersion function\n\n Parameters\n ----------\n zeta : complex, int, float, ndarray, or Quantity\n Argument of plasma dispersion function.\n\n Returns\n -------\n Z : complex, float, or ndarray\n Value of plasma dispersion function.\n\n Raises\n ------\n TypeError\n If the argument is invalid.\n UnitsError\n If the argument is a Quantity but is not dimensionless\n ValueError\n If the argument is not entirely finite\n\n See also\n --------\n plasma_dispersion_func_deriv\n\n Notes\n -----\n The plasma dispersion function is defined as:\n\n .. math::\n Z(\\zeta) = \\pi^{-0.5} \\int_{-\\infty}^{+\\infty} \\frac{e^{-x^2}}{x-\\zeta} dx\n\n where the argument is a complex number [fried.conte-1961]_.\n\n In plasma wave theory, the plasma dispersion function appears\n frequently when the background medium has a Maxwellian\n distribution function. The argument of this function then refers\n to the ratio of a wave's phase velocity to a thermal velocity.\n\n References\n ----------\n .. [fried.conte-1961] Fried, Burton D. and Samuel D. Conte. 1961.\n The Plasma Dispersion Function: The Hilbert Transformation of the\n Gaussian. 
Academic Press (New York and London).\n\n Examples\n --------\n >>> plasma_dispersion_func(0)\n 1.7724538509055159j\n >>> plasma_dispersion_func(1j)\n 0.7578721561413119j\n >>> plasma_dispersion_func(-1.52+0.47j)\n (0.6088888957234255+0.3349458388287403j)\n\n \"\"\"\n\n if not isinstance(zeta, (int, float, complex, np.ndarray, u.Quantity)):\n raise TypeError(\"The argument to plasma_dispersion_function \"\n \"must be one of the following types: complex, float, \"\n \"int, ndarray, or Quantity.\")\n\n if isinstance(zeta, u.Quantity):\n if zeta.unit == u.dimensionless_unscaled:\n zeta = zeta.value\n else:\n raise u.UnitsError(\"The argument to plasma_dispersion_function \"\n \"must be dimensionless if it is a Quantity\")\n\n if not np.all(np.isfinite(zeta)):\n raise ValueError(\"The argument to plasma_dispersion_function is \"\n \"not finite.\")\n\n Z = 1j * np.sqrt(np.pi) * np.exp(-zeta**2) * (1.0 + special.erf(1j * zeta))\n\n return Z\n\n\ndef plasma_dispersion_func_deriv(zeta):\n r\"\"\"Calculate the derivative of the plasma dispersion function\n\n Parameters\n ----------\n zeta : complex, int, float, ndarray, or Quantity\n Argument of plasma dispersion function.\n\n Returns\n -------\n Zprime : complex, int, float, or ndarray\n First derivative of plasma dispersion function.\n\n Raises\n ------\n TypeError\n If the argument is invalid.\n UnitsError\n If the argument is a Quantity but is not dimensionless\n ValueError\n If the argument is not entirely finite\n\n See also\n --------\n plasma_dispersion_func\n\n Notes\n -----\n The derivative of the plasma dispersion function is defined as:\n\n .. math::\n Z'(\\zeta) = \\pi^{-0.5} \\int_{-\\infty}^{+\\infty} \\frac{e^{-x^2}}{(x-\\zeta)^2} dx\n\n where the argument is a complex number [fried.conte-1961]_.\n\n Examples\n --------\n >>> plasma_dispersion_func_deriv(0)\n (-2+0j)\n >>> plasma_dispersion_func_deriv(1j)\n (-0.48425568771737626+0j)\n >>> plasma_dispersion_func_deriv(-1.52+0.47j)\n (0.1658713314982294+0.4458797880593507j)\n\n \"\"\"\n\n if not isinstance(zeta, (int, float, complex, np.ndarray, u.Quantity)):\n raise TypeError(\"The argument to plasma_dispersion_function_deriv \"\n \"must be one of the following types: complex, float, \"\n \"int, ndarray, or Quantity.\")\n\n if isinstance(zeta, u.Quantity):\n if zeta.unit == u.dimensionless_unscaled:\n zeta = zeta.value\n else:\n raise u.UnitsError(\"The argument to \"\n \"plasma_dispersion_function_deriv \"\n \"must be dimensionless if it is a Quantity\")\n\n if not np.all(np.isfinite(zeta)):\n raise ValueError(\"The argument to plasma_dispersion_function_deriv is \"\n \"not finite.\")\n\n Zprime = -2 * (1 + zeta * plasma_dispersion_func(zeta))\n\n return Zprime\n", "path": "plasmapy/mathematics/mathematics.py"}], "after_files": [{"content": "\"\"\"Functions related to the plasma dispersion function\"\"\"\n\nimport numpy as np\nfrom scipy import special\nfrom astropy import units as u\nfrom scipy.special import wofz as Faddeeva_function\n\n\ndef plasma_dispersion_func(zeta):\n r\"\"\"\n Calculate the plasma dispersion function\n\n Parameters\n ----------\n zeta : complex, int, float, ndarray, or Quantity\n Argument of plasma dispersion function.\n\n Returns\n -------\n Z : complex, float, or ndarray\n Value of plasma dispersion function.\n\n Raises\n ------\n TypeError\n If the argument is invalid.\n UnitsError\n If the argument is a Quantity but is not dimensionless\n ValueError\n If the argument is not entirely finite\n\n See also\n --------\n plasma_dispersion_func_deriv\n\n 
Notes\n -----\n The plasma dispersion function is defined as:\n\n .. math::\n Z(\\zeta) = \\pi^{-0.5} \\int_{-\\infty}^{+\\infty} \\frac{e^{-x^2}}{x-\\zeta} dx\n\n where the argument is a complex number [fried.conte-1961]_.\n\n In plasma wave theory, the plasma dispersion function appears\n frequently when the background medium has a Maxwellian\n distribution function. The argument of this function then refers\n to the ratio of a wave's phase velocity to a thermal velocity.\n\n References\n ----------\n .. [fried.conte-1961] Fried, Burton D. and Samuel D. Conte. 1961.\n The Plasma Dispersion Function: The Hilbert Transformation of the\n Gaussian. Academic Press (New York and London).\n\n Examples\n --------\n >>> plasma_dispersion_func(0)\n 1.7724538509055159j\n >>> plasma_dispersion_func(1j)\n 0.757872156141312j\n >>> plasma_dispersion_func(-1.52+0.47j)\n (0.6088888957234254+0.33494583882874024j)\n\n \"\"\"\n\n if not isinstance(zeta, (int, float, complex, np.ndarray, u.Quantity)):\n raise TypeError(\"The argument to plasma_dispersion_function \"\n \"must be one of the following types: complex, float, \"\n \"int, ndarray, or Quantity.\")\n\n if isinstance(zeta, u.Quantity):\n if zeta.unit == u.dimensionless_unscaled:\n zeta = zeta.value\n else:\n raise u.UnitsError(\"The argument to plasma_dispersion_function \"\n \"must be dimensionless if it is a Quantity\")\n\n if not np.all(np.isfinite(zeta)):\n raise ValueError(\"The argument to plasma_dispersion_function is \"\n \"not finite.\")\n\n Z = 1j * np.sqrt(np.pi) * Faddeeva_function(zeta)\n\n return Z\n\n\ndef plasma_dispersion_func_deriv(zeta):\n r\"\"\"Calculate the derivative of the plasma dispersion function\n\n Parameters\n ----------\n zeta : complex, int, float, ndarray, or Quantity\n Argument of plasma dispersion function.\n\n Returns\n -------\n Zprime : complex, int, float, or ndarray\n First derivative of plasma dispersion function.\n\n Raises\n ------\n TypeError\n If the argument is invalid.\n UnitsError\n If the argument is a Quantity but is not dimensionless\n ValueError\n If the argument is not entirely finite\n\n See also\n --------\n plasma_dispersion_func\n\n Notes\n -----\n The derivative of the plasma dispersion function is defined as:\n\n .. math::\n Z'(\\zeta) = \\pi^{-0.5} \\int_{-\\infty}^{+\\infty} \\frac{e^{-x^2}}{(x-\\zeta)^2} dx\n\n where the argument is a complex number [fried.conte-1961]_.\n\n Examples\n --------\n >>> plasma_dispersion_func_deriv(0)\n (-2+0j)\n >>> plasma_dispersion_func_deriv(1j)\n (-0.48425568771737604+0j)\n >>> plasma_dispersion_func_deriv(-1.52+0.47j)\n (0.16587133149822897+0.44587978805935047j)\n\n \"\"\"\n\n if not isinstance(zeta, (int, float, complex, np.ndarray, u.Quantity)):\n raise TypeError(\"The argument to plasma_dispersion_function_deriv \"\n \"must be one of the following types: complex, float, \"\n \"int, ndarray, or Quantity.\")\n\n if isinstance(zeta, u.Quantity):\n if zeta.unit == u.dimensionless_unscaled:\n zeta = zeta.value\n else:\n raise u.UnitsError(\"The argument to \"\n \"plasma_dispersion_function_deriv \"\n \"must be dimensionless if it is a Quantity\")\n\n if not np.all(np.isfinite(zeta)):\n raise ValueError(\"The argument to plasma_dispersion_function_deriv is \"\n \"not finite.\")\n\n Zprime = -2 * (1 + zeta * plasma_dispersion_func(zeta))\n\n return Zprime\n", "path": "plasmapy/mathematics/mathematics.py"}]}
| 1,899 | 603 |
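The record above swaps an erf-based formula for `scipy.special.wofz` (the Faddeeva function). As a quick cross-check — an illustrative sketch, not code taken from the record — the two formulations can be compared numerically; the only assumptions are an installed `numpy`/`scipy` and the identity Z(ζ) = i√π·w(ζ) that the golden diff relies on.

```python
# Standalone numerical check (illustrative sketch, not part of the dataset record):
# the Faddeeva-function form from the golden diff should match the original
# erf-based expression of the plasma dispersion function.
import numpy as np
from scipy import special
from scipy.special import wofz as Faddeeva_function


def Z_erf(zeta):
    # original formulation from before_files
    return 1j * np.sqrt(np.pi) * np.exp(-zeta**2) * (1.0 + special.erf(1j * zeta))


def Z_wofz(zeta):
    # formulation introduced by the golden diff
    return 1j * np.sqrt(np.pi) * Faddeeva_function(zeta)


zeta = -1.52 + 0.47j
print(Z_erf(zeta))    # ~(0.6088888957234255+0.3349458388287403j), the old docstring value
print(Z_wofz(zeta))   # ~(0.6088888957234254+0.33494583882874024j), the updated docstring value
print(-2 * (1 + zeta * Z_wofz(zeta)))  # derivative via Z'(zeta) = -2(1 + zeta*Z(zeta))
```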
gh_patches_debug_448
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-1770
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable multiprocessing when testing with GPU and support distributed strategies in the tests.
**Describe the feature and the current behavior/state.**
Here I'm not going to discuss the bazel case as it's much more complicated to handle, and we currently advertise using pytest anyway to run the tests. We can of course make sure everything is compatible though.
This revamping of gpu testing has multiple objectives:
* The tests should behave the same whether the contributor has a gpu or not. Meaning we shouldn't run all the tests on a gpu just because a gpu is available, otherwise it hurts reproducibility.
* The test suite should be able to run with multiple workers in kokoro or when a user has multiple gpus. Pytest should use all gpus visible to the main process.
* We need to support testing with distributed strategies. Currently it doesn't work. A fix has been started in #1209 but we need to update it for pytest.
* Making the whole thing simple to use and to maintain. Notably, we would get rid of this file: https://github.com/tensorflow/addons/blob/master/tools/testing/parallel_gpu_execute.sh which is quite hard to work on.
To do all that, here is my proposal:
Stuff to know:
* Pytest-xdist uses multiprocessing to run the tests, not multithreading.
* 2 environment variables are available in each of the workers to identify them. https://github.com/pytest-dev/pytest-xdist#identifying-the-worker-process-during-a-test
### Test workers
Suppose we have a machine with 10 CPUs and 4 GPUs, 10 processes will start to run the test suite. Workers 0 to 3 will have ownership of one GPU each (we can use CUDA_VISIBLE_DEVICES to enforce that, but I'm not even sure that's needed with the proposed implementation). Workers 4 to 9 will have no gpu available.
### Virtual devices
Each of those processes, when starting, will split their physical device into 2 virtual device. Tests that just need to run on gpu will use the first of those virtual devices. Processes which need to test distributed strategies will use the two of them. We assume here that 2 virtual devices are enough to test distributed strategies.
### Impact on the contributors:
For this whole machinery to work, we need to know which test needs to run on CPU, GPU, or in distributed strategies. To do that we'll use pytest markers: `@pytest.mark.....`
* By default, if no marker is found, the test will run on CPU: `with device("CPU:0")`. It's equivalent to
`@pytest.mark.run_on(["cpu"])`.
* To run with gpu only: `@pytest.mark.run_on(["gpu"])`.
* To run on the cpu and gpu: `@pytest.mark.run_on(["cpu", "gpu"])` (test runs twice)
* To run within a distributed strategy `@pytest.mark.run_on(["distributed strategy"])`. (runs once here).
* To run with everything `@pytest.mark.run_on(["cpu", "gpu", "distributed strategy"])`
* To make crazy stuff, and not run the test in any device scope: `@pytest.mark.no_device_scope`. Then the contributor can do whatever he/she wants in the test.
Of course, if no gpu is available, we just skip the tests needing a distribution strategy or the gpu. Contributors who handle the devices manually have to make sure to manually skip the test if the gpu is used.
Since gpus are often the scarcest resource (nb gpus << nb cpus), tests needing the gpu will also be marked with `@pytest.mark.tryfirst` to ensure that we don't have worker starvation at the end (to get maximum speed).
To implement that, we first need to convert all tests to pytest (as opposed to unittest); it's currently 80% done, and thanks a lot @autoih for putting a LOT of work into that.
**Relevant information**
- Are you willing to contribute it (yes/no): yes
- Are you willing to maintain it going forward? (yes/no): yes
- Is there a relevant academic paper? (if so, where): no
- Is there already an implementation in another framework? (if so, where): no
- Was it part of tf.contrib? (if so, where): no
**Which API type would this fall under (layer, metric, optimizer, etc.)**
Testing
**Who will benefit with this feature?**
Contributors with gpu, CI.
**Any other info.**
I believe that the implementation will first go in tensorflow addons because we have 4 GPUs available in the CI. Later on when it's stable we can split it from tensorflow addons and make it a separate pytest plugin with a public API.
Comments welcome. Especially from @Squadrick , @hyang0129 , @seanpmorgan since I'm not a ninja of tf.device.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/conftest.py`
Content:
```
1 from tensorflow_addons.utils.test_utils import ( # noqa: F401
2 maybe_run_functions_eagerly,
3 pytest_make_parametrize_id,
4 data_format,
5 set_seeds,
6 pytest_addoption,
7 set_global_variables,
8 pytest_configure,
9 device,
10 pytest_generate_tests,
11 )
12
13 # fixtures present in this file will be available
14 # when running tests and can be referenced with strings
15 # https://docs.pytest.org/en/latest/fixture.html#conftest-py-sharing-fixture-functions
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensorflow_addons/conftest.py b/tensorflow_addons/conftest.py
--- a/tensorflow_addons/conftest.py
+++ b/tensorflow_addons/conftest.py
@@ -8,6 +8,7 @@
pytest_configure,
device,
pytest_generate_tests,
+ pytest_collection_modifyitems,
)
# fixtures present in this file will be available
|
{"golden_diff": "diff --git a/tensorflow_addons/conftest.py b/tensorflow_addons/conftest.py\n--- a/tensorflow_addons/conftest.py\n+++ b/tensorflow_addons/conftest.py\n@@ -8,6 +8,7 @@\n pytest_configure,\n device,\n pytest_generate_tests,\n+ pytest_collection_modifyitems,\n )\n \n # fixtures present in this file will be available\n", "issue": "Enable multiprocessing when testing with GPU and support distributed strategies in the tests.\n**Describe the feature and the current behavior/state.**\r\n\r\nHere I'm not going to discuss the bazel case as it's much more complicated to handle, and we currently advertize using pytest anyway to run the tests. We can of course make sure everything is compatible though.\r\n\r\nThis revamping of gpu testing has multiple objectives:\r\n\r\n* The tests should behave the same weither the contributor has a gpu or not. Meaning we shouldn't run all the tests on a gpu just because a gpu is available, otherwise it hurts reproducibility.\r\n* The test suite should be able to run with multiple workers in kokoro or when a user has multiple gpus. Pytest should use all gpus visible by the main process.\r\n* We need to support testing with distributed strategies. Currently it doesn't work. A fix has been started in #1209 but we need to update it for pytest. \r\n* Making the whole thing simple to use and to maintain. Notably, we would get rid of this file: https://github.com/tensorflow/addons/blob/master/tools/testing/parallel_gpu_execute.sh which is quite hard to work on.\r\n\r\n\r\nTo do all that, here is my proposal:\r\n\r\nStuff to know: \r\n* Pytest-xdist uses multiprocessing to run the tests, not multithreading. \r\n* 2 environement variables are available in each of the workers to identify them. https://github.com/pytest-dev/pytest-xdist#identifying-the-worker-process-during-a-test\r\n\r\n\r\n### Test workers\r\nSuppose we have a machine with 10CPUs and 4 GPUs, 10 processes will start to run the test suite. Workers 0 to 3 will have ownership of one GPU each (we can use CUDA_VISIBLE_DEVICES to enforce that, but I'm not even sure that's needed with the proposed implementation). Workers 4 to 9 will have no gpu available.\r\n\r\n### Virtual devices\r\nEach of those processes, when starting, will split their physical device into 2 virtual device. Tests that just need to run on gpu will use the first of those virtual devices. Processes which need to test distributed strategies will use the two of them. We assume here that 2 virtual devices are enough to test distributed strategies.\r\n\r\n### Impact on the contributors:\r\nFor this whole machinery to work, we need to know which test needs to run on CPU, GPU, or in distributed strategies. To do that we'll use pytest markers: `@pytest.mark.....`\r\n\r\n* By default, if no marker is found, the test will run on CPU: `with device(\"CPU:0\")`. It's equivalent to \r\n`@pytest.mark.run_on([\"cpu\"])`.\r\n* To run with gpu only: `@pytest.mark.run_on([\"gpu\"])`.\r\n* To run on the cpu and gpu: `@pytest.mark.run_on([\"cpu\", \"gpu\"])` (test runs twice)\r\n* To run in within a distributed strategy `@pytest.mark.run_on([\"distributed strategy\"])`. (runs once here).\r\n* To run with everything `@pytest.mark.run_on([\"cpu\", \"gpu\", \"distributed strategy\"])`\r\n* To make crazy stuff, and not run the test in any device scope: `@pytest.mark.no_device_scope`. Then the contributor can do whatever he/she wants in the test.\r\n\r\nOf course, if no gpu is available, we just skip the tests needing a distribution strategy or the gpu. 
Contributors who handle the devices manually have to make sure to skip manually the test if the gpu is used.\r\n\r\nSince gpu are often the scarsest ressource (nb gpus << nb cpus), tests needing the gpu will also be marked with `@pytest.mark.tryfirst` to ensure that we don't have workers starvation at the end (to get maximum speed).\r\n\r\nTo implement that, we need first to convert all tests to pytest (as opposed to unittest) it's currently 80% done and thanks a lot @autoih for putting a LOT of work into that.\r\n\r\n\r\n\r\n**Relevant information**\r\n- Are you willing to contribute it (yes/no): yes\r\n- Are you willing to maintain it going forward? (yes/no): yes\r\n- Is there a relevant academic paper? (if so, where): no\r\n- Is there already an implementation in another framework? (if so, where): no\r\n- Was it part of tf.contrib? (if so, where): no\r\n\r\n**Which API type would this fall under (layer, metric, optimizer, etc.)**\r\n\r\nTesting\r\n\r\n**Who will benefit with this feature?**\r\n\r\nContributors with gpu, CI.\r\n\r\n**Any other info.**\r\n\r\nI believe that the implementation will first go in tensorflow addons because we have 4 GPUs available in the CI. Later on when it's stable we can split it from tensorflow addons and make it a separate pytest plugin with a public API.\r\n\r\nComments welcome. Especially from @Squadrick , @hyang0129 , @seanpmorgan since I'm not a ninja of tf.device.\n", "before_files": [{"content": "from tensorflow_addons.utils.test_utils import ( # noqa: F401\n maybe_run_functions_eagerly,\n pytest_make_parametrize_id,\n data_format,\n set_seeds,\n pytest_addoption,\n set_global_variables,\n pytest_configure,\n device,\n pytest_generate_tests,\n)\n\n# fixtures present in this file will be available\n# when running tests and can be referenced with strings\n# https://docs.pytest.org/en/latest/fixture.html#conftest-py-sharing-fixture-functions\n", "path": "tensorflow_addons/conftest.py"}], "after_files": [{"content": "from tensorflow_addons.utils.test_utils import ( # noqa: F401\n maybe_run_functions_eagerly,\n pytest_make_parametrize_id,\n data_format,\n set_seeds,\n pytest_addoption,\n set_global_variables,\n pytest_configure,\n device,\n pytest_generate_tests,\n pytest_collection_modifyitems,\n)\n\n# fixtures present in this file will be available\n# when running tests and can be referenced with strings\n# https://docs.pytest.org/en/latest/fixture.html#conftest-py-sharing-fixture-functions\n", "path": "tensorflow_addons/conftest.py"}]}
| 1,432 | 89 |
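The golden diff above only re-exports `pytest_collection_modifyitems` from `tensorflow_addons.utils.test_utils`; the hook body itself is not shown in the record. Purely as an illustration of the kind of logic such a hook could apply for the proposed `@pytest.mark.run_on([...])` marker, here is a hypothetical, simplified skip-only sketch (the real implementation in test_utils also parametrizes devices and is not reproduced here):

```python
# Hypothetical sketch only -- the actual hook lives in
# tensorflow_addons/utils/test_utils.py and is not shown in this record.
import pytest
import tensorflow as tf


def pytest_collection_modifyitems(items):
    gpu_available = bool(tf.config.list_physical_devices("GPU"))
    for item in items:
        marker = item.get_closest_marker("run_on")
        devices = marker.args[0] if marker else ["cpu"]  # default proposed in the issue
        needs_gpu = "gpu" in devices or "distributed strategy" in devices
        if needs_gpu and not gpu_available:
            # skip instead of silently falling back to CPU, to keep runs reproducible
            item.add_marker(pytest.mark.skip(reason="requires a GPU"))
```

Splitting each physical GPU into two virtual devices, as the proposal describes, would typically go through `tf.config.set_logical_device_configuration` with two `tf.config.LogicalDeviceConfiguration` entries per GPU.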
gh_patches_debug_29057
|
rasdani/github-patches
|
git_diff
|
vega__altair-2355
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow Paths in save()
Instead of allowing only string paths or file-likes, allow pathlib.Paths to be passed to `save()`.
Are these two the only places that would have to be changed?
https://github.com/altair-viz/altair/blob/54e03d403c1cec9ce2f2e8b14dc3d936c6686128/altair/utils/save.py#L8
https://github.com/altair-viz/altair/blob/54e03d403c1cec9ce2f2e8b14dc3d936c6686128/altair/utils/save.py#L72
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/utils/save.py`
Content:
```
1 import json
2
3 from .mimebundle import spec_to_mimebundle
4
5
6 def write_file_or_filename(fp, content, mode="w"):
7 """Write content to fp, whether fp is a string or a file-like object"""
8 if isinstance(fp, str):
9 with open(fp, mode) as f:
10 f.write(content)
11 else:
12 fp.write(content)
13
14
15 def save(
16 chart,
17 fp,
18 vega_version,
19 vegaembed_version,
20 format=None,
21 mode=None,
22 vegalite_version=None,
23 embed_options=None,
24 json_kwds=None,
25 webdriver="chrome",
26 scale_factor=1,
27 **kwargs,
28 ):
29 """Save a chart to file in a variety of formats
30
31 Supported formats are [json, html, png, svg]
32
33 Parameters
34 ----------
35 chart : alt.Chart
36 the chart instance to save
37 fp : string filename or file-like object
38 file in which to write the chart.
39 format : string (optional)
40 the format to write: one of ['json', 'html', 'png', 'svg'].
41 If not specified, the format will be determined from the filename.
42 mode : string (optional)
43 Either 'vega' or 'vegalite'. If not specified, then infer the mode from
44 the '$schema' property of the spec, or the ``opt`` dictionary.
45 If it's not specified in either of those places, then use 'vegalite'.
46 vega_version : string
47 For html output, the version of vega.js to use
48 vegalite_version : string
49 For html output, the version of vegalite.js to use
50 vegaembed_version : string
51 For html output, the version of vegaembed.js to use
52 embed_options : dict
53 The vegaEmbed options dictionary. Default is {}
54 (See https://github.com/vega/vega-embed for details)
55 json_kwds : dict
56 Additional keyword arguments are passed to the output method
57 associated with the specified format.
58 webdriver : string {'chrome' | 'firefox'}
59 Webdriver to use for png or svg output
60 scale_factor : float
61 scale_factor to use to change size/resolution of png or svg output
62 **kwargs :
63 additional kwargs passed to spec_to_mimebundle.
64 """
65 if json_kwds is None:
66 json_kwds = {}
67
68 if embed_options is None:
69 embed_options = {}
70
71 if format is None:
72 if isinstance(fp, str):
73 format = fp.split(".")[-1]
74 else:
75 raise ValueError(
76 "must specify file format: " "['png', 'svg', 'pdf', 'html', 'json']"
77 )
78
79 spec = chart.to_dict()
80
81 if mode is None:
82 if "mode" in embed_options:
83 mode = embed_options["mode"]
84 elif "$schema" in spec:
85 mode = spec["$schema"].split("/")[-2]
86 else:
87 mode = "vega-lite"
88
89 if mode not in ["vega", "vega-lite"]:
90 raise ValueError("mode must be 'vega' or 'vega-lite', " "not '{}'".format(mode))
91
92 if mode == "vega-lite" and vegalite_version is None:
93 raise ValueError("must specify vega-lite version")
94
95 if format == "json":
96 json_spec = json.dumps(spec, **json_kwds)
97 write_file_or_filename(fp, json_spec, mode="w")
98 elif format == "html":
99 mimebundle = spec_to_mimebundle(
100 spec=spec,
101 format=format,
102 mode=mode,
103 vega_version=vega_version,
104 vegalite_version=vegalite_version,
105 vegaembed_version=vegaembed_version,
106 embed_options=embed_options,
107 json_kwds=json_kwds,
108 **kwargs,
109 )
110 write_file_or_filename(fp, mimebundle["text/html"], mode="w")
111 elif format in ["png", "svg", "pdf"]:
112 mimebundle = spec_to_mimebundle(
113 spec=spec,
114 format=format,
115 mode=mode,
116 vega_version=vega_version,
117 vegalite_version=vegalite_version,
118 vegaembed_version=vegaembed_version,
119 webdriver=webdriver,
120 scale_factor=scale_factor,
121 **kwargs,
122 )
123 if format == "png":
124 write_file_or_filename(fp, mimebundle["image/png"], mode="wb")
125 elif format == "pdf":
126 write_file_or_filename(fp, mimebundle["application/pdf"], mode="wb")
127 else:
128 write_file_or_filename(fp, mimebundle["image/svg+xml"], mode="w")
129 else:
130 raise ValueError("unrecognized format: '{}'".format(format))
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/altair/utils/save.py b/altair/utils/save.py
--- a/altair/utils/save.py
+++ b/altair/utils/save.py
@@ -1,11 +1,13 @@
import json
+import pathlib
from .mimebundle import spec_to_mimebundle
def write_file_or_filename(fp, content, mode="w"):
- """Write content to fp, whether fp is a string or a file-like object"""
- if isinstance(fp, str):
+ """Write content to fp, whether fp is a string, a pathlib Path or a
+ file-like object"""
+ if isinstance(fp, str) or isinstance(fp, pathlib.PurePath):
with open(fp, mode) as f:
f.write(content)
else:
@@ -34,8 +36,8 @@
----------
chart : alt.Chart
the chart instance to save
- fp : string filename or file-like object
- file in which to write the chart.
+ fp : string filename, pathlib.Path or file-like object
+ file to which to write the chart.
format : string (optional)
the format to write: one of ['json', 'html', 'png', 'svg'].
If not specified, the format will be determined from the filename.
@@ -71,6 +73,8 @@
if format is None:
if isinstance(fp, str):
format = fp.split(".")[-1]
+ elif isinstance(fp, pathlib.PurePath):
+ format = fp.suffix.lstrip(".")
else:
raise ValueError(
"must specify file format: " "['png', 'svg', 'pdf', 'html', 'json']"
|
{"golden_diff": "diff --git a/altair/utils/save.py b/altair/utils/save.py\n--- a/altair/utils/save.py\n+++ b/altair/utils/save.py\n@@ -1,11 +1,13 @@\n import json\n+import pathlib\n \n from .mimebundle import spec_to_mimebundle\n \n \n def write_file_or_filename(fp, content, mode=\"w\"):\n- \"\"\"Write content to fp, whether fp is a string or a file-like object\"\"\"\n- if isinstance(fp, str):\n+ \"\"\"Write content to fp, whether fp is a string, a pathlib Path or a\n+ file-like object\"\"\"\n+ if isinstance(fp, str) or isinstance(fp, pathlib.PurePath):\n with open(fp, mode) as f:\n f.write(content)\n else:\n@@ -34,8 +36,8 @@\n ----------\n chart : alt.Chart\n the chart instance to save\n- fp : string filename or file-like object\n- file in which to write the chart.\n+ fp : string filename, pathlib.Path or file-like object\n+ file to which to write the chart.\n format : string (optional)\n the format to write: one of ['json', 'html', 'png', 'svg'].\n If not specified, the format will be determined from the filename.\n@@ -71,6 +73,8 @@\n if format is None:\n if isinstance(fp, str):\n format = fp.split(\".\")[-1]\n+ elif isinstance(fp, pathlib.PurePath):\n+ format = fp.suffix.lstrip(\".\")\n else:\n raise ValueError(\n \"must specify file format: \" \"['png', 'svg', 'pdf', 'html', 'json']\"\n", "issue": "Allow Paths in save()\nInstead of allowing only string paths or file-likes, allow pathlib.Paths to be passed to `save()`.\r\n\r\nAre these two the only places that would have to be changed?\r\n\r\nhttps://github.com/altair-viz/altair/blob/54e03d403c1cec9ce2f2e8b14dc3d936c6686128/altair/utils/save.py#L8\r\nhttps://github.com/altair-viz/altair/blob/54e03d403c1cec9ce2f2e8b14dc3d936c6686128/altair/utils/save.py#L72\n", "before_files": [{"content": "import json\n\nfrom .mimebundle import spec_to_mimebundle\n\n\ndef write_file_or_filename(fp, content, mode=\"w\"):\n \"\"\"Write content to fp, whether fp is a string or a file-like object\"\"\"\n if isinstance(fp, str):\n with open(fp, mode) as f:\n f.write(content)\n else:\n fp.write(content)\n\n\ndef save(\n chart,\n fp,\n vega_version,\n vegaembed_version,\n format=None,\n mode=None,\n vegalite_version=None,\n embed_options=None,\n json_kwds=None,\n webdriver=\"chrome\",\n scale_factor=1,\n **kwargs,\n):\n \"\"\"Save a chart to file in a variety of formats\n\n Supported formats are [json, html, png, svg]\n\n Parameters\n ----------\n chart : alt.Chart\n the chart instance to save\n fp : string filename or file-like object\n file in which to write the chart.\n format : string (optional)\n the format to write: one of ['json', 'html', 'png', 'svg'].\n If not specified, the format will be determined from the filename.\n mode : string (optional)\n Either 'vega' or 'vegalite'. If not specified, then infer the mode from\n the '$schema' property of the spec, or the ``opt`` dictionary.\n If it's not specified in either of those places, then use 'vegalite'.\n vega_version : string\n For html output, the version of vega.js to use\n vegalite_version : string\n For html output, the version of vegalite.js to use\n vegaembed_version : string\n For html output, the version of vegaembed.js to use\n embed_options : dict\n The vegaEmbed options dictionary. 
Default is {}\n (See https://github.com/vega/vega-embed for details)\n json_kwds : dict\n Additional keyword arguments are passed to the output method\n associated with the specified format.\n webdriver : string {'chrome' | 'firefox'}\n Webdriver to use for png or svg output\n scale_factor : float\n scale_factor to use to change size/resolution of png or svg output\n **kwargs :\n additional kwargs passed to spec_to_mimebundle.\n \"\"\"\n if json_kwds is None:\n json_kwds = {}\n\n if embed_options is None:\n embed_options = {}\n\n if format is None:\n if isinstance(fp, str):\n format = fp.split(\".\")[-1]\n else:\n raise ValueError(\n \"must specify file format: \" \"['png', 'svg', 'pdf', 'html', 'json']\"\n )\n\n spec = chart.to_dict()\n\n if mode is None:\n if \"mode\" in embed_options:\n mode = embed_options[\"mode\"]\n elif \"$schema\" in spec:\n mode = spec[\"$schema\"].split(\"/\")[-2]\n else:\n mode = \"vega-lite\"\n\n if mode not in [\"vega\", \"vega-lite\"]:\n raise ValueError(\"mode must be 'vega' or 'vega-lite', \" \"not '{}'\".format(mode))\n\n if mode == \"vega-lite\" and vegalite_version is None:\n raise ValueError(\"must specify vega-lite version\")\n\n if format == \"json\":\n json_spec = json.dumps(spec, **json_kwds)\n write_file_or_filename(fp, json_spec, mode=\"w\")\n elif format == \"html\":\n mimebundle = spec_to_mimebundle(\n spec=spec,\n format=format,\n mode=mode,\n vega_version=vega_version,\n vegalite_version=vegalite_version,\n vegaembed_version=vegaembed_version,\n embed_options=embed_options,\n json_kwds=json_kwds,\n **kwargs,\n )\n write_file_or_filename(fp, mimebundle[\"text/html\"], mode=\"w\")\n elif format in [\"png\", \"svg\", \"pdf\"]:\n mimebundle = spec_to_mimebundle(\n spec=spec,\n format=format,\n mode=mode,\n vega_version=vega_version,\n vegalite_version=vegalite_version,\n vegaembed_version=vegaembed_version,\n webdriver=webdriver,\n scale_factor=scale_factor,\n **kwargs,\n )\n if format == \"png\":\n write_file_or_filename(fp, mimebundle[\"image/png\"], mode=\"wb\")\n elif format == \"pdf\":\n write_file_or_filename(fp, mimebundle[\"application/pdf\"], mode=\"wb\")\n else:\n write_file_or_filename(fp, mimebundle[\"image/svg+xml\"], mode=\"w\")\n else:\n raise ValueError(\"unrecognized format: '{}'\".format(format))\n", "path": "altair/utils/save.py"}], "after_files": [{"content": "import json\nimport pathlib\n\nfrom .mimebundle import spec_to_mimebundle\n\n\ndef write_file_or_filename(fp, content, mode=\"w\"):\n \"\"\"Write content to fp, whether fp is a string, a pathlib Path or a\n file-like object\"\"\"\n if isinstance(fp, str) or isinstance(fp, pathlib.PurePath):\n with open(fp, mode) as f:\n f.write(content)\n else:\n fp.write(content)\n\n\ndef save(\n chart,\n fp,\n vega_version,\n vegaembed_version,\n format=None,\n mode=None,\n vegalite_version=None,\n embed_options=None,\n json_kwds=None,\n webdriver=\"chrome\",\n scale_factor=1,\n **kwargs,\n):\n \"\"\"Save a chart to file in a variety of formats\n\n Supported formats are [json, html, png, svg]\n\n Parameters\n ----------\n chart : alt.Chart\n the chart instance to save\n fp : string filename, pathlib.Path or file-like object\n file to which to write the chart.\n format : string (optional)\n the format to write: one of ['json', 'html', 'png', 'svg'].\n If not specified, the format will be determined from the filename.\n mode : string (optional)\n Either 'vega' or 'vegalite'. 
If not specified, then infer the mode from\n the '$schema' property of the spec, or the ``opt`` dictionary.\n If it's not specified in either of those places, then use 'vegalite'.\n vega_version : string\n For html output, the version of vega.js to use\n vegalite_version : string\n For html output, the version of vegalite.js to use\n vegaembed_version : string\n For html output, the version of vegaembed.js to use\n embed_options : dict\n The vegaEmbed options dictionary. Default is {}\n (See https://github.com/vega/vega-embed for details)\n json_kwds : dict\n Additional keyword arguments are passed to the output method\n associated with the specified format.\n webdriver : string {'chrome' | 'firefox'}\n Webdriver to use for png or svg output\n scale_factor : float\n scale_factor to use to change size/resolution of png or svg output\n **kwargs :\n additional kwargs passed to spec_to_mimebundle.\n \"\"\"\n if json_kwds is None:\n json_kwds = {}\n\n if embed_options is None:\n embed_options = {}\n\n if format is None:\n if isinstance(fp, str):\n format = fp.split(\".\")[-1]\n elif isinstance(fp, pathlib.PurePath):\n format = fp.suffix.lstrip(\".\")\n else:\n raise ValueError(\n \"must specify file format: \" \"['png', 'svg', 'pdf', 'html', 'json']\"\n )\n\n spec = chart.to_dict()\n\n if mode is None:\n if \"mode\" in embed_options:\n mode = embed_options[\"mode\"]\n elif \"$schema\" in spec:\n mode = spec[\"$schema\"].split(\"/\")[-2]\n else:\n mode = \"vega-lite\"\n\n if mode not in [\"vega\", \"vega-lite\"]:\n raise ValueError(\"mode must be 'vega' or 'vega-lite', \" \"not '{}'\".format(mode))\n\n if mode == \"vega-lite\" and vegalite_version is None:\n raise ValueError(\"must specify vega-lite version\")\n\n if format == \"json\":\n json_spec = json.dumps(spec, **json_kwds)\n write_file_or_filename(fp, json_spec, mode=\"w\")\n elif format == \"html\":\n mimebundle = spec_to_mimebundle(\n spec=spec,\n format=format,\n mode=mode,\n vega_version=vega_version,\n vegalite_version=vegalite_version,\n vegaembed_version=vegaembed_version,\n embed_options=embed_options,\n json_kwds=json_kwds,\n **kwargs,\n )\n write_file_or_filename(fp, mimebundle[\"text/html\"], mode=\"w\")\n elif format in [\"png\", \"svg\", \"pdf\"]:\n mimebundle = spec_to_mimebundle(\n spec=spec,\n format=format,\n mode=mode,\n vega_version=vega_version,\n vegalite_version=vegalite_version,\n vegaembed_version=vegaembed_version,\n webdriver=webdriver,\n scale_factor=scale_factor,\n **kwargs,\n )\n if format == \"png\":\n write_file_or_filename(fp, mimebundle[\"image/png\"], mode=\"wb\")\n elif format == \"pdf\":\n write_file_or_filename(fp, mimebundle[\"application/pdf\"], mode=\"wb\")\n else:\n write_file_or_filename(fp, mimebundle[\"image/svg+xml\"], mode=\"w\")\n else:\n raise ValueError(\"unrecognized format: '{}'\".format(format))\n", "path": "altair/utils/save.py"}]}
| 1,726 | 369 |
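The altair patch above leans on two standard-library behaviours: `open()` accepts `os.PathLike` objects, and a file extension can be read from `Path.suffix`. A minimal standalone sketch of the same format-inference idea (plain Python, not altair code):

```python
# Minimal standalone sketch (not altair code) of the format inference the
# golden diff adds for pathlib inputs.
import pathlib


def infer_format(fp):
    if isinstance(fp, str):
        return fp.split(".")[-1]
    elif isinstance(fp, pathlib.PurePath):
        return fp.suffix.lstrip(".")
    raise ValueError("must specify file format for file-like objects")


print(infer_format("chart.png"))                     # -> 'png'
print(infer_format(pathlib.Path("out/chart.html")))  # -> 'html'

# open() takes the Path directly, so write_file_or_filename needs no str() call
with open(pathlib.Path("chart.json"), "w") as f:
    f.write("{}")
```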
gh_patches_debug_18410
|
rasdani/github-patches
|
git_diff
|
openai__openai-python-176
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[0.26.0] openai/cli.py:440: RuntimeWarning: coroutine 'FineTune.stream_events' was never awaited
```
bash$ openai api fine_tunes.create -t ./train_test_prepared.jsonl -m davinci
Upload progress: 100%|██████████████████████████████████████████████████████████████████████████████| 112/112 [00:00<00:00, 88.5kit/s]
Uploaded file from ./train_test_prepared.jsonl: file-ppEDNe0p6EomteEp3JFbBoFp
Created fine-tune: ft-u9KskmmvSnBtVc4VDfbe7lyr
Streaming events until fine-tuning is complete...
(Ctrl-C will interrupt the stream, but not cancel the fine-tune)
Stream interrupted (client disconnected).
To resume the stream, run:
openai api fine_tunes.follow -i ft-u9KskmmvSnBtVc4VDfbe7lyr
/usr/lib/python3.10/site-packages/openai/cli.py:406: RuntimeWarning: coroutine 'FineTune.stream_events' was never awaited
cls._stream_events(resp["id"])
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
it is working fine with 0.25.0
may be related: dev-python/aiohttp-3.8.3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openai/api_resources/fine_tune.py`
Content:
```
1 from urllib.parse import quote_plus
2
3 from openai import api_requestor, util, error
4 from openai.api_resources.abstract import (
5 CreateableAPIResource,
6 ListableAPIResource,
7 nested_resource_class_methods,
8 )
9 from openai.api_resources.abstract.deletable_api_resource import DeletableAPIResource
10 from openai.openai_response import OpenAIResponse
11 from openai.util import ApiType
12
13
14 @nested_resource_class_methods("event", operations=["list"])
15 class FineTune(ListableAPIResource, CreateableAPIResource, DeletableAPIResource):
16 OBJECT_NAME = "fine-tunes"
17
18 @classmethod
19 def _prepare_cancel(
20 cls,
21 id,
22 api_key=None,
23 api_type=None,
24 request_id=None,
25 api_version=None,
26 **params,
27 ):
28 base = cls.class_url()
29 extn = quote_plus(id)
30
31 typed_api_type, api_version = cls._get_api_type_and_version(
32 api_type, api_version
33 )
34 if typed_api_type in (ApiType.AZURE, ApiType.AZURE_AD):
35 url = "/%s%s/%s/cancel?api-version=%s" % (
36 cls.azure_api_prefix,
37 base,
38 extn,
39 api_version,
40 )
41 elif typed_api_type == ApiType.OPEN_AI:
42 url = "%s/%s/cancel" % (base, extn)
43 else:
44 raise error.InvalidAPIType("Unsupported API type %s" % api_type)
45
46 instance = cls(id, api_key, **params)
47 return instance, url
48
49 @classmethod
50 def cancel(
51 cls,
52 id,
53 api_key=None,
54 api_type=None,
55 request_id=None,
56 api_version=None,
57 **params,
58 ):
59 instance, url = cls._prepare_cancel(
60 id,
61 api_key,
62 api_type,
63 request_id,
64 api_version,
65 **params,
66 )
67 return instance.request("post", url, request_id=request_id)
68
69 @classmethod
70 def acancel(
71 cls,
72 id,
73 api_key=None,
74 api_type=None,
75 request_id=None,
76 api_version=None,
77 **params,
78 ):
79 instance, url = cls._prepare_cancel(
80 id,
81 api_key,
82 api_type,
83 request_id,
84 api_version,
85 **params,
86 )
87 return instance.arequest("post", url, request_id=request_id)
88
89 @classmethod
90 def _prepare_stream_events(
91 cls,
92 id,
93 api_key=None,
94 api_base=None,
95 api_type=None,
96 request_id=None,
97 api_version=None,
98 organization=None,
99 **params,
100 ):
101 base = cls.class_url()
102 extn = quote_plus(id)
103
104 requestor = api_requestor.APIRequestor(
105 api_key,
106 api_base=api_base,
107 api_type=api_type,
108 api_version=api_version,
109 organization=organization,
110 )
111
112 typed_api_type, api_version = cls._get_api_type_and_version(
113 api_type, api_version
114 )
115
116 if typed_api_type in (ApiType.AZURE, ApiType.AZURE_AD):
117 url = "/%s%s/%s/events?stream=true&api-version=%s" % (
118 cls.azure_api_prefix,
119 base,
120 extn,
121 api_version,
122 )
123 elif typed_api_type == ApiType.OPEN_AI:
124 url = "%s/%s/events?stream=true" % (base, extn)
125 else:
126 raise error.InvalidAPIType("Unsupported API type %s" % api_type)
127
128 return requestor, url
129
130 @classmethod
131 async def stream_events(
132 cls,
133 id,
134 api_key=None,
135 api_base=None,
136 api_type=None,
137 request_id=None,
138 api_version=None,
139 organization=None,
140 **params,
141 ):
142 requestor, url = cls._prepare_stream_events(
143 id,
144 api_key,
145 api_base,
146 api_type,
147 request_id,
148 api_version,
149 organization,
150 **params,
151 )
152
153 response, _, api_key = await requestor.arequest(
154 "get", url, params, stream=True, request_id=request_id
155 )
156
157 assert not isinstance(response, OpenAIResponse) # must be an iterator
158 return (
159 util.convert_to_openai_object(
160 line,
161 api_key,
162 api_version,
163 organization,
164 )
165 for line in response
166 )
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openai/api_resources/fine_tune.py b/openai/api_resources/fine_tune.py
--- a/openai/api_resources/fine_tune.py
+++ b/openai/api_resources/fine_tune.py
@@ -128,7 +128,45 @@
return requestor, url
@classmethod
- async def stream_events(
+ def stream_events(
+ cls,
+ id,
+ api_key=None,
+ api_base=None,
+ api_type=None,
+ request_id=None,
+ api_version=None,
+ organization=None,
+ **params,
+ ):
+ requestor, url = cls._prepare_stream_events(
+ id,
+ api_key,
+ api_base,
+ api_type,
+ request_id,
+ api_version,
+ organization,
+ **params,
+ )
+
+ response, _, api_key = requestor.request(
+ "get", url, params, stream=True, request_id=request_id
+ )
+
+ assert not isinstance(response, OpenAIResponse) # must be an iterator
+ return (
+ util.convert_to_openai_object(
+ line,
+ api_key,
+ api_version,
+ organization,
+ )
+ for line in response
+ )
+
+ @classmethod
+ async def astream_events(
cls,
id,
api_key=None,
|
{"golden_diff": "diff --git a/openai/api_resources/fine_tune.py b/openai/api_resources/fine_tune.py\n--- a/openai/api_resources/fine_tune.py\n+++ b/openai/api_resources/fine_tune.py\n@@ -128,7 +128,45 @@\n return requestor, url\n \n @classmethod\n- async def stream_events(\n+ def stream_events(\n+ cls,\n+ id,\n+ api_key=None,\n+ api_base=None,\n+ api_type=None,\n+ request_id=None,\n+ api_version=None,\n+ organization=None,\n+ **params,\n+ ):\n+ requestor, url = cls._prepare_stream_events(\n+ id,\n+ api_key,\n+ api_base,\n+ api_type,\n+ request_id,\n+ api_version,\n+ organization,\n+ **params,\n+ )\n+\n+ response, _, api_key = requestor.request(\n+ \"get\", url, params, stream=True, request_id=request_id\n+ )\n+\n+ assert not isinstance(response, OpenAIResponse) # must be an iterator\n+ return (\n+ util.convert_to_openai_object(\n+ line,\n+ api_key,\n+ api_version,\n+ organization,\n+ )\n+ for line in response\n+ )\n+\n+ @classmethod\n+ async def astream_events(\n cls,\n id,\n api_key=None,\n", "issue": "[0.26.0] openai/cli.py:440: RuntimeWarning: coroutine 'FineTune.stream_events' was never awaited\n```\r\nbash$ openai api fine_tunes.create -t ./train_test_prepared.jsonl -m davinci\r\nUpload progress: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 112/112 [00:00<00:00, 88.5kit/s]\r\nUploaded file from ./train_test_prepared.jsonl: file-ppEDNe0p6EomteEp3JFbBoFp\r\nCreated fine-tune: ft-u9KskmmvSnBtVc4VDfbe7lyr\r\nStreaming events until fine-tuning is complete...\r\n\r\n(Ctrl-C will interrupt the stream, but not cancel the fine-tune)\r\n\r\nStream interrupted (client disconnected).\r\nTo resume the stream, run:\r\n\r\n openai api fine_tunes.follow -i ft-u9KskmmvSnBtVc4VDfbe7lyr\r\n\r\n/usr/lib/python3.10/site-packages/openai/cli.py:406: RuntimeWarning: coroutine 'FineTune.stream_events' was never awaited\r\n cls._stream_events(resp[\"id\"])\r\nRuntimeWarning: Enable tracemalloc to get the object allocation traceback\r\n\r\n```\r\n\r\nit is working fine with 0.25.0\r\n\r\nmay be related: dev-python/aiohttp-3.8.3\r\n\n", "before_files": [{"content": "from urllib.parse import quote_plus\n\nfrom openai import api_requestor, util, error\nfrom openai.api_resources.abstract import (\n CreateableAPIResource,\n ListableAPIResource,\n nested_resource_class_methods,\n)\nfrom openai.api_resources.abstract.deletable_api_resource import DeletableAPIResource\nfrom openai.openai_response import OpenAIResponse\nfrom openai.util import ApiType\n\n\n@nested_resource_class_methods(\"event\", operations=[\"list\"])\nclass FineTune(ListableAPIResource, CreateableAPIResource, DeletableAPIResource):\n OBJECT_NAME = \"fine-tunes\"\n\n @classmethod\n def _prepare_cancel(\n cls,\n id,\n api_key=None,\n api_type=None,\n request_id=None,\n api_version=None,\n **params,\n ):\n base = cls.class_url()\n extn = quote_plus(id)\n\n typed_api_type, api_version = cls._get_api_type_and_version(\n api_type, api_version\n )\n if typed_api_type in (ApiType.AZURE, ApiType.AZURE_AD):\n url = \"/%s%s/%s/cancel?api-version=%s\" % (\n cls.azure_api_prefix,\n base,\n extn,\n api_version,\n )\n elif typed_api_type == 
ApiType.OPEN_AI:\n url = \"%s/%s/cancel\" % (base, extn)\n else:\n raise error.InvalidAPIType(\"Unsupported API type %s\" % api_type)\n\n instance = cls(id, api_key, **params)\n return instance, url\n\n @classmethod\n def cancel(\n cls,\n id,\n api_key=None,\n api_type=None,\n request_id=None,\n api_version=None,\n **params,\n ):\n instance, url = cls._prepare_cancel(\n id,\n api_key,\n api_type,\n request_id,\n api_version,\n **params,\n )\n return instance.request(\"post\", url, request_id=request_id)\n\n @classmethod\n def acancel(\n cls,\n id,\n api_key=None,\n api_type=None,\n request_id=None,\n api_version=None,\n **params,\n ):\n instance, url = cls._prepare_cancel(\n id,\n api_key,\n api_type,\n request_id,\n api_version,\n **params,\n )\n return instance.arequest(\"post\", url, request_id=request_id)\n\n @classmethod\n def _prepare_stream_events(\n cls,\n id,\n api_key=None,\n api_base=None,\n api_type=None,\n request_id=None,\n api_version=None,\n organization=None,\n **params,\n ):\n base = cls.class_url()\n extn = quote_plus(id)\n\n requestor = api_requestor.APIRequestor(\n api_key,\n api_base=api_base,\n api_type=api_type,\n api_version=api_version,\n organization=organization,\n )\n\n typed_api_type, api_version = cls._get_api_type_and_version(\n api_type, api_version\n )\n\n if typed_api_type in (ApiType.AZURE, ApiType.AZURE_AD):\n url = \"/%s%s/%s/events?stream=true&api-version=%s\" % (\n cls.azure_api_prefix,\n base,\n extn,\n api_version,\n )\n elif typed_api_type == ApiType.OPEN_AI:\n url = \"%s/%s/events?stream=true\" % (base, extn)\n else:\n raise error.InvalidAPIType(\"Unsupported API type %s\" % api_type)\n\n return requestor, url\n\n @classmethod\n async def stream_events(\n cls,\n id,\n api_key=None,\n api_base=None,\n api_type=None,\n request_id=None,\n api_version=None,\n organization=None,\n **params,\n ):\n requestor, url = cls._prepare_stream_events(\n id,\n api_key,\n api_base,\n api_type,\n request_id,\n api_version,\n organization,\n **params,\n )\n\n response, _, api_key = await requestor.arequest(\n \"get\", url, params, stream=True, request_id=request_id\n )\n\n assert not isinstance(response, OpenAIResponse) # must be an iterator\n return (\n util.convert_to_openai_object(\n line,\n api_key,\n api_version,\n organization,\n )\n for line in response\n )\n", "path": "openai/api_resources/fine_tune.py"}], "after_files": [{"content": "from urllib.parse import quote_plus\n\nfrom openai import api_requestor, util, error\nfrom openai.api_resources.abstract import (\n CreateableAPIResource,\n ListableAPIResource,\n nested_resource_class_methods,\n)\nfrom openai.api_resources.abstract.deletable_api_resource import DeletableAPIResource\nfrom openai.openai_response import OpenAIResponse\nfrom openai.util import ApiType\n\n\n@nested_resource_class_methods(\"event\", operations=[\"list\"])\nclass FineTune(ListableAPIResource, CreateableAPIResource, DeletableAPIResource):\n OBJECT_NAME = \"fine-tunes\"\n\n @classmethod\n def _prepare_cancel(\n cls,\n id,\n api_key=None,\n api_type=None,\n request_id=None,\n api_version=None,\n **params,\n ):\n base = cls.class_url()\n extn = quote_plus(id)\n\n typed_api_type, api_version = cls._get_api_type_and_version(\n api_type, api_version\n )\n if typed_api_type in (ApiType.AZURE, ApiType.AZURE_AD):\n url = \"/%s%s/%s/cancel?api-version=%s\" % (\n cls.azure_api_prefix,\n base,\n extn,\n api_version,\n )\n elif typed_api_type == ApiType.OPEN_AI:\n url = \"%s/%s/cancel\" % (base, extn)\n else:\n raise 
error.InvalidAPIType(\"Unsupported API type %s\" % api_type)\n\n instance = cls(id, api_key, **params)\n return instance, url\n\n @classmethod\n def cancel(\n cls,\n id,\n api_key=None,\n api_type=None,\n request_id=None,\n api_version=None,\n **params,\n ):\n instance, url = cls._prepare_cancel(\n id,\n api_key,\n api_type,\n request_id,\n api_version,\n **params,\n )\n return instance.request(\"post\", url, request_id=request_id)\n\n @classmethod\n def acancel(\n cls,\n id,\n api_key=None,\n api_type=None,\n request_id=None,\n api_version=None,\n **params,\n ):\n instance, url = cls._prepare_cancel(\n id,\n api_key,\n api_type,\n request_id,\n api_version,\n **params,\n )\n return instance.arequest(\"post\", url, request_id=request_id)\n\n @classmethod\n def _prepare_stream_events(\n cls,\n id,\n api_key=None,\n api_base=None,\n api_type=None,\n request_id=None,\n api_version=None,\n organization=None,\n **params,\n ):\n base = cls.class_url()\n extn = quote_plus(id)\n\n requestor = api_requestor.APIRequestor(\n api_key,\n api_base=api_base,\n api_type=api_type,\n api_version=api_version,\n organization=organization,\n )\n\n typed_api_type, api_version = cls._get_api_type_and_version(\n api_type, api_version\n )\n\n if typed_api_type in (ApiType.AZURE, ApiType.AZURE_AD):\n url = \"/%s%s/%s/events?stream=true&api-version=%s\" % (\n cls.azure_api_prefix,\n base,\n extn,\n api_version,\n )\n elif typed_api_type == ApiType.OPEN_AI:\n url = \"%s/%s/events?stream=true\" % (base, extn)\n else:\n raise error.InvalidAPIType(\"Unsupported API type %s\" % api_type)\n\n return requestor, url\n\n @classmethod\n def stream_events(\n cls,\n id,\n api_key=None,\n api_base=None,\n api_type=None,\n request_id=None,\n api_version=None,\n organization=None,\n **params,\n ):\n requestor, url = cls._prepare_stream_events(\n id,\n api_key,\n api_base,\n api_type,\n request_id,\n api_version,\n organization,\n **params,\n )\n\n response, _, api_key = requestor.request(\n \"get\", url, params, stream=True, request_id=request_id\n )\n\n assert not isinstance(response, OpenAIResponse) # must be an iterator\n return (\n util.convert_to_openai_object(\n line,\n api_key,\n api_version,\n organization,\n )\n for line in response\n )\n\n @classmethod\n async def astream_events(\n cls,\n id,\n api_key=None,\n api_base=None,\n api_type=None,\n request_id=None,\n api_version=None,\n organization=None,\n **params,\n ):\n requestor, url = cls._prepare_stream_events(\n id,\n api_key,\n api_base,\n api_type,\n request_id,\n api_version,\n organization,\n **params,\n )\n\n response, _, api_key = await requestor.arequest(\n \"get\", url, params, stream=True, request_id=request_id\n )\n\n assert not isinstance(response, OpenAIResponse) # must be an iterator\n return (\n util.convert_to_openai_object(\n line,\n api_key,\n api_version,\n organization,\n )\n for line in response\n )\n", "path": "openai/api_resources/fine_tune.py"}]}
| 1,941 | 313 |
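From the caller's side, the patch above restores a synchronous `stream_events` (which is what the CLI in the traceback calls) and moves the coroutine to a new `astream_events` name. A hedged usage sketch, inferred only from the signatures in the golden diff and assuming the legacy 0.x `openai` package with credentials already configured:

```python
# Usage sketch (an illustration inferred from the golden diff above,
# not code taken from the record; assumes the legacy 0.x openai package).
import openai


def follow(fine_tune_id: str) -> None:
    # synchronous again after the patch, so iterating it raises no
    # "coroutine ... was never awaited" RuntimeWarning
    for event in openai.FineTune.stream_events(fine_tune_id):
        print(event)


# Async callers would await the renamed coroutine instead:
#     events = await openai.FineTune.astream_events(fine_tune_id)

# follow("ft-u9KskmmvSnBtVc4VDfbe7lyr")  # fine-tune id taken from the issue log
```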
gh_patches_debug_7961
|
rasdani/github-patches
|
git_diff
|
scipy__scipy-9387
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Default value for "mode" in "ndimage.shift"
In scipy.ndimage.shift the mode's default value is 'constant', but [in the documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.shift.html#scipy.ndimage.shift) it says Default is 'reflect'.
Which of the two is wrong?
--- END ISSUE ---
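As an aside before the code segments below: the question can be settled at runtime by reading the default off the function signature rather than the rendered documentation. A small illustrative check (an assumption-laden sketch, using whatever scipy version the issue describes):

```python
# Illustrative check, not part of the record: read the runtime default of
# `mode` straight from the signature of scipy.ndimage.shift.
import inspect
from scipy import ndimage

sig = inspect.signature(ndimage.shift)
print(sig.parameters["mode"].default)  # 'constant' on the scipy version the issue reports
```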
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scipy/ndimage/_ni_docstrings.py`
Content:
```
1 """Docstring components common to several ndimage functions."""
2 from __future__ import division, print_function, absolute_import
3
4 from scipy.misc import doccer
5
6 __all__ = ['docfiller']
7
8
9 _input_doc = (
10 """input : array_like
11 The input array.""")
12 _axis_doc = (
13 """axis : int, optional
14 The axis of `input` along which to calculate. Default is -1.""")
15 _output_doc = (
16 """output : array or dtype, optional
17 The array in which to place the output, or the dtype of the
18 returned array. By default an array of the same dtype as input
19 will be created.""")
20 _size_foot_doc = (
21 """size : scalar or tuple, optional
22 See footprint, below. Ignored if footprint is given.
23 footprint : array, optional
24 Either `size` or `footprint` must be defined. `size` gives
25 the shape that is taken from the input array, at every element
26 position, to define the input to the filter function.
27 `footprint` is a boolean array that specifies (implicitly) a
28 shape, but also which of the elements within this shape will get
29 passed to the filter function. Thus ``size=(n,m)`` is equivalent
30 to ``footprint=np.ones((n,m))``. We adjust `size` to the number
31 of dimensions of the input array, so that, if the input array is
32 shape (10,10,10), and `size` is 2, then the actual size used is
33 (2,2,2). When `footprint` is given, `size` is ignored.""")
34 _mode_doc = (
35 """mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
36 The `mode` parameter determines how the input array is extended
37 when the filter overlaps a border. Default is 'reflect'. Behavior
38 for each valid value is as follows:
39
40 'reflect' (`d c b a | a b c d | d c b a`)
41 The input is extended by reflecting about the edge of the last
42 pixel.
43
44 'constant' (`k k k k | a b c d | k k k k`)
45 The input is extended by filling all values beyond the edge with
46 the same constant value, defined by the `cval` parameter.
47
48 'nearest' (`a a a a | a b c d | d d d d`)
49 The input is extended by replicating the last pixel.
50
51 'mirror' (`d c b | a b c d | c b a`)
52 The input is extended by reflecting about the center of the last
53 pixel.
54
55 'wrap' (`a b c d | a b c d | a b c d`)
56 The input is extended by wrapping around to the opposite edge.""")
57 _mode_multiple_doc = (
58 """mode : str or sequence, optional
59 The `mode` parameter determines how the input array is extended
60 when the filter overlaps a border. By passing a sequence of modes
61 with length equal to the number of dimensions of the input array,
62 different modes can be specified along each axis. Default value is
63 'reflect'. The valid values and their behavior is as follows:
64
65 'reflect' (`d c b a | a b c d | d c b a`)
66 The input is extended by reflecting about the edge of the last
67 pixel.
68
69 'constant' (`k k k k | a b c d | k k k k`)
70 The input is extended by filling all values beyond the edge with
71 the same constant value, defined by the `cval` parameter.
72
73 'nearest' (`a a a a | a b c d | d d d d`)
74 The input is extended by replicating the last pixel.
75
76 'mirror' (`d c b | a b c d | c b a`)
77 The input is extended by reflecting about the center of the last
78 pixel.
79
80 'wrap' (`a b c d | a b c d | a b c d`)
81 The input is extended by wrapping around to the opposite edge.""")
82 _cval_doc = (
83 """cval : scalar, optional
84 Value to fill past edges of input if `mode` is 'constant'. Default
85 is 0.0.""")
86 _origin_doc = (
87 """origin : int, optional
88 Controls the placement of the filter on the input array's pixels.
89 A value of 0 (the default) centers the filter over the pixel, with
90 positive values shifting the filter to the left, and negative ones
91 to the right.""")
92 _origin_multiple_doc = (
93 """origin : int or sequence, optional
94 Controls the placement of the filter on the input array's pixels.
95 A value of 0 (the default) centers the filter over the pixel, with
96 positive values shifting the filter to the left, and negative ones
97 to the right. By passing a sequence of origins with length equal to
98 the number of dimensions of the input array, different shifts can
99 be specified along each axis.""")
100 _extra_arguments_doc = (
101 """extra_arguments : sequence, optional
102 Sequence of extra positional arguments to pass to passed function.""")
103 _extra_keywords_doc = (
104 """extra_keywords : dict, optional
105 dict of extra keyword arguments to pass to passed function.""")
106 _prefilter_doc = (
107 """prefilter : bool, optional
108 Determines if the input array is prefiltered with `spline_filter`
109 before interpolation. The default is True, which will create a
110 temporary `float64` array of filtered values if `order > 1`. If
111 setting this to False, the output will be slightly blurred if
112 `order > 1`, unless the input is prefiltered, i.e. it is the result
113 of calling `spline_filter` on the original input.""")
114
115 docdict = {
116 'input': _input_doc,
117 'axis': _axis_doc,
118 'output': _output_doc,
119 'size_foot': _size_foot_doc,
120 'mode': _mode_doc,
121 'mode_multiple': _mode_multiple_doc,
122 'cval': _cval_doc,
123 'origin': _origin_doc,
124 'origin_multiple': _origin_multiple_doc,
125 'extra_arguments': _extra_arguments_doc,
126 'extra_keywords': _extra_keywords_doc,
127 'prefilter': _prefilter_doc
128 }
129
130 docfiller = doccer.filldoc(docdict)
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scipy/ndimage/_ni_docstrings.py b/scipy/ndimage/_ni_docstrings.py
--- a/scipy/ndimage/_ni_docstrings.py
+++ b/scipy/ndimage/_ni_docstrings.py
@@ -34,7 +34,7 @@
_mode_doc = (
"""mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
The `mode` parameter determines how the input array is extended
- when the filter overlaps a border. Default is 'reflect'. Behavior
+ when the filter overlaps a border. Default is 'constant'. Behavior
for each valid value is as follows:
'reflect' (`d c b a | a b c d | d c b a`)
|
{"golden_diff": "diff --git a/scipy/ndimage/_ni_docstrings.py b/scipy/ndimage/_ni_docstrings.py\n--- a/scipy/ndimage/_ni_docstrings.py\n+++ b/scipy/ndimage/_ni_docstrings.py\n@@ -34,7 +34,7 @@\n _mode_doc = (\n \"\"\"mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional\n The `mode` parameter determines how the input array is extended\n- when the filter overlaps a border. Default is 'reflect'. Behavior\n+ when the filter overlaps a border. Default is 'constant'. Behavior\n for each valid value is as follows:\n \n 'reflect' (`d c b a | a b c d | d c b a`)\n", "issue": "Default value for \"mode\" in \"ndimage.shift\"\nIn scipy.ndimage.shift the mode's default value is 'constant', but [in the documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.shift.html#scipy.ndimage.shift) it says Default is 'reflect'.\r\nWhich of the two is wrong?\n", "before_files": [{"content": "\"\"\"Docstring components common to several ndimage functions.\"\"\"\nfrom __future__ import division, print_function, absolute_import\n\nfrom scipy.misc import doccer\n\n__all__ = ['docfiller']\n\n\n_input_doc = (\n\"\"\"input : array_like\n The input array.\"\"\")\n_axis_doc = (\n\"\"\"axis : int, optional\n The axis of `input` along which to calculate. Default is -1.\"\"\")\n_output_doc = (\n\"\"\"output : array or dtype, optional\n The array in which to place the output, or the dtype of the\n returned array. By default an array of the same dtype as input\n will be created.\"\"\")\n_size_foot_doc = (\n\"\"\"size : scalar or tuple, optional\n See footprint, below. Ignored if footprint is given.\nfootprint : array, optional\n Either `size` or `footprint` must be defined. `size` gives\n the shape that is taken from the input array, at every element\n position, to define the input to the filter function.\n `footprint` is a boolean array that specifies (implicitly) a\n shape, but also which of the elements within this shape will get\n passed to the filter function. Thus ``size=(n,m)`` is equivalent\n to ``footprint=np.ones((n,m))``. We adjust `size` to the number\n of dimensions of the input array, so that, if the input array is\n shape (10,10,10), and `size` is 2, then the actual size used is\n (2,2,2). When `footprint` is given, `size` is ignored.\"\"\")\n_mode_doc = (\n\"\"\"mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional\n The `mode` parameter determines how the input array is extended\n when the filter overlaps a border. Default is 'reflect'. Behavior\n for each valid value is as follows:\n\n 'reflect' (`d c b a | a b c d | d c b a`)\n The input is extended by reflecting about the edge of the last\n pixel.\n\n 'constant' (`k k k k | a b c d | k k k k`)\n The input is extended by filling all values beyond the edge with\n the same constant value, defined by the `cval` parameter.\n\n 'nearest' (`a a a a | a b c d | d d d d`)\n The input is extended by replicating the last pixel.\n\n 'mirror' (`d c b | a b c d | c b a`)\n The input is extended by reflecting about the center of the last\n pixel.\n\n 'wrap' (`a b c d | a b c d | a b c d`)\n The input is extended by wrapping around to the opposite edge.\"\"\")\n_mode_multiple_doc = (\n\"\"\"mode : str or sequence, optional\n The `mode` parameter determines how the input array is extended\n when the filter overlaps a border. By passing a sequence of modes\n with length equal to the number of dimensions of the input array,\n different modes can be specified along each axis. Default value is\n 'reflect'. 
The valid values and their behavior is as follows:\n\n 'reflect' (`d c b a | a b c d | d c b a`)\n The input is extended by reflecting about the edge of the last\n pixel.\n\n 'constant' (`k k k k | a b c d | k k k k`)\n The input is extended by filling all values beyond the edge with\n the same constant value, defined by the `cval` parameter.\n\n 'nearest' (`a a a a | a b c d | d d d d`)\n The input is extended by replicating the last pixel.\n\n 'mirror' (`d c b | a b c d | c b a`)\n The input is extended by reflecting about the center of the last\n pixel.\n\n 'wrap' (`a b c d | a b c d | a b c d`)\n The input is extended by wrapping around to the opposite edge.\"\"\")\n_cval_doc = (\n\"\"\"cval : scalar, optional\n Value to fill past edges of input if `mode` is 'constant'. Default\n is 0.0.\"\"\")\n_origin_doc = (\n\"\"\"origin : int, optional\n Controls the placement of the filter on the input array's pixels.\n A value of 0 (the default) centers the filter over the pixel, with\n positive values shifting the filter to the left, and negative ones\n to the right.\"\"\")\n_origin_multiple_doc = (\n\"\"\"origin : int or sequence, optional\n Controls the placement of the filter on the input array's pixels.\n A value of 0 (the default) centers the filter over the pixel, with\n positive values shifting the filter to the left, and negative ones\n to the right. By passing a sequence of origins with length equal to\n the number of dimensions of the input array, different shifts can\n be specified along each axis.\"\"\")\n_extra_arguments_doc = (\n\"\"\"extra_arguments : sequence, optional\n Sequence of extra positional arguments to pass to passed function.\"\"\")\n_extra_keywords_doc = (\n\"\"\"extra_keywords : dict, optional\n dict of extra keyword arguments to pass to passed function.\"\"\")\n_prefilter_doc = (\n\"\"\"prefilter : bool, optional\n Determines if the input array is prefiltered with `spline_filter`\n before interpolation. The default is True, which will create a\n temporary `float64` array of filtered values if `order > 1`. If\n setting this to False, the output will be slightly blurred if\n `order > 1`, unless the input is prefiltered, i.e. it is the result\n of calling `spline_filter` on the original input.\"\"\")\n\ndocdict = {\n 'input': _input_doc,\n 'axis': _axis_doc,\n 'output': _output_doc,\n 'size_foot': _size_foot_doc,\n 'mode': _mode_doc,\n 'mode_multiple': _mode_multiple_doc,\n 'cval': _cval_doc,\n 'origin': _origin_doc,\n 'origin_multiple': _origin_multiple_doc,\n 'extra_arguments': _extra_arguments_doc,\n 'extra_keywords': _extra_keywords_doc,\n 'prefilter': _prefilter_doc\n }\n\ndocfiller = doccer.filldoc(docdict)\n", "path": "scipy/ndimage/_ni_docstrings.py"}], "after_files": [{"content": "\"\"\"Docstring components common to several ndimage functions.\"\"\"\nfrom __future__ import division, print_function, absolute_import\n\nfrom scipy.misc import doccer\n\n__all__ = ['docfiller']\n\n\n_input_doc = (\n\"\"\"input : array_like\n The input array.\"\"\")\n_axis_doc = (\n\"\"\"axis : int, optional\n The axis of `input` along which to calculate. Default is -1.\"\"\")\n_output_doc = (\n\"\"\"output : array or dtype, optional\n The array in which to place the output, or the dtype of the\n returned array. By default an array of the same dtype as input\n will be created.\"\"\")\n_size_foot_doc = (\n\"\"\"size : scalar or tuple, optional\n See footprint, below. Ignored if footprint is given.\nfootprint : array, optional\n Either `size` or `footprint` must be defined. 
`size` gives\n the shape that is taken from the input array, at every element\n position, to define the input to the filter function.\n `footprint` is a boolean array that specifies (implicitly) a\n shape, but also which of the elements within this shape will get\n passed to the filter function. Thus ``size=(n,m)`` is equivalent\n to ``footprint=np.ones((n,m))``. We adjust `size` to the number\n of dimensions of the input array, so that, if the input array is\n shape (10,10,10), and `size` is 2, then the actual size used is\n (2,2,2). When `footprint` is given, `size` is ignored.\"\"\")\n_mode_doc = (\n\"\"\"mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional\n The `mode` parameter determines how the input array is extended\n when the filter overlaps a border. Default is 'constant'. Behavior\n for each valid value is as follows:\n\n 'reflect' (`d c b a | a b c d | d c b a`)\n The input is extended by reflecting about the edge of the last\n pixel.\n\n 'constant' (`k k k k | a b c d | k k k k`)\n The input is extended by filling all values beyond the edge with\n the same constant value, defined by the `cval` parameter.\n\n 'nearest' (`a a a a | a b c d | d d d d`)\n The input is extended by replicating the last pixel.\n\n 'mirror' (`d c b | a b c d | c b a`)\n The input is extended by reflecting about the center of the last\n pixel.\n\n 'wrap' (`a b c d | a b c d | a b c d`)\n The input is extended by wrapping around to the opposite edge.\"\"\")\n_mode_multiple_doc = (\n\"\"\"mode : str or sequence, optional\n The `mode` parameter determines how the input array is extended\n when the filter overlaps a border. By passing a sequence of modes\n with length equal to the number of dimensions of the input array,\n different modes can be specified along each axis. Default value is\n 'reflect'. The valid values and their behavior is as follows:\n\n 'reflect' (`d c b a | a b c d | d c b a`)\n The input is extended by reflecting about the edge of the last\n pixel.\n\n 'constant' (`k k k k | a b c d | k k k k`)\n The input is extended by filling all values beyond the edge with\n the same constant value, defined by the `cval` parameter.\n\n 'nearest' (`a a a a | a b c d | d d d d`)\n The input is extended by replicating the last pixel.\n\n 'mirror' (`d c b | a b c d | c b a`)\n The input is extended by reflecting about the center of the last\n pixel.\n\n 'wrap' (`a b c d | a b c d | a b c d`)\n The input is extended by wrapping around to the opposite edge.\"\"\")\n_cval_doc = (\n\"\"\"cval : scalar, optional\n Value to fill past edges of input if `mode` is 'constant'. Default\n is 0.0.\"\"\")\n_origin_doc = (\n\"\"\"origin : int, optional\n Controls the placement of the filter on the input array's pixels.\n A value of 0 (the default) centers the filter over the pixel, with\n positive values shifting the filter to the left, and negative ones\n to the right.\"\"\")\n_origin_multiple_doc = (\n\"\"\"origin : int or sequence, optional\n Controls the placement of the filter on the input array's pixels.\n A value of 0 (the default) centers the filter over the pixel, with\n positive values shifting the filter to the left, and negative ones\n to the right. 
By passing a sequence of origins with length equal to\n the number of dimensions of the input array, different shifts can\n be specified along each axis.\"\"\")\n_extra_arguments_doc = (\n\"\"\"extra_arguments : sequence, optional\n Sequence of extra positional arguments to pass to passed function.\"\"\")\n_extra_keywords_doc = (\n\"\"\"extra_keywords : dict, optional\n dict of extra keyword arguments to pass to passed function.\"\"\")\n_prefilter_doc = (\n\"\"\"prefilter : bool, optional\n Determines if the input array is prefiltered with `spline_filter`\n before interpolation. The default is True, which will create a\n temporary `float64` array of filtered values if `order > 1`. If\n setting this to False, the output will be slightly blurred if\n `order > 1`, unless the input is prefiltered, i.e. it is the result\n of calling `spline_filter` on the original input.\"\"\")\n\ndocdict = {\n 'input': _input_doc,\n 'axis': _axis_doc,\n 'output': _output_doc,\n 'size_foot': _size_foot_doc,\n 'mode': _mode_doc,\n 'mode_multiple': _mode_multiple_doc,\n 'cval': _cval_doc,\n 'origin': _origin_doc,\n 'origin_multiple': _origin_multiple_doc,\n 'extra_arguments': _extra_arguments_doc,\n 'extra_keywords': _extra_keywords_doc,\n 'prefilter': _prefilter_doc\n }\n\ndocfiller = doccer.filldoc(docdict)\n", "path": "scipy/ndimage/_ni_docstrings.py"}]}
| 2,017 | 165 |
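A quick way to see the behaviour the patch above documents: with its default arguments, `scipy.ndimage.shift` fills vacated border positions with the constant `cval` (0.0) rather than reflecting. The snippet below is an illustrative sketch, not part of the record; it assumes NumPy and SciPy are installed, and `order=1` is used only so the integer shift stays exact.

```python
import numpy as np
from scipy.ndimage import shift

data = np.array([1.0, 2.0, 3.0, 4.0])

# mode is left at its default ('constant'): the vacated slot is filled with cval=0.0
print(shift(data, 1, order=1))                   # [0. 1. 2. 3.]

# asking for 'reflect' explicitly fills the border from the reflected input instead
print(shift(data, 1, order=1, mode="reflect"))   # [1. 1. 2. 3.]
```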
gh_patches_debug_1091
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-5926
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.mangomolo: error: No plugin can handle URL
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
[cli][info] Your Streamlink version (6.7.2) is up to date!
### Description
Unable to get stream for Kuwaiti channels.. error message: "error: No plugin can handle URL:"
sample URLs:
https://www.media.gov.kw/LiveTV.aspx
https://www.media.gov.kw/LiveTV.aspx?PanChannel=Drama
### Debug log
```text
user@desktop:~ $ streamlink https://www.media.gov.kw/LiveTV.aspx --loglevel=debug
[cli][debug] OS: Linux-6.1.21+-armv6l-with-glibc2.31
[cli][debug] Python: 3.9.2
[cli][debug] OpenSSL: OpenSSL 1.1.1w 11 Sep 2023
[cli][debug] Streamlink: 6.7.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2023.7.22
[cli][debug] exceptiongroup: 1.1.3
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.3
[cli][debug] pycountry: 20.7.3
[cli][debug] pycryptodome: 3.18.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.31.0
[cli][debug] trio: 0.22.2
[cli][debug] trio-websocket: 0.10.3
[cli][debug] typing-extensions: 4.7.1
[cli][debug] urllib3: 2.0.4
[cli][debug] websocket-client: 1.6.2
[cli][debug] Arguments:
[cli][debug] url=https://www.media.gov.kw/LiveTV.aspx
[cli][debug] --loglevel=debug
error: No plugin can handle URL: https://www.media.gov.kw/LiveTV.aspx
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/mangomolo.py`
Content:
```
1 """
2 $description OTT video platform owned by Alpha Technology Group
3 $url player.mangomolo.com
4 $url media.gov.kw
5 $type live
6 """
7
8 import logging
9 import re
10
11 from streamlink.exceptions import NoStreamsError
12 from streamlink.plugin import Plugin, pluginmatcher
13 from streamlink.plugin.api import validate
14 from streamlink.stream import HLSStream
15 from streamlink.utils.url import update_scheme
16
17
18 log = logging.getLogger(__name__)
19
20
21 @pluginmatcher(
22 name="mangomoloplayer",
23 pattern=re.compile(r"https?://player\.mangomolo\.com/v1/"),
24 )
25 @pluginmatcher(
26 name="mediagovkw",
27 pattern=re.compile(r"https?://media\.gov\.kw/"),
28 )
29 class Mangomolo(Plugin):
30 def _get_player_url(self):
31 player_url = self.session.http.get(self.url, schema=validate.Schema(
32 validate.parse_html(),
33 validate.xml_xpath_string(".//iframe[contains(@src,'//player.mangomolo.com/v1/')][1]/@src"),
34 ))
35 if not player_url:
36 log.error("Could not find embedded player")
37 raise NoStreamsError
38
39 self.url = update_scheme("https://", player_url)
40
41 def _get_streams(self):
42 headers = {}
43 if not self.matches["mangomoloplayer"]:
44 headers["Referer"] = self.url
45 self._get_player_url()
46
47 hls_url = self.session.http.get(self.url, headers=headers, schema=validate.Schema(
48 re.compile(r"src\s*:\s*(?P<q>[\"'])(?P<url>https?://\S+?\.m3u8\S*?)(?P=q)"),
49 validate.none_or_all(validate.get("url")),
50 ))
51 if hls_url:
52 return HLSStream.parse_variant_playlist(self.session, hls_url)
53
54
55 __plugin__ = Mangomolo
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/mangomolo.py b/src/streamlink/plugins/mangomolo.py
--- a/src/streamlink/plugins/mangomolo.py
+++ b/src/streamlink/plugins/mangomolo.py
@@ -24,7 +24,7 @@
)
@pluginmatcher(
name="mediagovkw",
- pattern=re.compile(r"https?://media\.gov\.kw/"),
+ pattern=re.compile(r"https?://(www\.)?media\.gov\.kw/"),
)
class Mangomolo(Plugin):
def _get_player_url(self):
|
{"golden_diff": "diff --git a/src/streamlink/plugins/mangomolo.py b/src/streamlink/plugins/mangomolo.py\n--- a/src/streamlink/plugins/mangomolo.py\n+++ b/src/streamlink/plugins/mangomolo.py\n@@ -24,7 +24,7 @@\n )\n @pluginmatcher(\n name=\"mediagovkw\",\n- pattern=re.compile(r\"https?://media\\.gov\\.kw/\"),\n+ pattern=re.compile(r\"https?://(www\\.)?media\\.gov\\.kw/\"),\n )\n class Mangomolo(Plugin):\n def _get_player_url(self):\n", "issue": "plugins.mangomolo: error: No plugin can handle URL\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\n[cli][info] Your Streamlink version (6.7.2) is up to date!\n\n### Description\n\nUnable to get stream for Kuwaiti channels.. error message: \"error: No plugin can handle URL:\"\r\nsample URLs:\r\n https://www.media.gov.kw/LiveTV.aspx\r\n https://www.media.gov.kw/LiveTV.aspx?PanChannel=Drama\r\n\n\n### Debug log\n\n```text\nuser@desktop:~ $ streamlink https://www.media.gov.kw/LiveTV.aspx --loglevel=debug\r\n[cli][debug] OS: Linux-6.1.21+-armv6l-with-glibc2.31\r\n[cli][debug] Python: 3.9.2\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1w 11 Sep 2023\r\n[cli][debug] Streamlink: 6.7.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.7.22\r\n[cli][debug] exceptiongroup: 1.1.3\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.3\r\n[cli][debug] pycountry: 20.7.3\r\n[cli][debug] pycryptodome: 3.18.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] trio: 0.22.2\r\n[cli][debug] trio-websocket: 0.10.3\r\n[cli][debug] typing-extensions: 4.7.1\r\n[cli][debug] urllib3: 2.0.4\r\n[cli][debug] websocket-client: 1.6.2\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.media.gov.kw/LiveTV.aspx\r\n[cli][debug] --loglevel=debug\r\nerror: No plugin can handle URL: https://www.media.gov.kw/LiveTV.aspx\n```\n\n", "before_files": [{"content": "\"\"\"\n$description OTT video platform owned by Alpha Technology Group\n$url player.mangomolo.com\n$url media.gov.kw\n$type live\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.exceptions import NoStreamsError\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils.url import update_scheme\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(\n name=\"mangomoloplayer\",\n pattern=re.compile(r\"https?://player\\.mangomolo\\.com/v1/\"),\n)\n@pluginmatcher(\n name=\"mediagovkw\",\n pattern=re.compile(r\"https?://media\\.gov\\.kw/\"),\n)\nclass Mangomolo(Plugin):\n def _get_player_url(self):\n player_url = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//iframe[contains(@src,'//player.mangomolo.com/v1/')][1]/@src\"),\n ))\n if not player_url:\n log.error(\"Could not find embedded player\")\n raise NoStreamsError\n\n self.url = update_scheme(\"https://\", player_url)\n\n def _get_streams(self):\n headers = {}\n if not 
self.matches[\"mangomoloplayer\"]:\n headers[\"Referer\"] = self.url\n self._get_player_url()\n\n hls_url = self.session.http.get(self.url, headers=headers, schema=validate.Schema(\n re.compile(r\"src\\s*:\\s*(?P<q>[\\\"'])(?P<url>https?://\\S+?\\.m3u8\\S*?)(?P=q)\"),\n validate.none_or_all(validate.get(\"url\")),\n ))\n if hls_url:\n return HLSStream.parse_variant_playlist(self.session, hls_url)\n\n\n__plugin__ = Mangomolo\n", "path": "src/streamlink/plugins/mangomolo.py"}], "after_files": [{"content": "\"\"\"\n$description OTT video platform owned by Alpha Technology Group\n$url player.mangomolo.com\n$url media.gov.kw\n$type live\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.exceptions import NoStreamsError\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils.url import update_scheme\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(\n name=\"mangomoloplayer\",\n pattern=re.compile(r\"https?://player\\.mangomolo\\.com/v1/\"),\n)\n@pluginmatcher(\n name=\"mediagovkw\",\n pattern=re.compile(r\"https?://(www\\.)?media\\.gov\\.kw/\"),\n)\nclass Mangomolo(Plugin):\n def _get_player_url(self):\n player_url = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//iframe[contains(@src,'//player.mangomolo.com/v1/')][1]/@src\"),\n ))\n if not player_url:\n log.error(\"Could not find embedded player\")\n raise NoStreamsError\n\n self.url = update_scheme(\"https://\", player_url)\n\n def _get_streams(self):\n headers = {}\n if not self.matches[\"mangomoloplayer\"]:\n headers[\"Referer\"] = self.url\n self._get_player_url()\n\n hls_url = self.session.http.get(self.url, headers=headers, schema=validate.Schema(\n re.compile(r\"src\\s*:\\s*(?P<q>[\\\"'])(?P<url>https?://\\S+?\\.m3u8\\S*?)(?P=q)\"),\n validate.none_or_all(validate.get(\"url\")),\n ))\n if hls_url:\n return HLSStream.parse_variant_playlist(self.session, hls_url)\n\n\n__plugin__ = Mangomolo\n", "path": "src/streamlink/plugins/mangomolo.py"}]}
| 1,437 | 126 |
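The fix above is purely a host-matching regex change, and it can be checked with the standard library alone; the sketch below is an illustration that assumes nothing beyond Python's `re` module.

```python
import re

old = re.compile(r"https?://media\.gov\.kw/")
new = re.compile(r"https?://(www\.)?media\.gov\.kw/")

url = "https://www.media.gov.kw/LiveTV.aspx"

print(bool(old.match(url)))  # False: the pattern does not allow the 'www.' prefix
print(bool(new.match(url)))  # True: the optional (www\.)? group accepts both hosts
```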
gh_patches_debug_24491
|
rasdani/github-patches
|
git_diff
|
safe-global__safe-config-service-6
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bootstrap database with default data for the /safe-apps endpoint
When the database is created and the service is running, we should offer the option/utility to bootstrap it with the default list of safe-apps.
Default database should contain the following data: https://github.com/gnosis/safe-apps-list/blob/development/public/gnosis-default.applist.json
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/safe_apps/models.py`
Content:
```
1 from django.contrib.postgres.fields import ArrayField
2 from django.db import models
3
4
5 class SafeApp(models.Model):
6 url = models.URLField()
7 name = models.CharField(max_length=200)
8 icon_url = models.URLField()
9 description = models.CharField(max_length=200)
10 networks = ArrayField(models.IntegerField())
11
```
Path: `src/safe_apps/migrations/0001_initial.py`
Content:
```
1 # Generated by Django 3.2 on 2021-04-16 14:52
2
3 import django.contrib.postgres.fields
4 from django.db import migrations, models
5
6
7 class Migration(migrations.Migration):
8
9 initial = True
10
11 dependencies = [
12 ]
13
14 operations = [
15 migrations.CreateModel(
16 name='SafeApp',
17 fields=[
18 ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
19 ('url', models.URLField()),
20 ('name', models.CharField(max_length=200)),
21 ('icon_url', models.URLField()),
22 ('description', models.CharField(max_length=200)),
23 ('networks', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), size=None)),
24 ],
25 ),
26 ]
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/safe_apps/migrations/0001_initial.py b/src/safe_apps/migrations/0001_initial.py
--- a/src/safe_apps/migrations/0001_initial.py
+++ b/src/safe_apps/migrations/0001_initial.py
@@ -1,4 +1,4 @@
-# Generated by Django 3.2 on 2021-04-16 14:52
+# Generated by Django 3.2 on 2021-04-28 13:39
import django.contrib.postgres.fields
from django.db import migrations, models
@@ -15,8 +15,7 @@
migrations.CreateModel(
name='SafeApp',
fields=[
- ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
- ('url', models.URLField()),
+ ('url', models.URLField(primary_key=True, serialize=False)),
('name', models.CharField(max_length=200)),
('icon_url', models.URLField()),
('description', models.CharField(max_length=200)),
diff --git a/src/safe_apps/models.py b/src/safe_apps/models.py
--- a/src/safe_apps/models.py
+++ b/src/safe_apps/models.py
@@ -3,7 +3,7 @@
class SafeApp(models.Model):
- url = models.URLField()
+ url = models.URLField(primary_key=True)
name = models.CharField(max_length=200)
icon_url = models.URLField()
description = models.CharField(max_length=200)
|
{"golden_diff": "diff --git a/src/safe_apps/migrations/0001_initial.py b/src/safe_apps/migrations/0001_initial.py\n--- a/src/safe_apps/migrations/0001_initial.py\n+++ b/src/safe_apps/migrations/0001_initial.py\n@@ -1,4 +1,4 @@\n-# Generated by Django 3.2 on 2021-04-16 14:52\n+# Generated by Django 3.2 on 2021-04-28 13:39\n \n import django.contrib.postgres.fields\n from django.db import migrations, models\n@@ -15,8 +15,7 @@\n migrations.CreateModel(\n name='SafeApp',\n fields=[\n- ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n- ('url', models.URLField()),\n+ ('url', models.URLField(primary_key=True, serialize=False)),\n ('name', models.CharField(max_length=200)),\n ('icon_url', models.URLField()),\n ('description', models.CharField(max_length=200)),\ndiff --git a/src/safe_apps/models.py b/src/safe_apps/models.py\n--- a/src/safe_apps/models.py\n+++ b/src/safe_apps/models.py\n@@ -3,7 +3,7 @@\n \n \n class SafeApp(models.Model):\n- url = models.URLField()\n+ url = models.URLField(primary_key=True)\n name = models.CharField(max_length=200)\n icon_url = models.URLField()\n description = models.CharField(max_length=200)\n", "issue": "Bootstrap database with default data for the /safe-apps endpoint \nWhen the database is created and the service is running, we should offer the option/utility to bootstrap it with the default list of safe-apps.\r\n\r\nDefault database should contain the following data: https://github.com/gnosis/safe-apps-list/blob/development/public/gnosis-default.applist.json\n", "before_files": [{"content": "from django.contrib.postgres.fields import ArrayField\nfrom django.db import models\n\n\nclass SafeApp(models.Model):\n url = models.URLField()\n name = models.CharField(max_length=200)\n icon_url = models.URLField()\n description = models.CharField(max_length=200)\n networks = ArrayField(models.IntegerField())\n", "path": "src/safe_apps/models.py"}, {"content": "# Generated by Django 3.2 on 2021-04-16 14:52\n\nimport django.contrib.postgres.fields\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='SafeApp',\n fields=[\n ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('url', models.URLField()),\n ('name', models.CharField(max_length=200)),\n ('icon_url', models.URLField()),\n ('description', models.CharField(max_length=200)),\n ('networks', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), size=None)),\n ],\n ),\n ]\n", "path": "src/safe_apps/migrations/0001_initial.py"}], "after_files": [{"content": "from django.contrib.postgres.fields import ArrayField\nfrom django.db import models\n\n\nclass SafeApp(models.Model):\n url = models.URLField(primary_key=True)\n name = models.CharField(max_length=200)\n icon_url = models.URLField()\n description = models.CharField(max_length=200)\n networks = ArrayField(models.IntegerField())\n", "path": "src/safe_apps/models.py"}, {"content": "# Generated by Django 3.2 on 2021-04-28 13:39\n\nimport django.contrib.postgres.fields\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='SafeApp',\n fields=[\n ('url', models.URLField(primary_key=True, serialize=False)),\n ('name', models.CharField(max_length=200)),\n ('icon_url', models.URLField()),\n 
('description', models.CharField(max_length=200)),\n ('networks', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), size=None)),\n ],\n ),\n ]\n", "path": "src/safe_apps/migrations/0001_initial.py"}]}
| 662 | 358 |
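The schema change above makes `url` the primary key, but the issue itself asks for bootstrapping default safe-apps data. One common way to do that in Django is a data migration run with `RunPython`; the sketch below only illustrates that approach, with placeholder entries and an assumed `0001_initial` dependency, and is not the project's actual solution.

```python
from django.db import migrations

# Illustrative defaults only; the real list would come from gnosis-default.applist.json
DEFAULT_APPS = [
    {
        "url": "https://example-safe-app.invalid",
        "name": "Example App",
        "icon_url": "https://example-safe-app.invalid/icon.png",
        "description": "Placeholder entry",
        "networks": [1, 4],
    },
]


def add_default_safe_apps(apps, schema_editor):
    SafeApp = apps.get_model("safe_apps", "SafeApp")
    for entry in DEFAULT_APPS:
        defaults = {key: value for key, value in entry.items() if key != "url"}
        SafeApp.objects.get_or_create(url=entry["url"], defaults=defaults)


def remove_default_safe_apps(apps, schema_editor):
    SafeApp = apps.get_model("safe_apps", "SafeApp")
    SafeApp.objects.filter(url__in=[entry["url"] for entry in DEFAULT_APPS]).delete()


class Migration(migrations.Migration):

    dependencies = [("safe_apps", "0001_initial")]  # assumed predecessor migration

    operations = [
        migrations.RunPython(add_default_safe_apps, remove_default_safe_apps),
    ]
```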
gh_patches_debug_38167
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-2031
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Self-Implemented Metrics Behaved Weirdly in DDP Mode
## 🐛 Bug
Self-implemented nDCG@5 is larger than nDCG@1 in DDP mode. What I was trying to do here is to record the sum of sample-wise scores **_s_** and the number of samples **_n_**. At the end of batch/epoch, I can obtain the mean of nDCG@k by dividing **_s_** by **_n_**.
My implementation looks very similar to the example on [IMPLEMENTING A METRIC](https://torchmetrics.readthedocs.io/en/stable/pages/implement.html). However, It failed in distributed settings.
`Epoch 8: 60%|██████ | 1541/2555 [04:58<03:16, 5.17it/s, loss=0.00354, v_num=213259, train_ndcg@1=0.656, train_ndcg@3=0.675, train_ndcg@5=0.698,...`
### To Reproduce
<details>
<summary>Self-implemented Metric</summary>
```python
from torchmetrics.functional.retrieval import retrieval_normalized_dcg
from torchmetrics import Metric
import torch
from torch import Tensor
class NDCG(Metric):
full_state_update = False
def __init__(self, top_k: int):
super().__init__()
self.top_k = top_k
self.add_state("ndcg", default=torch.tensor(0, dtype=torch.float), dist_reduce_fx="sum")
self.add_state("n", default=torch.tensor(0), dist_reduce_fx="sum")
def update(self, preds: Tensor, target: Tensor):
# preds and target are 2-D tensors.
assert preds.shape == target.shape
self.ndcg += torch.stack([retrieval_normalized_dcg(p, t, k=self.top_k) for p, t in zip(preds, target)]).sum()
self.n += len(preds)
def compute(self):
return self.ndcg / self.n
```
</details>
### Expected behavior
The documentation of self-implemented metric looked too simple and probably missed some details of distributed metrics.
I noticed that you used list and in `compute()` you call mean() to get the average score of a given batch. Can you please explain the difference between these two implementations?
### Environment
- TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source):
0.10.3 (conda-forge)
- Python version:
3.10.12
PyTorch version (e.g., 1.0):
1.13.1
- Any other relevant information such as OS (e.g., Linux):
pytorch-lightning: 1.9.4
Linux: 5.15.0-78-generic
### Additional context
<!-- Add any other context about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/functional/retrieval/ndcg.py`
Content:
```
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Optional
15
16 import torch
17 from torch import Tensor
18
19 from torchmetrics.utilities.checks import _check_retrieval_functional_inputs
20
21
22 def _dcg(target: Tensor) -> Tensor:
23 """Compute Discounted Cumulative Gain for input tensor."""
24 denom = torch.log2(torch.arange(target.shape[-1], device=target.device) + 2.0)
25 return (target / denom).sum(dim=-1)
26
27
28 def retrieval_normalized_dcg(preds: Tensor, target: Tensor, top_k: Optional[int] = None) -> Tensor:
29 """Compute `Normalized Discounted Cumulative Gain`_ (for information retrieval).
30
31 ``preds`` and ``target`` should be of the same shape and live on the same device.
32 ``target`` must be either `bool` or `integers` and ``preds`` must be ``float``,
33 otherwise an error is raised.
34
35 Args:
36 preds: estimated probabilities of each document to be relevant.
37 target: ground truth about each document relevance.
38 top_k: consider only the top k elements (default: ``None``, which considers them all)
39
40 Return:
41 A single-value tensor with the nDCG of the predictions ``preds`` w.r.t. the labels ``target``.
42
43 Raises:
44 ValueError:
45 If ``top_k`` parameter is not `None` or an integer larger than 0
46
47 Example:
48 >>> from torchmetrics.functional.retrieval import retrieval_normalized_dcg
49 >>> preds = torch.tensor([.1, .2, .3, 4, 70])
50 >>> target = torch.tensor([10, 0, 0, 1, 5])
51 >>> retrieval_normalized_dcg(preds, target)
52 tensor(0.6957)
53
54 """
55 preds, target = _check_retrieval_functional_inputs(preds, target, allow_non_binary_target=True)
56
57 top_k = preds.shape[-1] if top_k is None else top_k
58
59 if not (isinstance(top_k, int) and top_k > 0):
60 raise ValueError("`top_k` has to be a positive integer or None")
61
62 sorted_target = target[torch.argsort(preds, dim=-1, descending=True)][:top_k]
63 ideal_target = torch.sort(target, descending=True)[0][:top_k]
64
65 ideal_dcg = _dcg(ideal_target)
66 target_dcg = _dcg(sorted_target)
67
68 # filter undefined scores
69 all_irrelevant = ideal_dcg == 0
70 target_dcg[all_irrelevant] = 0
71 target_dcg[~all_irrelevant] /= ideal_dcg[~all_irrelevant]
72
73 return target_dcg.mean()
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/torchmetrics/functional/retrieval/ndcg.py b/src/torchmetrics/functional/retrieval/ndcg.py
--- a/src/torchmetrics/functional/retrieval/ndcg.py
+++ b/src/torchmetrics/functional/retrieval/ndcg.py
@@ -19,10 +19,53 @@
from torchmetrics.utilities.checks import _check_retrieval_functional_inputs
-def _dcg(target: Tensor) -> Tensor:
- """Compute Discounted Cumulative Gain for input tensor."""
- denom = torch.log2(torch.arange(target.shape[-1], device=target.device) + 2.0)
- return (target / denom).sum(dim=-1)
+def _tie_average_dcg(target: Tensor, preds: Tensor, discount_cumsum: Tensor) -> Tensor:
+ """Translated version of sklearns `_tie_average_dcg` function.
+
+ Args:
+ target: ground truth about each document relevance.
+ preds: estimated probabilities of each document to be relevant.
+ discount_cumsum: cumulative sum of the discount.
+
+ Returns:
+ The cumulative gain of the tied elements.
+
+ """
+ _, inv, counts = torch.unique(-preds, return_inverse=True, return_counts=True)
+ ranked = torch.zeros_like(counts, dtype=torch.float32)
+ ranked.scatter_add_(0, inv, target.to(dtype=ranked.dtype))
+ ranked = ranked / counts
+ groups = counts.cumsum(dim=0) - 1
+ discount_sums = torch.zeros_like(counts, dtype=torch.float32)
+ discount_sums[0] = discount_cumsum[groups[0]]
+ discount_sums[1:] = discount_cumsum[groups].diff()
+ return (ranked * discount_sums).sum()
+
+
+def _dcg_sample_scores(target: Tensor, preds: Tensor, top_k: int, ignore_ties: bool) -> Tensor:
+ """Translated version of sklearns `_dcg_sample_scores` function.
+
+ Args:
+ target: ground truth about each document relevance.
+ preds: estimated probabilities of each document to be relevant.
+ top_k: consider only the top k elements
+ ignore_ties: If True, ties are ignored. If False, ties are averaged.
+
+ Returns:
+ The cumulative gain
+
+ """
+ discount = 1.0 / (torch.log2(torch.arange(target.shape[-1], device=target.device) + 2.0))
+ discount[top_k:] = 0.0
+
+ if ignore_ties:
+ ranking = preds.argsort(descending=True)
+ ranked = target[ranking]
+ cumulative_gain = (discount * ranked).sum()
+ else:
+ discount_cumsum = discount.cumsum(dim=-1)
+ cumulative_gain = _tie_average_dcg(target, preds, discount_cumsum)
+ return cumulative_gain
def retrieval_normalized_dcg(preds: Tensor, target: Tensor, top_k: Optional[int] = None) -> Tensor:
@@ -59,15 +102,12 @@
if not (isinstance(top_k, int) and top_k > 0):
raise ValueError("`top_k` has to be a positive integer or None")
- sorted_target = target[torch.argsort(preds, dim=-1, descending=True)][:top_k]
- ideal_target = torch.sort(target, descending=True)[0][:top_k]
-
- ideal_dcg = _dcg(ideal_target)
- target_dcg = _dcg(sorted_target)
+ gain = _dcg_sample_scores(target, preds, top_k, ignore_ties=False)
+ normalized_gain = _dcg_sample_scores(target, target, top_k, ignore_ties=True)
# filter undefined scores
- all_irrelevant = ideal_dcg == 0
- target_dcg[all_irrelevant] = 0
- target_dcg[~all_irrelevant] /= ideal_dcg[~all_irrelevant]
+ all_irrelevant = normalized_gain == 0
+ gain[all_irrelevant] = 0
+ gain[~all_irrelevant] /= normalized_gain[~all_irrelevant]
- return target_dcg.mean()
+ return gain.mean()
|
{"golden_diff": "diff --git a/src/torchmetrics/functional/retrieval/ndcg.py b/src/torchmetrics/functional/retrieval/ndcg.py\n--- a/src/torchmetrics/functional/retrieval/ndcg.py\n+++ b/src/torchmetrics/functional/retrieval/ndcg.py\n@@ -19,10 +19,53 @@\n from torchmetrics.utilities.checks import _check_retrieval_functional_inputs\n \n \n-def _dcg(target: Tensor) -> Tensor:\n- \"\"\"Compute Discounted Cumulative Gain for input tensor.\"\"\"\n- denom = torch.log2(torch.arange(target.shape[-1], device=target.device) + 2.0)\n- return (target / denom).sum(dim=-1)\n+def _tie_average_dcg(target: Tensor, preds: Tensor, discount_cumsum: Tensor) -> Tensor:\n+ \"\"\"Translated version of sklearns `_tie_average_dcg` function.\n+\n+ Args:\n+ target: ground truth about each document relevance.\n+ preds: estimated probabilities of each document to be relevant.\n+ discount_cumsum: cumulative sum of the discount.\n+\n+ Returns:\n+ The cumulative gain of the tied elements.\n+\n+ \"\"\"\n+ _, inv, counts = torch.unique(-preds, return_inverse=True, return_counts=True)\n+ ranked = torch.zeros_like(counts, dtype=torch.float32)\n+ ranked.scatter_add_(0, inv, target.to(dtype=ranked.dtype))\n+ ranked = ranked / counts\n+ groups = counts.cumsum(dim=0) - 1\n+ discount_sums = torch.zeros_like(counts, dtype=torch.float32)\n+ discount_sums[0] = discount_cumsum[groups[0]]\n+ discount_sums[1:] = discount_cumsum[groups].diff()\n+ return (ranked * discount_sums).sum()\n+\n+\n+def _dcg_sample_scores(target: Tensor, preds: Tensor, top_k: int, ignore_ties: bool) -> Tensor:\n+ \"\"\"Translated version of sklearns `_dcg_sample_scores` function.\n+\n+ Args:\n+ target: ground truth about each document relevance.\n+ preds: estimated probabilities of each document to be relevant.\n+ top_k: consider only the top k elements\n+ ignore_ties: If True, ties are ignored. If False, ties are averaged.\n+\n+ Returns:\n+ The cumulative gain\n+\n+ \"\"\"\n+ discount = 1.0 / (torch.log2(torch.arange(target.shape[-1], device=target.device) + 2.0))\n+ discount[top_k:] = 0.0\n+\n+ if ignore_ties:\n+ ranking = preds.argsort(descending=True)\n+ ranked = target[ranking]\n+ cumulative_gain = (discount * ranked).sum()\n+ else:\n+ discount_cumsum = discount.cumsum(dim=-1)\n+ cumulative_gain = _tie_average_dcg(target, preds, discount_cumsum)\n+ return cumulative_gain\n \n \n def retrieval_normalized_dcg(preds: Tensor, target: Tensor, top_k: Optional[int] = None) -> Tensor:\n@@ -59,15 +102,12 @@\n if not (isinstance(top_k, int) and top_k > 0):\n raise ValueError(\"`top_k` has to be a positive integer or None\")\n \n- sorted_target = target[torch.argsort(preds, dim=-1, descending=True)][:top_k]\n- ideal_target = torch.sort(target, descending=True)[0][:top_k]\n-\n- ideal_dcg = _dcg(ideal_target)\n- target_dcg = _dcg(sorted_target)\n+ gain = _dcg_sample_scores(target, preds, top_k, ignore_ties=False)\n+ normalized_gain = _dcg_sample_scores(target, target, top_k, ignore_ties=True)\n \n # filter undefined scores\n- all_irrelevant = ideal_dcg == 0\n- target_dcg[all_irrelevant] = 0\n- target_dcg[~all_irrelevant] /= ideal_dcg[~all_irrelevant]\n+ all_irrelevant = normalized_gain == 0\n+ gain[all_irrelevant] = 0\n+ gain[~all_irrelevant] /= normalized_gain[~all_irrelevant]\n \n- return target_dcg.mean()\n+ return gain.mean()\n", "issue": "Self-Implemented Metrics Behaved Weirdly in DDP Mode\n## \ud83d\udc1b Bug\r\n\r\nSelf-implemented nDCG@5 is larger than nDCG@1 in DDP mode. 
What I was trying to do here is to record the sum of sample-wise scores **_s_** and the number of samples **_n_**. At the end of batch/epoch, I can obtain the mean of nDCG@k by dividing **_s_** by **_n_**.\r\n\r\nMy implementation looks very similar to the example on [IMPLEMENTING A METRIC](https://torchmetrics.readthedocs.io/en/stable/pages/implement.html). However, It failed in distributed settings.\r\n\r\n`Epoch 8: 60%|\u2588\u2588\u2588\u2588\u2588\u2588 | 1541/2555 [04:58<03:16, 5.17it/s, loss=0.00354, v_num=213259, train_ndcg@1=0.656, train_ndcg@3=0.675, train_ndcg@5=0.698,...`\r\n\r\n### To Reproduce\r\n\r\n<details>\r\n <summary>Self-implemented Metric</summary>\r\n\r\n```python\r\nfrom torchmetrics.functional.retrieval import retrieval_normalized_dcg\r\nfrom torchmetrics import Metric\r\nimport torch\r\nfrom torch import Tensor\r\n\r\nclass NDCG(Metric):\r\n full_state_update = False\r\n\r\n def __init__(self, top_k: int):\r\n super().__init__()\r\n self.top_k = top_k\r\n self.add_state(\"ndcg\", default=torch.tensor(0, dtype=torch.float), dist_reduce_fx=\"sum\")\r\n self.add_state(\"n\", default=torch.tensor(0), dist_reduce_fx=\"sum\")\r\n\r\n def update(self, preds: Tensor, target: Tensor):\r\n # preds and target are 2-D tensors.\r\n assert preds.shape == target.shape\r\n\r\n self.ndcg += torch.stack([retrieval_normalized_dcg(p, t, k=self.top_k) for p, t in zip(preds, target)]).sum()\r\n self.n += len(preds)\r\n\r\n def compute(self):\r\n return self.ndcg / self.n\r\n```\r\n\r\n</details>\r\n\r\n### Expected behavior\r\n\r\nThe documentation of self-implemented metric looked too simple and probably missed some details of distributed metrics.\r\n\r\nI noticed that you used list and in `compute()` you call mean() to get the average score of a given batch. Can you please explain the difference between these two implementations?\r\n\r\n### Environment\r\n- TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source):\r\n0.10.3 (conda-forge)\r\n- Python version:\r\n3.10.12\r\nPyTorch version (e.g., 1.0):\r\n1.13.1\r\n- Any other relevant information such as OS (e.g., Linux):\r\npytorch-lightning: 1.9.4\r\nLinux: 5.15.0-78-generic\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import _check_retrieval_functional_inputs\n\n\ndef _dcg(target: Tensor) -> Tensor:\n \"\"\"Compute Discounted Cumulative Gain for input tensor.\"\"\"\n denom = torch.log2(torch.arange(target.shape[-1], device=target.device) + 2.0)\n return (target / denom).sum(dim=-1)\n\n\ndef retrieval_normalized_dcg(preds: Tensor, target: Tensor, top_k: Optional[int] = None) -> Tensor:\n \"\"\"Compute `Normalized Discounted Cumulative Gain`_ (for information retrieval).\n\n ``preds`` and ``target`` should be of the same shape and live on the same device.\n ``target`` must be either `bool` or `integers` and ``preds`` must be ``float``,\n otherwise an error is raised.\n\n Args:\n preds: estimated probabilities of each document to be relevant.\n target: ground truth about each document relevance.\n top_k: consider only the top k elements (default: ``None``, which considers them all)\n\n Return:\n A single-value tensor with the nDCG of the predictions ``preds`` w.r.t. the labels ``target``.\n\n Raises:\n ValueError:\n If ``top_k`` parameter is not `None` or an integer larger than 0\n\n Example:\n >>> from torchmetrics.functional.retrieval import retrieval_normalized_dcg\n >>> preds = torch.tensor([.1, .2, .3, 4, 70])\n >>> target = torch.tensor([10, 0, 0, 1, 5])\n >>> retrieval_normalized_dcg(preds, target)\n tensor(0.6957)\n\n \"\"\"\n preds, target = _check_retrieval_functional_inputs(preds, target, allow_non_binary_target=True)\n\n top_k = preds.shape[-1] if top_k is None else top_k\n\n if not (isinstance(top_k, int) and top_k > 0):\n raise ValueError(\"`top_k` has to be a positive integer or None\")\n\n sorted_target = target[torch.argsort(preds, dim=-1, descending=True)][:top_k]\n ideal_target = torch.sort(target, descending=True)[0][:top_k]\n\n ideal_dcg = _dcg(ideal_target)\n target_dcg = _dcg(sorted_target)\n\n # filter undefined scores\n all_irrelevant = ideal_dcg == 0\n target_dcg[all_irrelevant] = 0\n target_dcg[~all_irrelevant] /= ideal_dcg[~all_irrelevant]\n\n return target_dcg.mean()\n", "path": "src/torchmetrics/functional/retrieval/ndcg.py"}], "after_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import 
_check_retrieval_functional_inputs\n\n\ndef _tie_average_dcg(target: Tensor, preds: Tensor, discount_cumsum: Tensor) -> Tensor:\n \"\"\"Translated version of sklearns `_tie_average_dcg` function.\n\n Args:\n target: ground truth about each document relevance.\n preds: estimated probabilities of each document to be relevant.\n discount_cumsum: cumulative sum of the discount.\n\n Returns:\n The cumulative gain of the tied elements.\n\n \"\"\"\n _, inv, counts = torch.unique(-preds, return_inverse=True, return_counts=True)\n ranked = torch.zeros_like(counts, dtype=torch.float32)\n ranked.scatter_add_(0, inv, target.to(dtype=ranked.dtype))\n ranked = ranked / counts\n groups = counts.cumsum(dim=0) - 1\n discount_sums = torch.zeros_like(counts, dtype=torch.float32)\n discount_sums[0] = discount_cumsum[groups[0]]\n discount_sums[1:] = discount_cumsum[groups].diff()\n return (ranked * discount_sums).sum()\n\n\ndef _dcg_sample_scores(target: Tensor, preds: Tensor, top_k: int, ignore_ties: bool) -> Tensor:\n \"\"\"Translated version of sklearns `_dcg_sample_scores` function.\n\n Args:\n target: ground truth about each document relevance.\n preds: estimated probabilities of each document to be relevant.\n top_k: consider only the top k elements\n ignore_ties: If True, ties are ignored. If False, ties are averaged.\n\n Returns:\n The cumulative gain\n\n \"\"\"\n discount = 1.0 / (torch.log2(torch.arange(target.shape[-1], device=target.device) + 2.0))\n discount[top_k:] = 0.0\n\n if ignore_ties:\n ranking = preds.argsort(descending=True)\n ranked = target[ranking]\n cumulative_gain = (discount * ranked).sum()\n else:\n discount_cumsum = discount.cumsum(dim=-1)\n cumulative_gain = _tie_average_dcg(target, preds, discount_cumsum)\n return cumulative_gain\n\n\ndef retrieval_normalized_dcg(preds: Tensor, target: Tensor, top_k: Optional[int] = None) -> Tensor:\n \"\"\"Compute `Normalized Discounted Cumulative Gain`_ (for information retrieval).\n\n ``preds`` and ``target`` should be of the same shape and live on the same device.\n ``target`` must be either `bool` or `integers` and ``preds`` must be ``float``,\n otherwise an error is raised.\n\n Args:\n preds: estimated probabilities of each document to be relevant.\n target: ground truth about each document relevance.\n top_k: consider only the top k elements (default: ``None``, which considers them all)\n\n Return:\n A single-value tensor with the nDCG of the predictions ``preds`` w.r.t. the labels ``target``.\n\n Raises:\n ValueError:\n If ``top_k`` parameter is not `None` or an integer larger than 0\n\n Example:\n >>> from torchmetrics.functional.retrieval import retrieval_normalized_dcg\n >>> preds = torch.tensor([.1, .2, .3, 4, 70])\n >>> target = torch.tensor([10, 0, 0, 1, 5])\n >>> retrieval_normalized_dcg(preds, target)\n tensor(0.6957)\n\n \"\"\"\n preds, target = _check_retrieval_functional_inputs(preds, target, allow_non_binary_target=True)\n\n top_k = preds.shape[-1] if top_k is None else top_k\n\n if not (isinstance(top_k, int) and top_k > 0):\n raise ValueError(\"`top_k` has to be a positive integer or None\")\n\n gain = _dcg_sample_scores(target, preds, top_k, ignore_ties=False)\n normalized_gain = _dcg_sample_scores(target, target, top_k, ignore_ties=True)\n\n # filter undefined scores\n all_irrelevant = normalized_gain == 0\n gain[all_irrelevant] = 0\n gain[~all_irrelevant] /= normalized_gain[~all_irrelevant]\n\n return gain.mean()\n", "path": "src/torchmetrics/functional/retrieval/ndcg.py"}]}
| 1,782 | 943 |
gh_patches_debug_23010
|
rasdani/github-patches
|
git_diff
|
uccser__cs-unplugged-67
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add django-debug-toolbar for debugging
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/config/settings.py`
Content:
```
1 """
2 Django settings for csunplugged project.
3
4 Generated by 'django-admin startproject' using Django 1.10.3.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.10/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.10/ref/settings/
11 """
12
13 import os
14 from config.settings_secret import *
15
16 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
17 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
18
19 # nasty hard coding
20 SETTINGS_PATH = os.path.dirname(os.path.dirname(__file__))
21
22
23 # Quick-start development settings - unsuitable for production
24 # See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
25
26 # SECURITY WARNING: keep the secret key used in production secret!
27 SECRET_KEY = 'l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f'
28
29 # SECURITY WARNING: don't run with debug turned on in production!
30 DEBUG = True
31
32 ALLOWED_HOSTS = []
33
34
35 # Application definition
36
37 INSTALLED_APPS = [
38 'general.apps.GeneralConfig',
39 'topics.apps.TopicsConfig',
40 'resources.apps.ResourcesConfig',
41 'django.contrib.admin',
42 'django.contrib.auth',
43 'django.contrib.contenttypes',
44 'django.contrib.sessions',
45 'django.contrib.messages',
46 'django.contrib.staticfiles',
47 ]
48
49 MIDDLEWARE = [
50 'django.middleware.security.SecurityMiddleware',
51 'django.contrib.sessions.middleware.SessionMiddleware',
52 'django.middleware.locale.LocaleMiddleware',
53 'django.middleware.common.CommonMiddleware',
54 'django.middleware.csrf.CsrfViewMiddleware',
55 'django.contrib.auth.middleware.AuthenticationMiddleware',
56 'django.contrib.messages.middleware.MessageMiddleware',
57 'django.middleware.clickjacking.XFrameOptionsMiddleware',
58 ]
59
60 ROOT_URLCONF = 'config.urls'
61
62 TEMPLATES = [
63 {
64 'BACKEND': 'django.template.backends.django.DjangoTemplates',
65 'DIRS': [
66 os.path.join(SETTINGS_PATH, 'templates'),
67 os.path.join(SETTINGS_PATH, 'resources/content/')
68 ],
69 'APP_DIRS': True,
70 'OPTIONS': {
71 'context_processors': [
72 'django.template.context_processors.debug',
73 'django.template.context_processors.request',
74 'django.contrib.auth.context_processors.auth',
75 'django.contrib.messages.context_processors.messages',
76 ],
77 },
78 },
79 ]
80
81 WSGI_APPLICATION = 'config.wsgi.application'
82
83
84 # Database
85 # https://docs.djangoproject.com/en/1.10/ref/settings/#databases
86 # Database values are stored in `settings_secret.py`
87 # A template of this file is available as `settings_secret_template.py`
88
89
90 # Password validation
91 # https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
92
93 AUTH_PASSWORD_VALIDATORS = [
94 {
95 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
96 },
97 {
98 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
99 },
100 {
101 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
102 },
103 {
104 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
105 },
106 ]
107
108
109 # Internationalization
110 # https://docs.djangoproject.com/en/1.10/topics/i18n/
111
112 LANGUAGE_CODE = 'en-us'
113
114 TIME_ZONE = 'UTC'
115
116 USE_I18N = True
117
118 USE_L10N = True
119
120 USE_TZ = True
121
122 LOCALE_PATHS = ['locale']
123
124 # Static files (CSS, JavaScript, Images)
125 # https://docs.djangoproject.com/en/1.10/howto/static-files/
126
127 STATIC_URL = '/static/'
128 STATICFILES_DIRS = (
129 os.path.join(BASE_DIR, 'build'),
130 )
131
```
Path: `csunplugged/config/urls.py`
Content:
```
1 """csunplugged URL Configuration
2
3 The `urlpatterns` list routes URLs to views. For more information please see:
4 https://docs.djangoproject.com/en/1.10/topics/http/urls/
5 Examples:
6 Function views
7 1. Add an import: from my_app import views
8 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
9 Class-based views
10 1. Add an import: from other_app.views import Home
11 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
12 Including another URLconf
13 1. Import the include() function: from django.conf.urls import url, include
14 2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
15 """
16 from django.conf.urls import include, url
17 from django.conf.urls.i18n import i18n_patterns
18 from django.contrib import admin
19 from django.conf import settings
20 from django.conf.urls.static import static
21
22 urlpatterns = i18n_patterns(
23 url(r'', include('general.urls', namespace='general')),
24 url(r'^topics/', include('topics.urls', namespace='topics')),
25 url(r'^resources/', include('resources.urls', namespace='resources')),
26 url(r'^admin/', include(admin.site.urls)),
27 )
28 # ] + static(settings.STATIC_URL, documnet_root=settings.STATIC_ROOT)
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/csunplugged/config/settings.py b/csunplugged/config/settings.py
--- a/csunplugged/config/settings.py
+++ b/csunplugged/config/settings.py
@@ -44,9 +44,11 @@
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
+ 'debug_toolbar',
]
MIDDLEWARE = [
+ 'debug_toolbar.middleware.DebugToolbarMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
@@ -128,3 +130,7 @@
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'build'),
)
+
+# Internal IPs for Django Debug Toolbar
+# https://docs.djangoproject.com/en/1.10/ref/settings/#internal-ips
+INTERNAL_IPS = ['127.0.0.1']
diff --git a/csunplugged/config/urls.py b/csunplugged/config/urls.py
--- a/csunplugged/config/urls.py
+++ b/csunplugged/config/urls.py
@@ -26,3 +26,9 @@
url(r'^admin/', include(admin.site.urls)),
)
# ] + static(settings.STATIC_URL, documnet_root=settings.STATIC_ROOT)
+
+if settings.DEBUG:
+ import debug_toolbar
+ urlpatterns += [
+ url(r'^__debug__/', include(debug_toolbar.urls)),
+ ]
|
{"golden_diff": "diff --git a/csunplugged/config/settings.py b/csunplugged/config/settings.py\n--- a/csunplugged/config/settings.py\n+++ b/csunplugged/config/settings.py\n@@ -44,9 +44,11 @@\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n+ 'debug_toolbar',\n ]\n \n MIDDLEWARE = [\n+ 'debug_toolbar.middleware.DebugToolbarMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n@@ -128,3 +130,7 @@\n STATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'build'),\n )\n+\n+# Internal IPs for Django Debug Toolbar\n+# https://docs.djangoproject.com/en/1.10/ref/settings/#internal-ips\n+INTERNAL_IPS = ['127.0.0.1']\ndiff --git a/csunplugged/config/urls.py b/csunplugged/config/urls.py\n--- a/csunplugged/config/urls.py\n+++ b/csunplugged/config/urls.py\n@@ -26,3 +26,9 @@\n url(r'^admin/', include(admin.site.urls)),\n )\n # ] + static(settings.STATIC_URL, documnet_root=settings.STATIC_ROOT)\n+\n+if settings.DEBUG:\n+ import debug_toolbar\n+ urlpatterns += [\n+ url(r'^__debug__/', include(debug_toolbar.urls)),\n+ ]\n", "issue": "Add django-debug-toolbar for debugging\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for csunplugged project.\n\nGenerated by 'django-admin startproject' using Django 1.10.3.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport os\nfrom config.settings_secret import *\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# nasty hard coding\nSETTINGS_PATH = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = 'l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f'\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'general.apps.GeneralConfig',\n 'topics.apps.TopicsConfig',\n 'resources.apps.ResourcesConfig',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'config.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n os.path.join(SETTINGS_PATH, 'templates'),\n os.path.join(SETTINGS_PATH, 'resources/content/')\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n 
},\n]\n\nWSGI_APPLICATION = 'config.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.10/ref/settings/#databases\n# Database values are stored in `settings_secret.py`\n# A template of this file is available as `settings_secret_template.py`\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\nLOCALE_PATHS = ['locale']\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'build'),\n )\n", "path": "csunplugged/config/settings.py"}, {"content": "\"\"\"csunplugged URL Configuration\n\nThe `urlpatterns` list routes URLs to views. For more information please see:\n https://docs.djangoproject.com/en/1.10/topics/http/urls/\nExamples:\nFunction views\n 1. Add an import: from my_app import views\n 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')\nClass-based views\n 1. Add an import: from other_app.views import Home\n 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')\nIncluding another URLconf\n 1. Import the include() function: from django.conf.urls import url, include\n 2. 
Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))\n\"\"\"\nfrom django.conf.urls import include, url\nfrom django.conf.urls.i18n import i18n_patterns\nfrom django.contrib import admin\nfrom django.conf import settings\nfrom django.conf.urls.static import static\n\nurlpatterns = i18n_patterns(\n url(r'', include('general.urls', namespace='general')),\n url(r'^topics/', include('topics.urls', namespace='topics')),\n url(r'^resources/', include('resources.urls', namespace='resources')),\n url(r'^admin/', include(admin.site.urls)),\n)\n# ] + static(settings.STATIC_URL, documnet_root=settings.STATIC_ROOT)\n", "path": "csunplugged/config/urls.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for csunplugged project.\n\nGenerated by 'django-admin startproject' using Django 1.10.3.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport os\nfrom config.settings_secret import *\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# nasty hard coding\nSETTINGS_PATH = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = 'l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f'\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'general.apps.GeneralConfig',\n 'topics.apps.TopicsConfig',\n 'resources.apps.ResourcesConfig',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'debug_toolbar',\n]\n\nMIDDLEWARE = [\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'config.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n os.path.join(SETTINGS_PATH, 'templates'),\n os.path.join(SETTINGS_PATH, 'resources/content/')\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'config.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.10/ref/settings/#databases\n# Database values are stored in `settings_secret.py`\n# A template of this file is available as `settings_secret_template.py`\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 
'django.contrib.auth.password_validation.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\nLOCALE_PATHS = ['locale']\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'build'),\n )\n\n# Internal IPs for Django Debug Toolbar\n# https://docs.djangoproject.com/en/1.10/ref/settings/#internal-ips\nINTERNAL_IPS = ['127.0.0.1']\n", "path": "csunplugged/config/settings.py"}, {"content": "\"\"\"csunplugged URL Configuration\n\nThe `urlpatterns` list routes URLs to views. For more information please see:\n https://docs.djangoproject.com/en/1.10/topics/http/urls/\nExamples:\nFunction views\n 1. Add an import: from my_app import views\n 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')\nClass-based views\n 1. Add an import: from other_app.views import Home\n 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')\nIncluding another URLconf\n 1. Import the include() function: from django.conf.urls import url, include\n 2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))\n\"\"\"\nfrom django.conf.urls import include, url\nfrom django.conf.urls.i18n import i18n_patterns\nfrom django.contrib import admin\nfrom django.conf import settings\nfrom django.conf.urls.static import static\n\nurlpatterns = i18n_patterns(\n url(r'', include('general.urls', namespace='general')),\n url(r'^topics/', include('topics.urls', namespace='topics')),\n url(r'^resources/', include('resources.urls', namespace='resources')),\n url(r'^admin/', include(admin.site.urls)),\n)\n# ] + static(settings.STATIC_URL, documnet_root=settings.STATIC_ROOT)\n\nif settings.DEBUG:\n import debug_toolbar\n urlpatterns += [\n url(r'^__debug__/', include(debug_toolbar.urls)),\n ]\n", "path": "csunplugged/config/urls.py"}]}
| 1,745 | 315 |
gh_patches_debug_33414
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-6733
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Caffe Nero GB spider using outdated JSON file
The caffe_nero_gb.py spider gets its data from JSON file that the Store Finder page at https://caffenero.com/uk/stores/ uses to display its map. However, it looks like that URL of that JSON file has changed, and ATP is still referencing the old (and no longer updated one).
The ATP code currently has
`allowed_domains = ["caffenero-webassets-production.s3.eu-west-2.amazonaws.com"]`
`start_urls = ["https://caffenero-webassets-production.s3.eu-west-2.amazonaws.com/stores/stores_gb.json"]`
But the URL referenced by https://caffenero.com/uk/stores/ is now
https://caffenerowebsite.blob.core.windows.net/production/data/stores/stores-gb.json
I think the format of the JSON file has remained the same, so it should just be a matter of swapping the URLs over.
To help issues like this be picked up sooner in the future, I wonder if there's a way of checking that the JSON URL used is still included in the https://caffenero.com/uk/stores/ page, and producing a warning to anyone running ATP if not?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/caffe_nero_gb.py`
Content:
```
1 from scrapy import Spider
2 from scrapy.http import JsonRequest
3
4 from locations.categories import Categories, Extras, apply_category, apply_yes_no
5 from locations.dict_parser import DictParser
6 from locations.hours import OpeningHours
7
8
9 class CaffeNeroGBSpider(Spider):
10 name = "caffe_nero_gb"
11 item_attributes = {"brand": "Caffe Nero", "brand_wikidata": "Q675808"}
12 allowed_domains = ["caffenero-webassets-production.s3.eu-west-2.amazonaws.com"]
13 start_urls = ["https://caffenero-webassets-production.s3.eu-west-2.amazonaws.com/stores/stores_gb.json"]
14
15 def start_requests(self):
16 for url in self.start_urls:
17 yield JsonRequest(url=url)
18
19 def parse(self, response):
20 for location in response.json()["features"]:
21 if (
22 not location["properties"]["status"]["open"]
23 or location["properties"]["status"]["opening_soon"]
24 or location["properties"]["status"]["temp_closed"]
25 ):
26 continue
27
28 item = DictParser.parse(location["properties"])
29 item["geometry"] = location["geometry"]
30 if location["properties"]["status"]["express"]:
31 item["brand"] = "Nero Express"
32
33 item["opening_hours"] = OpeningHours()
34 for day_name, day_hours in location["properties"]["hoursRegular"].items():
35 if day_hours["open"] == "closed" or day_hours["close"] == "closed":
36 continue
37 if day_name == "holiday":
38 continue
39 item["opening_hours"].add_range(day_name.title(), day_hours["open"], day_hours["close"])
40
41 apply_yes_no(Extras.TAKEAWAY, item, location["properties"]["status"]["takeaway"], False)
42 apply_yes_no(Extras.DELIVERY, item, location["properties"]["status"]["delivery"], False)
43 apply_yes_no(Extras.WIFI, item, location["properties"]["amenities"]["wifi"], False)
44 apply_yes_no(Extras.TOILETS, item, location["properties"]["amenities"]["toilet"], False)
45 apply_yes_no(Extras.BABY_CHANGING_TABLE, item, location["properties"]["amenities"]["baby_change"], False)
46 apply_yes_no(Extras.SMOKING_AREA, item, location["properties"]["amenities"]["smoking_area"], False)
47 apply_yes_no(Extras.AIR_CONDITIONING, item, location["properties"]["amenities"]["air_conditioned"], False)
48 apply_yes_no(Extras.WHEELCHAIR, item, location["properties"]["amenities"].get("disabled_access"), False)
49 apply_yes_no(Extras.TOILETS_WHEELCHAIR, item, location["properties"]["amenities"]["disabled_toilet"], False)
50 apply_yes_no(Extras.OUTDOOR_SEATING, item, location["properties"]["amenities"]["outside_seating"], False)
51 apply_category(Categories.COFFEE_SHOP, item)
52
53 item["website"] = f'https://caffenero.com/uk/store/{location["properties"]["slug"]}/'
54
55 yield item
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/caffe_nero_gb.py b/locations/spiders/caffe_nero_gb.py
--- a/locations/spiders/caffe_nero_gb.py
+++ b/locations/spiders/caffe_nero_gb.py
@@ -1,5 +1,8 @@
+import re
+from typing import Any
+
from scrapy import Spider
-from scrapy.http import JsonRequest
+from scrapy.http import JsonRequest, Response
from locations.categories import Categories, Extras, apply_category, apply_yes_no
from locations.dict_parser import DictParser
@@ -9,14 +12,15 @@
class CaffeNeroGBSpider(Spider):
name = "caffe_nero_gb"
item_attributes = {"brand": "Caffe Nero", "brand_wikidata": "Q675808"}
- allowed_domains = ["caffenero-webassets-production.s3.eu-west-2.amazonaws.com"]
- start_urls = ["https://caffenero-webassets-production.s3.eu-west-2.amazonaws.com/stores/stores_gb.json"]
+ allowed_domains = ["caffenero.com", "caffenerowebsite.blob.core.windows.net"]
+ start_urls = ["https://caffenero.com/uk/stores/"]
- def start_requests(self):
- for url in self.start_urls:
- yield JsonRequest(url=url)
+ def parse(self, response: Response, **kwargs: Any) -> Any:
+ yield JsonRequest(
+ re.search(r"loadGeoJson\(\n\s+'(https://.+)', {", response.text).group(1), callback=self.parse_geojson
+ )
- def parse(self, response):
+ def parse_geojson(self, response: Response, **kwargs: Any) -> Any:
for location in response.json()["features"]:
if (
not location["properties"]["status"]["open"]
@@ -30,6 +34,8 @@
if location["properties"]["status"]["express"]:
item["brand"] = "Nero Express"
+ item["branch"] = item.pop("name")
+
item["opening_hours"] = OpeningHours()
for day_name, day_hours in location["properties"]["hoursRegular"].items():
if day_hours["open"] == "closed" or day_hours["close"] == "closed":
|
{"golden_diff": "diff --git a/locations/spiders/caffe_nero_gb.py b/locations/spiders/caffe_nero_gb.py\n--- a/locations/spiders/caffe_nero_gb.py\n+++ b/locations/spiders/caffe_nero_gb.py\n@@ -1,5 +1,8 @@\n+import re\n+from typing import Any\n+\n from scrapy import Spider\n-from scrapy.http import JsonRequest\n+from scrapy.http import JsonRequest, Response\n \n from locations.categories import Categories, Extras, apply_category, apply_yes_no\n from locations.dict_parser import DictParser\n@@ -9,14 +12,15 @@\n class CaffeNeroGBSpider(Spider):\n name = \"caffe_nero_gb\"\n item_attributes = {\"brand\": \"Caffe Nero\", \"brand_wikidata\": \"Q675808\"}\n- allowed_domains = [\"caffenero-webassets-production.s3.eu-west-2.amazonaws.com\"]\n- start_urls = [\"https://caffenero-webassets-production.s3.eu-west-2.amazonaws.com/stores/stores_gb.json\"]\n+ allowed_domains = [\"caffenero.com\", \"caffenerowebsite.blob.core.windows.net\"]\n+ start_urls = [\"https://caffenero.com/uk/stores/\"]\n \n- def start_requests(self):\n- for url in self.start_urls:\n- yield JsonRequest(url=url)\n+ def parse(self, response: Response, **kwargs: Any) -> Any:\n+ yield JsonRequest(\n+ re.search(r\"loadGeoJson\\(\\n\\s+'(https://.+)', {\", response.text).group(1), callback=self.parse_geojson\n+ )\n \n- def parse(self, response):\n+ def parse_geojson(self, response: Response, **kwargs: Any) -> Any:\n for location in response.json()[\"features\"]:\n if (\n not location[\"properties\"][\"status\"][\"open\"]\n@@ -30,6 +34,8 @@\n if location[\"properties\"][\"status\"][\"express\"]:\n item[\"brand\"] = \"Nero Express\"\n \n+ item[\"branch\"] = item.pop(\"name\")\n+\n item[\"opening_hours\"] = OpeningHours()\n for day_name, day_hours in location[\"properties\"][\"hoursRegular\"].items():\n if day_hours[\"open\"] == \"closed\" or day_hours[\"close\"] == \"closed\":\n", "issue": "Caffe Nero GB spider using outdated JSON file\nThe caffe_nero_gb.py spider gets its data from JSON file that the Store Finder page at https://caffenero.com/uk/stores/ uses to display its map. 
However, it looks like that URL of that JSON file has changed, and ATP is still referencing the old (and no longer updated one).\r\n\r\nThe ATP code currently has\r\n`allowed_domains = [\"caffenero-webassets-production.s3.eu-west-2.amazonaws.com\"]`\r\n`start_urls = [\"https://caffenero-webassets-production.s3.eu-west-2.amazonaws.com/stores/stores_gb.json\"]`\r\nBut the URL referenced by https://caffenero.com/uk/stores/ is now\r\nhttps://caffenerowebsite.blob.core.windows.net/production/data/stores/stores-gb.json\r\n\r\nI think the format of the JSON file has remained the same, so it should just be a matter of swapping the URLs over.\r\n\r\nTo help issues like this be picked up sooner in the future, I wonder if there's a way of checking that the JSON URL used is still included in the https://caffenero.com/uk/stores/ page, and producing a warning to anyone running ATP if not?\n", "before_files": [{"content": "from scrapy import Spider\nfrom scrapy.http import JsonRequest\n\nfrom locations.categories import Categories, Extras, apply_category, apply_yes_no\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours\n\n\nclass CaffeNeroGBSpider(Spider):\n name = \"caffe_nero_gb\"\n item_attributes = {\"brand\": \"Caffe Nero\", \"brand_wikidata\": \"Q675808\"}\n allowed_domains = [\"caffenero-webassets-production.s3.eu-west-2.amazonaws.com\"]\n start_urls = [\"https://caffenero-webassets-production.s3.eu-west-2.amazonaws.com/stores/stores_gb.json\"]\n\n def start_requests(self):\n for url in self.start_urls:\n yield JsonRequest(url=url)\n\n def parse(self, response):\n for location in response.json()[\"features\"]:\n if (\n not location[\"properties\"][\"status\"][\"open\"]\n or location[\"properties\"][\"status\"][\"opening_soon\"]\n or location[\"properties\"][\"status\"][\"temp_closed\"]\n ):\n continue\n\n item = DictParser.parse(location[\"properties\"])\n item[\"geometry\"] = location[\"geometry\"]\n if location[\"properties\"][\"status\"][\"express\"]:\n item[\"brand\"] = \"Nero Express\"\n\n item[\"opening_hours\"] = OpeningHours()\n for day_name, day_hours in location[\"properties\"][\"hoursRegular\"].items():\n if day_hours[\"open\"] == \"closed\" or day_hours[\"close\"] == \"closed\":\n continue\n if day_name == \"holiday\":\n continue\n item[\"opening_hours\"].add_range(day_name.title(), day_hours[\"open\"], day_hours[\"close\"])\n\n apply_yes_no(Extras.TAKEAWAY, item, location[\"properties\"][\"status\"][\"takeaway\"], False)\n apply_yes_no(Extras.DELIVERY, item, location[\"properties\"][\"status\"][\"delivery\"], False)\n apply_yes_no(Extras.WIFI, item, location[\"properties\"][\"amenities\"][\"wifi\"], False)\n apply_yes_no(Extras.TOILETS, item, location[\"properties\"][\"amenities\"][\"toilet\"], False)\n apply_yes_no(Extras.BABY_CHANGING_TABLE, item, location[\"properties\"][\"amenities\"][\"baby_change\"], False)\n apply_yes_no(Extras.SMOKING_AREA, item, location[\"properties\"][\"amenities\"][\"smoking_area\"], False)\n apply_yes_no(Extras.AIR_CONDITIONING, item, location[\"properties\"][\"amenities\"][\"air_conditioned\"], False)\n apply_yes_no(Extras.WHEELCHAIR, item, location[\"properties\"][\"amenities\"].get(\"disabled_access\"), False)\n apply_yes_no(Extras.TOILETS_WHEELCHAIR, item, location[\"properties\"][\"amenities\"][\"disabled_toilet\"], False)\n apply_yes_no(Extras.OUTDOOR_SEATING, item, location[\"properties\"][\"amenities\"][\"outside_seating\"], False)\n apply_category(Categories.COFFEE_SHOP, item)\n\n item[\"website\"] = 
f'https://caffenero.com/uk/store/{location[\"properties\"][\"slug\"]}/'\n\n yield item\n", "path": "locations/spiders/caffe_nero_gb.py"}], "after_files": [{"content": "import re\nfrom typing import Any\n\nfrom scrapy import Spider\nfrom scrapy.http import JsonRequest, Response\n\nfrom locations.categories import Categories, Extras, apply_category, apply_yes_no\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours\n\n\nclass CaffeNeroGBSpider(Spider):\n name = \"caffe_nero_gb\"\n item_attributes = {\"brand\": \"Caffe Nero\", \"brand_wikidata\": \"Q675808\"}\n allowed_domains = [\"caffenero.com\", \"caffenerowebsite.blob.core.windows.net\"]\n start_urls = [\"https://caffenero.com/uk/stores/\"]\n\n def parse(self, response: Response, **kwargs: Any) -> Any:\n yield JsonRequest(\n re.search(r\"loadGeoJson\\(\\n\\s+'(https://.+)', {\", response.text).group(1), callback=self.parse_geojson\n )\n\n def parse_geojson(self, response: Response, **kwargs: Any) -> Any:\n for location in response.json()[\"features\"]:\n if (\n not location[\"properties\"][\"status\"][\"open\"]\n or location[\"properties\"][\"status\"][\"opening_soon\"]\n or location[\"properties\"][\"status\"][\"temp_closed\"]\n ):\n continue\n\n item = DictParser.parse(location[\"properties\"])\n item[\"geometry\"] = location[\"geometry\"]\n if location[\"properties\"][\"status\"][\"express\"]:\n item[\"brand\"] = \"Nero Express\"\n\n item[\"branch\"] = item.pop(\"name\")\n\n item[\"opening_hours\"] = OpeningHours()\n for day_name, day_hours in location[\"properties\"][\"hoursRegular\"].items():\n if day_hours[\"open\"] == \"closed\" or day_hours[\"close\"] == \"closed\":\n continue\n if day_name == \"holiday\":\n continue\n item[\"opening_hours\"].add_range(day_name.title(), day_hours[\"open\"], day_hours[\"close\"])\n\n apply_yes_no(Extras.TAKEAWAY, item, location[\"properties\"][\"status\"][\"takeaway\"], False)\n apply_yes_no(Extras.DELIVERY, item, location[\"properties\"][\"status\"][\"delivery\"], False)\n apply_yes_no(Extras.WIFI, item, location[\"properties\"][\"amenities\"][\"wifi\"], False)\n apply_yes_no(Extras.TOILETS, item, location[\"properties\"][\"amenities\"][\"toilet\"], False)\n apply_yes_no(Extras.BABY_CHANGING_TABLE, item, location[\"properties\"][\"amenities\"][\"baby_change\"], False)\n apply_yes_no(Extras.SMOKING_AREA, item, location[\"properties\"][\"amenities\"][\"smoking_area\"], False)\n apply_yes_no(Extras.AIR_CONDITIONING, item, location[\"properties\"][\"amenities\"][\"air_conditioned\"], False)\n apply_yes_no(Extras.WHEELCHAIR, item, location[\"properties\"][\"amenities\"].get(\"disabled_access\"), False)\n apply_yes_no(Extras.TOILETS_WHEELCHAIR, item, location[\"properties\"][\"amenities\"][\"disabled_toilet\"], False)\n apply_yes_no(Extras.OUTDOOR_SEATING, item, location[\"properties\"][\"amenities\"][\"outside_seating\"], False)\n apply_category(Categories.COFFEE_SHOP, item)\n\n item[\"website\"] = f'https://caffenero.com/uk/store/{location[\"properties\"][\"slug\"]}/'\n\n yield item\n", "path": "locations/spiders/caffe_nero_gb.py"}]}
| 1,264 | 495 |
gh_patches_debug_24105
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-372
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The mean value is not shown in the regression systematic error plot
I would expect that near the plot (or when I hover over the mean line in the plot), I would see the mean error value.

To reproduce:
https://www.kaggle.com/itay94/notebookf8c78e84d7
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/checks/performance/regression_systematic_error.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """The RegressionSystematicError check module."""
12 import plotly.graph_objects as go
13 from sklearn.base import BaseEstimator
14 from sklearn.metrics import mean_squared_error
15
16 from deepchecks import CheckResult, Dataset, SingleDatasetBaseCheck, ConditionResult
17 from deepchecks.utils.metrics import ModelType, task_type_validation
18 from deepchecks.utils.strings import format_number
19
20
21 __all__ = ['RegressionSystematicError']
22
23
24 class RegressionSystematicError(SingleDatasetBaseCheck):
25 """Check the regression systematic error."""
26
27 def run(self, dataset: Dataset, model: BaseEstimator) -> CheckResult:
28 """Run check.
29
30 Arguments:
31 dataset (Dataset): A dataset object.
32 model (BaseEstimator): A scikit-learn-compatible fitted estimator instance
33 Returns:
34 CheckResult:
35 - value is a dict with rmse and mean prediction error.
36 - display is box plot of the prediction errorד.
37 Raises:
38 DeepchecksValueError: If the object is not a Dataset instance with a label
39 """
40 return self._regression_error_distribution(dataset, model)
41
42 def _regression_error_distribution(self, dataset: Dataset, model: BaseEstimator):
43 Dataset.validate_dataset(dataset)
44 dataset.validate_label()
45 task_type_validation(model, dataset, [ModelType.REGRESSION])
46
47 y_test = dataset.label_col
48 y_pred = model.predict(dataset.features_columns)
49
50 rmse = mean_squared_error(dataset.label_col, y_pred, squared=False)
51 diff = y_test - y_pred
52 diff_mean = diff.mean()
53
54 display = [
55 'Non-zero mean of the error distribution indicated the presents '
56 'of systematic error in model predictions',
57 go.Figure()
58 .add_trace(go.Box(
59 x=diff.values,
60 name='Model Prediction Error',
61 boxpoints='suspectedoutliers',
62 marker=dict(outliercolor='red'),
63 hoverinfo='x',
64 hoveron='points'))
65 .update_layout(
66 title_text='Box plot of the model prediction error',
67 width=800, height=500)
68 .add_vline(
69 x=diff_mean + 0.01,
70 line_dash='dash',
71 annotation_text='Mean error')
72 ]
73
74 return CheckResult(value={'rmse': rmse, 'mean_error': diff_mean}, display=display)
75
76 def add_condition_systematic_error_ratio_to_rmse_not_greater_than(self, max_ratio: float = 0.01):
77 """Add condition - require the absolute mean systematic error to be not greater than (max_ratio * RMSE).
78
79 Args:
80 max_ratio (float): Maximum ratio
81 """
82 def max_bias_condition(result: float) -> ConditionResult:
83 rmse = result['rmse']
84 mean_error = result['mean_error']
85 if abs(mean_error) > max_ratio * rmse:
86 return ConditionResult(False,
87 f'mean error: {format_number(mean_error, 5)}, RMSE: {format_number(rmse)}')
88 else:
89 return ConditionResult(True)
90
91 return self.add_condition(f'Bias ratio is not greater than {format_number(max_ratio)}',
92 max_bias_condition)
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepchecks/checks/performance/regression_systematic_error.py b/deepchecks/checks/performance/regression_systematic_error.py
--- a/deepchecks/checks/performance/regression_systematic_error.py
+++ b/deepchecks/checks/performance/regression_systematic_error.py
@@ -51,24 +51,19 @@
diff = y_test - y_pred
diff_mean = diff.mean()
+ fig = go.Figure()
+ fig.add_trace(go.Box(
+ x=diff,
+ name='Model prediction error',
+ boxmean=True # represent mean
+ )).update_layout(
+ title_text='Box plot of the model prediction error',
+ width=800, height=500)
+
display = [
'Non-zero mean of the error distribution indicated the presents '
'of systematic error in model predictions',
- go.Figure()
- .add_trace(go.Box(
- x=diff.values,
- name='Model Prediction Error',
- boxpoints='suspectedoutliers',
- marker=dict(outliercolor='red'),
- hoverinfo='x',
- hoveron='points'))
- .update_layout(
- title_text='Box plot of the model prediction error',
- width=800, height=500)
- .add_vline(
- x=diff_mean + 0.01,
- line_dash='dash',
- annotation_text='Mean error')
+ fig
]
return CheckResult(value={'rmse': rmse, 'mean_error': diff_mean}, display=display)
|
{"golden_diff": "diff --git a/deepchecks/checks/performance/regression_systematic_error.py b/deepchecks/checks/performance/regression_systematic_error.py\n--- a/deepchecks/checks/performance/regression_systematic_error.py\n+++ b/deepchecks/checks/performance/regression_systematic_error.py\n@@ -51,24 +51,19 @@\n diff = y_test - y_pred\n diff_mean = diff.mean()\n \n+ fig = go.Figure()\n+ fig.add_trace(go.Box(\n+ x=diff,\n+ name='Model prediction error',\n+ boxmean=True # represent mean\n+ )).update_layout(\n+ title_text='Box plot of the model prediction error',\n+ width=800, height=500)\n+\n display = [\n 'Non-zero mean of the error distribution indicated the presents '\n 'of systematic error in model predictions',\n- go.Figure()\n- .add_trace(go.Box(\n- x=diff.values,\n- name='Model Prediction Error',\n- boxpoints='suspectedoutliers',\n- marker=dict(outliercolor='red'),\n- hoverinfo='x',\n- hoveron='points'))\n- .update_layout(\n- title_text='Box plot of the model prediction error',\n- width=800, height=500)\n- .add_vline(\n- x=diff_mean + 0.01,\n- line_dash='dash',\n- annotation_text='Mean error')\n+ fig\n ]\n \n return CheckResult(value={'rmse': rmse, 'mean_error': diff_mean}, display=display)\n", "issue": "The mean value is not shown in the regression systematic error plot\nI would expect that near the plot (or when I hover over the mean line in the plot), I would see the mean error value.\r\n\r\n\r\n\r\nTo reproduce:\r\nhttps://www.kaggle.com/itay94/notebookf8c78e84d7\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"The RegressionSystematicError check module.\"\"\"\nimport plotly.graph_objects as go\nfrom sklearn.base import BaseEstimator\nfrom sklearn.metrics import mean_squared_error\n\nfrom deepchecks import CheckResult, Dataset, SingleDatasetBaseCheck, ConditionResult\nfrom deepchecks.utils.metrics import ModelType, task_type_validation\nfrom deepchecks.utils.strings import format_number\n\n\n__all__ = ['RegressionSystematicError']\n\n\nclass RegressionSystematicError(SingleDatasetBaseCheck):\n \"\"\"Check the regression systematic error.\"\"\"\n\n def run(self, dataset: Dataset, model: BaseEstimator) -> CheckResult:\n \"\"\"Run check.\n\n Arguments:\n dataset (Dataset): A dataset object.\n model (BaseEstimator): A scikit-learn-compatible fitted estimator instance\n Returns:\n CheckResult:\n - value is a dict with rmse and mean prediction error.\n - display is box plot of the prediction error\u05d3.\n Raises:\n DeepchecksValueError: If the object is not a Dataset instance with a label\n \"\"\"\n return self._regression_error_distribution(dataset, model)\n\n def _regression_error_distribution(self, dataset: Dataset, model: BaseEstimator):\n Dataset.validate_dataset(dataset)\n dataset.validate_label()\n task_type_validation(model, dataset, [ModelType.REGRESSION])\n\n y_test = dataset.label_col\n y_pred = model.predict(dataset.features_columns)\n\n rmse = mean_squared_error(dataset.label_col, y_pred, squared=False)\n diff = y_test - y_pred\n diff_mean = diff.mean()\n\n display = [\n 'Non-zero mean of the error distribution indicated the presents '\n 'of systematic error in model predictions',\n go.Figure()\n .add_trace(go.Box(\n x=diff.values,\n name='Model Prediction Error',\n boxpoints='suspectedoutliers',\n marker=dict(outliercolor='red'),\n hoverinfo='x',\n hoveron='points'))\n .update_layout(\n title_text='Box plot of the model prediction error',\n width=800, height=500)\n .add_vline(\n x=diff_mean + 0.01,\n line_dash='dash',\n annotation_text='Mean error')\n ]\n\n return CheckResult(value={'rmse': rmse, 'mean_error': diff_mean}, display=display)\n\n def add_condition_systematic_error_ratio_to_rmse_not_greater_than(self, max_ratio: float = 0.01):\n \"\"\"Add condition - require the absolute mean systematic error to be not greater than (max_ratio * RMSE).\n\n Args:\n max_ratio (float): Maximum ratio\n \"\"\"\n def max_bias_condition(result: float) -> ConditionResult:\n rmse = result['rmse']\n mean_error = result['mean_error']\n if abs(mean_error) > max_ratio * rmse:\n return ConditionResult(False,\n f'mean error: {format_number(mean_error, 5)}, RMSE: {format_number(rmse)}')\n else:\n return ConditionResult(True)\n\n return self.add_condition(f'Bias ratio is not greater than {format_number(max_ratio)}',\n max_bias_condition)\n", "path": "deepchecks/checks/performance/regression_systematic_error.py"}], "after_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"The RegressionSystematicError check module.\"\"\"\nimport plotly.graph_objects as go\nfrom sklearn.base import BaseEstimator\nfrom sklearn.metrics import mean_squared_error\n\nfrom deepchecks import CheckResult, Dataset, SingleDatasetBaseCheck, ConditionResult\nfrom deepchecks.utils.metrics import ModelType, task_type_validation\nfrom deepchecks.utils.strings import format_number\n\n\n__all__ = ['RegressionSystematicError']\n\n\nclass RegressionSystematicError(SingleDatasetBaseCheck):\n \"\"\"Check the regression systematic error.\"\"\"\n\n def run(self, dataset: Dataset, model: BaseEstimator) -> CheckResult:\n \"\"\"Run check.\n\n Arguments:\n dataset (Dataset): A dataset object.\n model (BaseEstimator): A scikit-learn-compatible fitted estimator instance\n Returns:\n CheckResult:\n - value is a dict with rmse and mean prediction error.\n - display is box plot of the prediction error\u05d3.\n Raises:\n DeepchecksValueError: If the object is not a Dataset instance with a label\n \"\"\"\n return self._regression_error_distribution(dataset, model)\n\n def _regression_error_distribution(self, dataset: Dataset, model: BaseEstimator):\n Dataset.validate_dataset(dataset)\n dataset.validate_label()\n task_type_validation(model, dataset, [ModelType.REGRESSION])\n\n y_test = dataset.label_col\n y_pred = model.predict(dataset.features_columns)\n\n rmse = mean_squared_error(dataset.label_col, y_pred, squared=False)\n diff = y_test - y_pred\n diff_mean = diff.mean()\n\n fig = go.Figure()\n fig.add_trace(go.Box(\n x=diff,\n name='Model prediction error',\n boxmean=True # represent mean\n )).update_layout(\n title_text='Box plot of the model prediction error',\n width=800, height=500)\n\n display = [\n 'Non-zero mean of the error distribution indicated the presents '\n 'of systematic error in model predictions',\n fig\n ]\n\n return CheckResult(value={'rmse': rmse, 'mean_error': diff_mean}, display=display)\n\n def add_condition_systematic_error_ratio_to_rmse_not_greater_than(self, max_ratio: float = 0.01):\n \"\"\"Add condition - require the absolute mean systematic error to be not greater than (max_ratio * RMSE).\n\n Args:\n max_ratio (float): Maximum ratio\n \"\"\"\n def max_bias_condition(result: float) -> ConditionResult:\n rmse = result['rmse']\n mean_error = result['mean_error']\n if abs(mean_error) > max_ratio * rmse:\n return ConditionResult(False,\n f'mean error: {format_number(mean_error, 5)}, RMSE: {format_number(rmse)}')\n else:\n return ConditionResult(True)\n\n return self.add_condition(f'Bias ratio is not greater than {format_number(max_ratio)}',\n max_bias_condition)\n", "path": "deepchecks/checks/performance/regression_systematic_error.py"}]}
| 1,340 | 358 |
gh_patches_debug_39617
|
rasdani/github-patches
|
git_diff
|
googleapis__google-auth-library-python-150
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expose the full response from the token server
`refresh` on oauth2 Credentials should store the full response from the token server. There is potentially useful data here, like the `id_token`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/oauth2/credentials.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """OAuth 2.0 Credentials.
16
17 This module provides credentials based on OAuth 2.0 access and refresh tokens.
18 These credentials usually access resources on behalf of a user (resource
19 owner).
20
21 Specifically, this is intended to use access tokens acquired using the
22 `Authorization Code grant`_ and can refresh those tokens using a
23 optional `refresh token`_.
24
25 Obtaining the initial access and refresh token is outside of the scope of this
26 module. Consult `rfc6749 section 4.1`_ for complete details on the
27 Authorization Code grant flow.
28
29 .. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1
30 .. _refresh token: https://tools.ietf.org/html/rfc6749#section-6
31 .. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1
32 """
33
34 from google.auth import _helpers
35 from google.auth import credentials
36 from google.oauth2 import _client
37
38
39 class Credentials(credentials.Scoped, credentials.Credentials):
40 """Credentials using OAuth 2.0 access and refresh tokens."""
41
42 def __init__(self, token, refresh_token=None, token_uri=None,
43 client_id=None, client_secret=None, scopes=None):
44 """
45 Args:
46 token (Optional(str)): The OAuth 2.0 access token. Can be None
47 if refresh information is provided.
48 refresh_token (str): The OAuth 2.0 refresh token. If specified,
49 credentials can be refreshed.
50 token_uri (str): The OAuth 2.0 authorization server's token
51 endpoint URI. Must be specified for refresh, can be left as
52 None if the token can not be refreshed.
53 client_id (str): The OAuth 2.0 client ID. Must be specified for
54 refresh, can be left as None if the token can not be refreshed.
55 client_secret(str): The OAuth 2.0 client secret. Must be specified
56 for refresh, can be left as None if the token can not be
57 refreshed.
58 scopes (Sequence[str]): The scopes that were originally used
59 to obtain authorization. This is a purely informative parameter
60 that can be used by :meth:`has_scopes`. OAuth 2.0 credentials
61 can not request additional scopes after authorization.
62 """
63 super(Credentials, self).__init__()
64 self.token = token
65 self._refresh_token = refresh_token
66 self._scopes = scopes
67 self._token_uri = token_uri
68 self._client_id = client_id
69 self._client_secret = client_secret
70
71 @property
72 def refresh_token(self):
73 """Optional[str]: The OAuth 2.0 refresh token."""
74 return self._refresh_token
75
76 @property
77 def token_uri(self):
78 """Optional[str]: The OAuth 2.0 authorization server's token endpoint
79 URI."""
80 return self._token_uri
81
82 @property
83 def client_id(self):
84 """Optional[str]: The OAuth 2.0 client ID."""
85 return self._client_id
86
87 @property
88 def client_secret(self):
89 """Optional[str]: The OAuth 2.0 client secret."""
90 return self._client_secret
91
92 @property
93 def requires_scopes(self):
94 """False: OAuth 2.0 credentials have their scopes set when
95 the initial token is requested and can not be changed."""
96 return False
97
98 def with_scopes(self, scopes):
99 """Unavailable, OAuth 2.0 credentials can not be re-scoped.
100
101 OAuth 2.0 credentials have their scopes set when the initial token is
102 requested and can not be changed.
103 """
104 raise NotImplementedError(
105 'OAuth 2.0 Credentials can not modify their scopes.')
106
107 @_helpers.copy_docstring(credentials.Credentials)
108 def refresh(self, request):
109 access_token, refresh_token, expiry, _ = _client.refresh_grant(
110 request, self._token_uri, self._refresh_token, self._client_id,
111 self._client_secret)
112
113 self.token = access_token
114 self.expiry = expiry
115 self._refresh_token = refresh_token
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/google/oauth2/credentials.py b/google/oauth2/credentials.py
--- a/google/oauth2/credentials.py
+++ b/google/oauth2/credentials.py
@@ -39,14 +39,16 @@
class Credentials(credentials.Scoped, credentials.Credentials):
"""Credentials using OAuth 2.0 access and refresh tokens."""
- def __init__(self, token, refresh_token=None, token_uri=None,
- client_id=None, client_secret=None, scopes=None):
+ def __init__(self, token, refresh_token=None, id_token=None,
+ token_uri=None, client_id=None, client_secret=None,
+ scopes=None):
"""
Args:
token (Optional(str)): The OAuth 2.0 access token. Can be None
if refresh information is provided.
refresh_token (str): The OAuth 2.0 refresh token. If specified,
credentials can be refreshed.
+ id_token (str): The Open ID Connect ID Token.
token_uri (str): The OAuth 2.0 authorization server's token
endpoint URI. Must be specified for refresh, can be left as
None if the token can not be refreshed.
@@ -63,6 +65,7 @@
super(Credentials, self).__init__()
self.token = token
self._refresh_token = refresh_token
+ self._id_token = id_token
self._scopes = scopes
self._token_uri = token_uri
self._client_id = client_id
@@ -79,6 +82,17 @@
URI."""
return self._token_uri
+ @property
+ def id_token(self):
+ """Optional[str]: The Open ID Connect ID Token.
+
+ Depending on the authorization server and the scopes requested, this
+ may be populated when credentials are obtained and updated when
+ :meth:`refresh` is called. This token is a JWT. It can be verified
+ and decoded using :func:`google.oauth2.id_token.verify_oauth2_token`.
+ """
+ return self._id_token
+
@property
def client_id(self):
"""Optional[str]: The OAuth 2.0 client ID."""
@@ -106,10 +120,12 @@
@_helpers.copy_docstring(credentials.Credentials)
def refresh(self, request):
- access_token, refresh_token, expiry, _ = _client.refresh_grant(
- request, self._token_uri, self._refresh_token, self._client_id,
- self._client_secret)
+ access_token, refresh_token, expiry, grant_response = (
+ _client.refresh_grant(
+ request, self._token_uri, self._refresh_token, self._client_id,
+ self._client_secret))
self.token = access_token
self.expiry = expiry
self._refresh_token = refresh_token
+ self._id_token = grant_response.get('id_token')
|
{"golden_diff": "diff --git a/google/oauth2/credentials.py b/google/oauth2/credentials.py\n--- a/google/oauth2/credentials.py\n+++ b/google/oauth2/credentials.py\n@@ -39,14 +39,16 @@\n class Credentials(credentials.Scoped, credentials.Credentials):\n \"\"\"Credentials using OAuth 2.0 access and refresh tokens.\"\"\"\n \n- def __init__(self, token, refresh_token=None, token_uri=None,\n- client_id=None, client_secret=None, scopes=None):\n+ def __init__(self, token, refresh_token=None, id_token=None,\n+ token_uri=None, client_id=None, client_secret=None,\n+ scopes=None):\n \"\"\"\n Args:\n token (Optional(str)): The OAuth 2.0 access token. Can be None\n if refresh information is provided.\n refresh_token (str): The OAuth 2.0 refresh token. If specified,\n credentials can be refreshed.\n+ id_token (str): The Open ID Connect ID Token.\n token_uri (str): The OAuth 2.0 authorization server's token\n endpoint URI. Must be specified for refresh, can be left as\n None if the token can not be refreshed.\n@@ -63,6 +65,7 @@\n super(Credentials, self).__init__()\n self.token = token\n self._refresh_token = refresh_token\n+ self._id_token = id_token\n self._scopes = scopes\n self._token_uri = token_uri\n self._client_id = client_id\n@@ -79,6 +82,17 @@\n URI.\"\"\"\n return self._token_uri\n \n+ @property\n+ def id_token(self):\n+ \"\"\"Optional[str]: The Open ID Connect ID Token.\n+\n+ Depending on the authorization server and the scopes requested, this\n+ may be populated when credentials are obtained and updated when\n+ :meth:`refresh` is called. This token is a JWT. It can be verified\n+ and decoded using :func:`google.oauth2.id_token.verify_oauth2_token`.\n+ \"\"\"\n+ return self._id_token\n+\n @property\n def client_id(self):\n \"\"\"Optional[str]: The OAuth 2.0 client ID.\"\"\"\n@@ -106,10 +120,12 @@\n \n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n- access_token, refresh_token, expiry, _ = _client.refresh_grant(\n- request, self._token_uri, self._refresh_token, self._client_id,\n- self._client_secret)\n+ access_token, refresh_token, expiry, grant_response = (\n+ _client.refresh_grant(\n+ request, self._token_uri, self._refresh_token, self._client_id,\n+ self._client_secret))\n \n self.token = access_token\n self.expiry = expiry\n self._refresh_token = refresh_token\n+ self._id_token = grant_response.get('id_token')\n", "issue": "Expose the full response from the token server\n`refresh` on oauth2 Credentials should store the full response from the token server. 
There is potentially useful data here, like the `id_token`.\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OAuth 2.0 Credentials.\n\nThis module provides credentials based on OAuth 2.0 access and refresh tokens.\nThese credentials usually access resources on behalf of a user (resource\nowner).\n\nSpecifically, this is intended to use access tokens acquired using the\n`Authorization Code grant`_ and can refresh those tokens using a\noptional `refresh token`_.\n\nObtaining the initial access and refresh token is outside of the scope of this\nmodule. Consult `rfc6749 section 4.1`_ for complete details on the\nAuthorization Code grant flow.\n\n.. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1\n.. _refresh token: https://tools.ietf.org/html/rfc6749#section-6\n.. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1\n\"\"\"\n\nfrom google.auth import _helpers\nfrom google.auth import credentials\nfrom google.oauth2 import _client\n\n\nclass Credentials(credentials.Scoped, credentials.Credentials):\n \"\"\"Credentials using OAuth 2.0 access and refresh tokens.\"\"\"\n\n def __init__(self, token, refresh_token=None, token_uri=None,\n client_id=None, client_secret=None, scopes=None):\n \"\"\"\n Args:\n token (Optional(str)): The OAuth 2.0 access token. Can be None\n if refresh information is provided.\n refresh_token (str): The OAuth 2.0 refresh token. If specified,\n credentials can be refreshed.\n token_uri (str): The OAuth 2.0 authorization server's token\n endpoint URI. Must be specified for refresh, can be left as\n None if the token can not be refreshed.\n client_id (str): The OAuth 2.0 client ID. Must be specified for\n refresh, can be left as None if the token can not be refreshed.\n client_secret(str): The OAuth 2.0 client secret. Must be specified\n for refresh, can be left as None if the token can not be\n refreshed.\n scopes (Sequence[str]): The scopes that were originally used\n to obtain authorization. This is a purely informative parameter\n that can be used by :meth:`has_scopes`. 
OAuth 2.0 credentials\n can not request additional scopes after authorization.\n \"\"\"\n super(Credentials, self).__init__()\n self.token = token\n self._refresh_token = refresh_token\n self._scopes = scopes\n self._token_uri = token_uri\n self._client_id = client_id\n self._client_secret = client_secret\n\n @property\n def refresh_token(self):\n \"\"\"Optional[str]: The OAuth 2.0 refresh token.\"\"\"\n return self._refresh_token\n\n @property\n def token_uri(self):\n \"\"\"Optional[str]: The OAuth 2.0 authorization server's token endpoint\n URI.\"\"\"\n return self._token_uri\n\n @property\n def client_id(self):\n \"\"\"Optional[str]: The OAuth 2.0 client ID.\"\"\"\n return self._client_id\n\n @property\n def client_secret(self):\n \"\"\"Optional[str]: The OAuth 2.0 client secret.\"\"\"\n return self._client_secret\n\n @property\n def requires_scopes(self):\n \"\"\"False: OAuth 2.0 credentials have their scopes set when\n the initial token is requested and can not be changed.\"\"\"\n return False\n\n def with_scopes(self, scopes):\n \"\"\"Unavailable, OAuth 2.0 credentials can not be re-scoped.\n\n OAuth 2.0 credentials have their scopes set when the initial token is\n requested and can not be changed.\n \"\"\"\n raise NotImplementedError(\n 'OAuth 2.0 Credentials can not modify their scopes.')\n\n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n access_token, refresh_token, expiry, _ = _client.refresh_grant(\n request, self._token_uri, self._refresh_token, self._client_id,\n self._client_secret)\n\n self.token = access_token\n self.expiry = expiry\n self._refresh_token = refresh_token\n", "path": "google/oauth2/credentials.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OAuth 2.0 Credentials.\n\nThis module provides credentials based on OAuth 2.0 access and refresh tokens.\nThese credentials usually access resources on behalf of a user (resource\nowner).\n\nSpecifically, this is intended to use access tokens acquired using the\n`Authorization Code grant`_ and can refresh those tokens using a\noptional `refresh token`_.\n\nObtaining the initial access and refresh token is outside of the scope of this\nmodule. Consult `rfc6749 section 4.1`_ for complete details on the\nAuthorization Code grant flow.\n\n.. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1\n.. _refresh token: https://tools.ietf.org/html/rfc6749#section-6\n.. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1\n\"\"\"\n\nfrom google.auth import _helpers\nfrom google.auth import credentials\nfrom google.oauth2 import _client\n\n\nclass Credentials(credentials.Scoped, credentials.Credentials):\n \"\"\"Credentials using OAuth 2.0 access and refresh tokens.\"\"\"\n\n def __init__(self, token, refresh_token=None, id_token=None,\n token_uri=None, client_id=None, client_secret=None,\n scopes=None):\n \"\"\"\n Args:\n token (Optional(str)): The OAuth 2.0 access token. 
Can be None\n if refresh information is provided.\n refresh_token (str): The OAuth 2.0 refresh token. If specified,\n credentials can be refreshed.\n id_token (str): The Open ID Connect ID Token.\n token_uri (str): The OAuth 2.0 authorization server's token\n endpoint URI. Must be specified for refresh, can be left as\n None if the token can not be refreshed.\n client_id (str): The OAuth 2.0 client ID. Must be specified for\n refresh, can be left as None if the token can not be refreshed.\n client_secret(str): The OAuth 2.0 client secret. Must be specified\n for refresh, can be left as None if the token can not be\n refreshed.\n scopes (Sequence[str]): The scopes that were originally used\n to obtain authorization. This is a purely informative parameter\n that can be used by :meth:`has_scopes`. OAuth 2.0 credentials\n can not request additional scopes after authorization.\n \"\"\"\n super(Credentials, self).__init__()\n self.token = token\n self._refresh_token = refresh_token\n self._id_token = id_token\n self._scopes = scopes\n self._token_uri = token_uri\n self._client_id = client_id\n self._client_secret = client_secret\n\n @property\n def refresh_token(self):\n \"\"\"Optional[str]: The OAuth 2.0 refresh token.\"\"\"\n return self._refresh_token\n\n @property\n def token_uri(self):\n \"\"\"Optional[str]: The OAuth 2.0 authorization server's token endpoint\n URI.\"\"\"\n return self._token_uri\n\n @property\n def id_token(self):\n \"\"\"Optional[str]: The Open ID Connect ID Token.\n\n Depending on the authorization server and the scopes requested, this\n may be populated when credentials are obtained and updated when\n :meth:`refresh` is called. This token is a JWT. It can be verified\n and decoded using :func:`google.oauth2.id_token.verify_oauth2_token`.\n \"\"\"\n return self._id_token\n\n @property\n def client_id(self):\n \"\"\"Optional[str]: The OAuth 2.0 client ID.\"\"\"\n return self._client_id\n\n @property\n def client_secret(self):\n \"\"\"Optional[str]: The OAuth 2.0 client secret.\"\"\"\n return self._client_secret\n\n @property\n def requires_scopes(self):\n \"\"\"False: OAuth 2.0 credentials have their scopes set when\n the initial token is requested and can not be changed.\"\"\"\n return False\n\n def with_scopes(self, scopes):\n \"\"\"Unavailable, OAuth 2.0 credentials can not be re-scoped.\n\n OAuth 2.0 credentials have their scopes set when the initial token is\n requested and can not be changed.\n \"\"\"\n raise NotImplementedError(\n 'OAuth 2.0 Credentials can not modify their scopes.')\n\n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n access_token, refresh_token, expiry, grant_response = (\n _client.refresh_grant(\n request, self._token_uri, self._refresh_token, self._client_id,\n self._client_secret))\n\n self.token = access_token\n self.expiry = expiry\n self._refresh_token = refresh_token\n self._id_token = grant_response.get('id_token')\n", "path": "google/oauth2/credentials.py"}]}
| 1,570 | 640 |
gh_patches_debug_18776
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-1239
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`context` sometimes gets printed with the `set` command
`set $rax=0` sometimes causes `context` to be immediately called afterwards. I don't think this is always reproducible, but will keep investigating it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/gdblib/prompt.py`
Content:
```
1 import re
2
3 import gdb
4
5 import pwndbg.decorators
6 import pwndbg.gdblib.events
7 import pwndbg.gdbutils
8 import pwndbg.lib.memoize
9 from pwndbg.color import disable_colors
10 from pwndbg.color import message
11 from pwndbg.lib.tips import get_tip_of_the_day
12
13 funcs_list_str = ", ".join(
14 message.notice("$" + f.name) for f in pwndbg.gdbutils.functions.functions
15 )
16
17 num_pwndbg_cmds = sum(1 for _ in filter(lambda c: not c.shell, pwndbg.commands.commands))
18 num_shell_cmds = sum(1 for _ in filter(lambda c: c.shell, pwndbg.commands.commands))
19 hint_lines = (
20 "loaded %i pwndbg commands and %i shell commands. Type %s for a list."
21 % (num_pwndbg_cmds, num_shell_cmds, message.notice("pwndbg [--shell | --all] [filter]")),
22 "created %s gdb functions (can be used with print/break)" % funcs_list_str,
23 )
24
25 for line in hint_lines:
26 print(message.prompt("pwndbg: ") + message.system(line))
27
28 # noinspection PyPackageRequirements
29 show_tip = pwndbg.config.Parameter(
30 "show-tips", True, "whether to display the tip of the day on startup"
31 )
32
33 cur = None
34
35
36 def initial_hook(*a):
37 if show_tip and not pwndbg.decorators.first_prompt:
38 colored_tip = re.sub(
39 "`(.*?)`", lambda s: message.warn(s.group()[1:-1]), get_tip_of_the_day()
40 )
41 print(
42 message.prompt("------- tip of the day")
43 + message.system(" (disable with %s)" % message.notice("set show-tips off"))
44 + message.prompt(" -------")
45 )
46 print((colored_tip))
47 pwndbg.decorators.first_prompt = True
48
49 prompt_hook(*a)
50 gdb.prompt_hook = prompt_hook
51
52
53 def prompt_hook(*a):
54 global cur
55
56 new = (gdb.selected_inferior(), gdb.selected_thread())
57
58 if cur != new:
59 pwndbg.gdblib.events.after_reload(start=cur is None)
60 cur = new
61
62 if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped:
63 prompt_hook_on_stop(*a)
64
65
66 @pwndbg.lib.memoize.reset_on_stop
67 def prompt_hook_on_stop(*a):
68 pwndbg.commands.context.context()
69
70
71 @pwndbg.config.Trigger([message.config_prompt_color, disable_colors])
72 def set_prompt():
73 prompt = "pwndbg> "
74
75 if not disable_colors:
76 prompt = "\x02" + prompt + "\x01" # STX + prompt + SOH
77 prompt = message.prompt(prompt)
78 prompt = "\x01" + prompt + "\x02" # SOH + prompt + STX
79
80 gdb.execute("set prompt %s" % prompt)
81
82
83 if pwndbg.gdblib.events.before_prompt_event.is_real_event:
84 gdb.prompt_hook = initial_hook
85
86 else:
87 # Old GDBs doesn't have gdb.events.before_prompt, so we will emulate it using gdb.prompt_hook
88 def extended_prompt_hook(*a):
89 pwndbg.gdblib.events.before_prompt_event.invoke_callbacks()
90 return prompt_hook(*a)
91
92 gdb.prompt_hook = extended_prompt_hook
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwndbg/gdblib/prompt.py b/pwndbg/gdblib/prompt.py
--- a/pwndbg/gdblib/prompt.py
+++ b/pwndbg/gdblib/prompt.py
@@ -50,8 +50,11 @@
gdb.prompt_hook = prompt_hook
+context_shown = False
+
+
def prompt_hook(*a):
- global cur
+ global cur, context_shown
new = (gdb.selected_inferior(), gdb.selected_thread())
@@ -59,13 +62,15 @@
pwndbg.gdblib.events.after_reload(start=cur is None)
cur = new
- if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped:
- prompt_hook_on_stop(*a)
+ if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped and not context_shown:
+ pwndbg.commands.context.context()
+ context_shown = True
[email protected]_on_stop
-def prompt_hook_on_stop(*a):
- pwndbg.commands.context.context()
[email protected]
+def reset_context_shown(*a):
+ global context_shown
+ context_shown = False
@pwndbg.config.Trigger([message.config_prompt_color, disable_colors])
|
{"golden_diff": "diff --git a/pwndbg/gdblib/prompt.py b/pwndbg/gdblib/prompt.py\n--- a/pwndbg/gdblib/prompt.py\n+++ b/pwndbg/gdblib/prompt.py\n@@ -50,8 +50,11 @@\n gdb.prompt_hook = prompt_hook\n \n \n+context_shown = False\n+\n+\n def prompt_hook(*a):\n- global cur\n+ global cur, context_shown\n \n new = (gdb.selected_inferior(), gdb.selected_thread())\n \n@@ -59,13 +62,15 @@\n pwndbg.gdblib.events.after_reload(start=cur is None)\n cur = new\n \n- if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped:\n- prompt_hook_on_stop(*a)\n+ if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped and not context_shown:\n+ pwndbg.commands.context.context()\n+ context_shown = True\n \n \[email protected]_on_stop\n-def prompt_hook_on_stop(*a):\n- pwndbg.commands.context.context()\[email protected]\n+def reset_context_shown(*a):\n+ global context_shown\n+ context_shown = False\n \n \n @pwndbg.config.Trigger([message.config_prompt_color, disable_colors])\n", "issue": "`context` sometimes gets printed with the `set` command\n`set $rax=0` sometimes causes `context` to be immediately called afterwards. I think don't think this is always reproducible, but will keep investigating it.\n", "before_files": [{"content": "import re\n\nimport gdb\n\nimport pwndbg.decorators\nimport pwndbg.gdblib.events\nimport pwndbg.gdbutils\nimport pwndbg.lib.memoize\nfrom pwndbg.color import disable_colors\nfrom pwndbg.color import message\nfrom pwndbg.lib.tips import get_tip_of_the_day\n\nfuncs_list_str = \", \".join(\n message.notice(\"$\" + f.name) for f in pwndbg.gdbutils.functions.functions\n)\n\nnum_pwndbg_cmds = sum(1 for _ in filter(lambda c: not c.shell, pwndbg.commands.commands))\nnum_shell_cmds = sum(1 for _ in filter(lambda c: c.shell, pwndbg.commands.commands))\nhint_lines = (\n \"loaded %i pwndbg commands and %i shell commands. 
Type %s for a list.\"\n % (num_pwndbg_cmds, num_shell_cmds, message.notice(\"pwndbg [--shell | --all] [filter]\")),\n \"created %s gdb functions (can be used with print/break)\" % funcs_list_str,\n)\n\nfor line in hint_lines:\n print(message.prompt(\"pwndbg: \") + message.system(line))\n\n# noinspection PyPackageRequirements\nshow_tip = pwndbg.config.Parameter(\n \"show-tips\", True, \"whether to display the tip of the day on startup\"\n)\n\ncur = None\n\n\ndef initial_hook(*a):\n if show_tip and not pwndbg.decorators.first_prompt:\n colored_tip = re.sub(\n \"`(.*?)`\", lambda s: message.warn(s.group()[1:-1]), get_tip_of_the_day()\n )\n print(\n message.prompt(\"------- tip of the day\")\n + message.system(\" (disable with %s)\" % message.notice(\"set show-tips off\"))\n + message.prompt(\" -------\")\n )\n print((colored_tip))\n pwndbg.decorators.first_prompt = True\n\n prompt_hook(*a)\n gdb.prompt_hook = prompt_hook\n\n\ndef prompt_hook(*a):\n global cur\n\n new = (gdb.selected_inferior(), gdb.selected_thread())\n\n if cur != new:\n pwndbg.gdblib.events.after_reload(start=cur is None)\n cur = new\n\n if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped:\n prompt_hook_on_stop(*a)\n\n\[email protected]_on_stop\ndef prompt_hook_on_stop(*a):\n pwndbg.commands.context.context()\n\n\[email protected]([message.config_prompt_color, disable_colors])\ndef set_prompt():\n prompt = \"pwndbg> \"\n\n if not disable_colors:\n prompt = \"\\x02\" + prompt + \"\\x01\" # STX + prompt + SOH\n prompt = message.prompt(prompt)\n prompt = \"\\x01\" + prompt + \"\\x02\" # SOH + prompt + STX\n\n gdb.execute(\"set prompt %s\" % prompt)\n\n\nif pwndbg.gdblib.events.before_prompt_event.is_real_event:\n gdb.prompt_hook = initial_hook\n\nelse:\n # Old GDBs doesn't have gdb.events.before_prompt, so we will emulate it using gdb.prompt_hook\n def extended_prompt_hook(*a):\n pwndbg.gdblib.events.before_prompt_event.invoke_callbacks()\n return prompt_hook(*a)\n\n gdb.prompt_hook = extended_prompt_hook\n", "path": "pwndbg/gdblib/prompt.py"}], "after_files": [{"content": "import re\n\nimport gdb\n\nimport pwndbg.decorators\nimport pwndbg.gdblib.events\nimport pwndbg.gdbutils\nimport pwndbg.lib.memoize\nfrom pwndbg.color import disable_colors\nfrom pwndbg.color import message\nfrom pwndbg.lib.tips import get_tip_of_the_day\n\nfuncs_list_str = \", \".join(\n message.notice(\"$\" + f.name) for f in pwndbg.gdbutils.functions.functions\n)\n\nnum_pwndbg_cmds = sum(1 for _ in filter(lambda c: not c.shell, pwndbg.commands.commands))\nnum_shell_cmds = sum(1 for _ in filter(lambda c: c.shell, pwndbg.commands.commands))\nhint_lines = (\n \"loaded %i pwndbg commands and %i shell commands. 
Type %s for a list.\"\n % (num_pwndbg_cmds, num_shell_cmds, message.notice(\"pwndbg [--shell | --all] [filter]\")),\n \"created %s gdb functions (can be used with print/break)\" % funcs_list_str,\n)\n\nfor line in hint_lines:\n print(message.prompt(\"pwndbg: \") + message.system(line))\n\n# noinspection PyPackageRequirements\nshow_tip = pwndbg.config.Parameter(\n \"show-tips\", True, \"whether to display the tip of the day on startup\"\n)\n\ncur = None\n\n\ndef initial_hook(*a):\n if show_tip and not pwndbg.decorators.first_prompt:\n colored_tip = re.sub(\n \"`(.*?)`\", lambda s: message.warn(s.group()[1:-1]), get_tip_of_the_day()\n )\n print(\n message.prompt(\"------- tip of the day\")\n + message.system(\" (disable with %s)\" % message.notice(\"set show-tips off\"))\n + message.prompt(\" -------\")\n )\n print((colored_tip))\n pwndbg.decorators.first_prompt = True\n\n prompt_hook(*a)\n gdb.prompt_hook = prompt_hook\n\n\ncontext_shown = False\n\n\ndef prompt_hook(*a):\n global cur, context_shown\n\n new = (gdb.selected_inferior(), gdb.selected_thread())\n\n if cur != new:\n pwndbg.gdblib.events.after_reload(start=cur is None)\n cur = new\n\n if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped and not context_shown:\n pwndbg.commands.context.context()\n context_shown = True\n\n\[email protected]\ndef reset_context_shown(*a):\n global context_shown\n context_shown = False\n\n\[email protected]([message.config_prompt_color, disable_colors])\ndef set_prompt():\n prompt = \"pwndbg> \"\n\n if not disable_colors:\n prompt = \"\\x02\" + prompt + \"\\x01\" # STX + prompt + SOH\n prompt = message.prompt(prompt)\n prompt = \"\\x01\" + prompt + \"\\x02\" # SOH + prompt + STX\n\n gdb.execute(\"set prompt %s\" % prompt)\n\n\nif pwndbg.gdblib.events.before_prompt_event.is_real_event:\n gdb.prompt_hook = initial_hook\n\nelse:\n # Old GDBs doesn't have gdb.events.before_prompt, so we will emulate it using gdb.prompt_hook\n def extended_prompt_hook(*a):\n pwndbg.gdblib.events.before_prompt_event.invoke_callbacks()\n return prompt_hook(*a)\n\n gdb.prompt_hook = extended_prompt_hook\n", "path": "pwndbg/gdblib/prompt.py"}]}
| 1,223 | 301 |
gh_patches_debug_18658
|
rasdani/github-patches
|
git_diff
|
GeotrekCE__Geotrek-admin-1344
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
set_schema_ft() SQL function delete some triggers
set_schema_ft() contains as "DROP FUNCTION ... CASCADE" that delete some other functions or triggers, eg. e_t_evenement_geom_iu_tgr.
All 0.28.x releases are affected.
I think we should create functions directly in the right schema and drop functions from public schema rather than moving them.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/common/utils/postgresql.py`
Content:
```
1 import re
2 import os
3 import logging
4 import traceback
5 from functools import wraps
6
7 from django.db import connection, models
8 from django.conf import settings
9 from django.db.models import get_app, get_models
10
11
12 logger = logging.getLogger(__name__)
13
14
15 def debug_pg_notices(f):
16
17 @wraps(f)
18 def wrapped(*args, **kwargs):
19 before = len(connection.connection.notices) if connection.connection else 0
20 try:
21 r = f(*args, **kwargs)
22 finally:
23 # Show triggers output
24 allnotices = []
25 current = ''
26 if connection.connection:
27 notices = []
28 for notice in connection.connection.notices[before:]:
29 try:
30 notice, context = notice.split('CONTEXT:', 1)
31 context = re.sub("\s+", " ", context)
32 except ValueError:
33 context = ''
34 notices.append((context, notice))
35 if context != current:
36 allnotices.append(notices)
37 notices = []
38 current = context
39 allnotices.append(notices)
40 current = ''
41 for notices in allnotices:
42 for context, notice in notices:
43 if context != current:
44 if context != '':
45 logger.debug('Context %s...:' % context.strip()[:80])
46 current = context
47 notice = notice.replace('NOTICE: ', '')
48 prefix = '' if context == '' else ' '
49 logger.debug('%s%s' % (prefix, notice.strip()))
50 return r
51
52 return wrapped
53
54
55 def load_sql_files(app_label):
56 """
57 Look for SQL files in Django app, and load them into database.
58 We remove RAISE NOTICE instructions from SQL outside unit testing
59 since they lead to interpolation errors of '%' character in python.
60 """
61 app_dir = os.path.dirname(models.get_app(app_label).__file__)
62 sql_dir = os.path.normpath(os.path.join(app_dir, 'sql'))
63 if not os.path.exists(sql_dir):
64 logger.debug("No SQL folder for %s" % app_label)
65 return
66
67 r = re.compile(r'^.*\.sql$')
68 sql_files = [os.path.join(sql_dir, f)
69 for f in os.listdir(sql_dir)
70 if r.match(f) is not None]
71 sql_files.sort()
72
73 if len(sql_files) == 0:
74 logger.warning("Empty folder %s" % sql_dir)
75
76 cursor = connection.cursor()
77 for sql_file in sql_files:
78 try:
79 logger.info("Loading initial SQL data from '%s'" % sql_file)
80 f = open(sql_file)
81 sql = f.read()
82 f.close()
83 if not settings.TEST:
84 # Remove RAISE NOTICE (/!\ only one-liners)
85 sql = re.sub(r"\n.*RAISE NOTICE.*\n", "\n", sql)
86 # TODO: this is the ugliest driver hack ever
87 sql = sql.replace('%', '%%')
88
89 # Replace curly braces with settings values
90 pattern = re.compile(r'{{\s*(.*)\s*}}')
91 for m in pattern.finditer(sql):
92 value = getattr(settings, m.group(1))
93 sql = sql.replace(m.group(0), unicode(value))
94 cursor.execute(sql)
95 except Exception as e:
96 logger.critical("Failed to install custom SQL file '%s': %s\n" %
97 (sql_file, e))
98 traceback.print_exc()
99 raise
100
101
102 def move_models_to_schemas(app_label):
103 """
104 Move models tables to PostgreSQL schemas.
105
106 Views, functions and triggers will be moved in Geotrek app SQL files.
107 """
108 app = get_app(app_label)
109 default_schema = settings.DATABASE_SCHEMAS.get('default')
110 app_schema = settings.DATABASE_SCHEMAS.get(app_label, default_schema)
111
112 table_schemas = {}
113 for model in get_models(app):
114 model_name = model._meta.module_name
115 table_name = model._meta.db_table
116 model_schema = settings.DATABASE_SCHEMAS.get(model_name, app_schema)
117 table_schemas.setdefault(model_schema, []).append(table_name)
118
119 for m2m_field in model._meta.many_to_many:
120 table_name = m2m_field.db_table
121 if table_name:
122 table_schemas[model_schema].append(table_name)
123
124 cursor = connection.cursor()
125
126 for schema_name in table_schemas.keys():
127 try:
128 sql = "CREATE SCHEMA %s;" % model_schema
129 cursor.execute(sql)
130 logger.info("Created schema %s" % model_schema)
131 except Exception:
132 logger.debug("Schema %s already exists." % model_schema)
133
134 for schema_name, tables in table_schemas.items():
135 for table_name in tables:
136 try:
137 sql = "ALTER TABLE %s SET SCHEMA %s;" % (table_name, schema_name)
138 cursor.execute(sql)
139 logger.info("Moved %s to schema %s" % (table_name, schema_name))
140 except Exception:
141 logger.debug("Table %s already in schema %s" % (table_name, schema_name))
142
143 # For Django, search_path is set in connection options.
144 # But when accessing the database using QGis or ETL, search_path must be
145 # set database level (for all users, and for this database only).
146 if app_label == 'common':
147 dbname = settings.DATABASES['default']['NAME']
148 dbuser = settings.DATABASES['default']['USER']
149 search_path = 'public,%s' % ','.join(set(settings.DATABASE_SCHEMAS.values()))
150 sql = "ALTER ROLE %s IN DATABASE %s SET search_path=%s;" % (dbuser, dbname, search_path)
151 cursor.execute(sql)
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/geotrek/common/utils/postgresql.py b/geotrek/common/utils/postgresql.py
--- a/geotrek/common/utils/postgresql.py
+++ b/geotrek/common/utils/postgresql.py
@@ -133,12 +133,12 @@
for schema_name, tables in table_schemas.items():
for table_name in tables:
- try:
+ sql = "SELECT 1 FROM information_schema.tables WHERE table_name=%s AND table_schema!=%s"
+ cursor.execute(sql, [table_name, schema_name])
+ if cursor.fetchone():
sql = "ALTER TABLE %s SET SCHEMA %s;" % (table_name, schema_name)
cursor.execute(sql)
logger.info("Moved %s to schema %s" % (table_name, schema_name))
- except Exception:
- logger.debug("Table %s already in schema %s" % (table_name, schema_name))
# For Django, search_path is set in connection options.
# But when accessing the database using QGis or ETL, search_path must be
|
{"golden_diff": "diff --git a/geotrek/common/utils/postgresql.py b/geotrek/common/utils/postgresql.py\n--- a/geotrek/common/utils/postgresql.py\n+++ b/geotrek/common/utils/postgresql.py\n@@ -133,12 +133,12 @@\n \n for schema_name, tables in table_schemas.items():\n for table_name in tables:\n- try:\n+ sql = \"SELECT 1 FROM information_schema.tables WHERE table_name=%s AND table_schema!=%s\"\n+ cursor.execute(sql, [table_name, schema_name])\n+ if cursor.fetchone():\n sql = \"ALTER TABLE %s SET SCHEMA %s;\" % (table_name, schema_name)\n cursor.execute(sql)\n logger.info(\"Moved %s to schema %s\" % (table_name, schema_name))\n- except Exception:\n- logger.debug(\"Table %s already in schema %s\" % (table_name, schema_name))\n \n # For Django, search_path is set in connection options.\n # But when accessing the database using QGis or ETL, search_path must be\n", "issue": "set_schema_ft() SQL function delete some triggers\nset_schema_ft() contains as \"DROP FUNCTION ... CASCADE\" that delete some other functions or triggers, eg. e_t_evenement_geom_iu_tgr.\n\nAll 0.28.x releases are affected.\n\nI think we should create functions directly in the right schema and drop functions from public schema rather than moving them.\n\n", "before_files": [{"content": "import re\nimport os\nimport logging\nimport traceback\nfrom functools import wraps\n\nfrom django.db import connection, models\nfrom django.conf import settings\nfrom django.db.models import get_app, get_models\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef debug_pg_notices(f):\n\n @wraps(f)\n def wrapped(*args, **kwargs):\n before = len(connection.connection.notices) if connection.connection else 0\n try:\n r = f(*args, **kwargs)\n finally:\n # Show triggers output\n allnotices = []\n current = ''\n if connection.connection:\n notices = []\n for notice in connection.connection.notices[before:]:\n try:\n notice, context = notice.split('CONTEXT:', 1)\n context = re.sub(\"\\s+\", \" \", context)\n except ValueError:\n context = ''\n notices.append((context, notice))\n if context != current:\n allnotices.append(notices)\n notices = []\n current = context\n allnotices.append(notices)\n current = ''\n for notices in allnotices:\n for context, notice in notices:\n if context != current:\n if context != '':\n logger.debug('Context %s...:' % context.strip()[:80])\n current = context\n notice = notice.replace('NOTICE: ', '')\n prefix = '' if context == '' else ' '\n logger.debug('%s%s' % (prefix, notice.strip()))\n return r\n\n return wrapped\n\n\ndef load_sql_files(app_label):\n \"\"\"\n Look for SQL files in Django app, and load them into database.\n We remove RAISE NOTICE instructions from SQL outside unit testing\n since they lead to interpolation errors of '%' character in python.\n \"\"\"\n app_dir = os.path.dirname(models.get_app(app_label).__file__)\n sql_dir = os.path.normpath(os.path.join(app_dir, 'sql'))\n if not os.path.exists(sql_dir):\n logger.debug(\"No SQL folder for %s\" % app_label)\n return\n\n r = re.compile(r'^.*\\.sql$')\n sql_files = [os.path.join(sql_dir, f)\n for f in os.listdir(sql_dir)\n if r.match(f) is not None]\n sql_files.sort()\n\n if len(sql_files) == 0:\n logger.warning(\"Empty folder %s\" % sql_dir)\n\n cursor = connection.cursor()\n for sql_file in sql_files:\n try:\n logger.info(\"Loading initial SQL data from '%s'\" % sql_file)\n f = open(sql_file)\n sql = f.read()\n f.close()\n if not settings.TEST:\n # Remove RAISE NOTICE (/!\\ only one-liners)\n sql = re.sub(r\"\\n.*RAISE NOTICE.*\\n\", \"\\n\", sql)\n # 
TODO: this is the ugliest driver hack ever\n sql = sql.replace('%', '%%')\n\n # Replace curly braces with settings values\n pattern = re.compile(r'{{\\s*(.*)\\s*}}')\n for m in pattern.finditer(sql):\n value = getattr(settings, m.group(1))\n sql = sql.replace(m.group(0), unicode(value))\n cursor.execute(sql)\n except Exception as e:\n logger.critical(\"Failed to install custom SQL file '%s': %s\\n\" %\n (sql_file, e))\n traceback.print_exc()\n raise\n\n\ndef move_models_to_schemas(app_label):\n \"\"\"\n Move models tables to PostgreSQL schemas.\n\n Views, functions and triggers will be moved in Geotrek app SQL files.\n \"\"\"\n app = get_app(app_label)\n default_schema = settings.DATABASE_SCHEMAS.get('default')\n app_schema = settings.DATABASE_SCHEMAS.get(app_label, default_schema)\n\n table_schemas = {}\n for model in get_models(app):\n model_name = model._meta.module_name\n table_name = model._meta.db_table\n model_schema = settings.DATABASE_SCHEMAS.get(model_name, app_schema)\n table_schemas.setdefault(model_schema, []).append(table_name)\n\n for m2m_field in model._meta.many_to_many:\n table_name = m2m_field.db_table\n if table_name:\n table_schemas[model_schema].append(table_name)\n\n cursor = connection.cursor()\n\n for schema_name in table_schemas.keys():\n try:\n sql = \"CREATE SCHEMA %s;\" % model_schema\n cursor.execute(sql)\n logger.info(\"Created schema %s\" % model_schema)\n except Exception:\n logger.debug(\"Schema %s already exists.\" % model_schema)\n\n for schema_name, tables in table_schemas.items():\n for table_name in tables:\n try:\n sql = \"ALTER TABLE %s SET SCHEMA %s;\" % (table_name, schema_name)\n cursor.execute(sql)\n logger.info(\"Moved %s to schema %s\" % (table_name, schema_name))\n except Exception:\n logger.debug(\"Table %s already in schema %s\" % (table_name, schema_name))\n\n # For Django, search_path is set in connection options.\n # But when accessing the database using QGis or ETL, search_path must be\n # set database level (for all users, and for this database only).\n if app_label == 'common':\n dbname = settings.DATABASES['default']['NAME']\n dbuser = settings.DATABASES['default']['USER']\n search_path = 'public,%s' % ','.join(set(settings.DATABASE_SCHEMAS.values()))\n sql = \"ALTER ROLE %s IN DATABASE %s SET search_path=%s;\" % (dbuser, dbname, search_path)\n cursor.execute(sql)\n", "path": "geotrek/common/utils/postgresql.py"}], "after_files": [{"content": "import re\nimport os\nimport logging\nimport traceback\nfrom functools import wraps\n\nfrom django.db import connection, models\nfrom django.conf import settings\nfrom django.db.models import get_app, get_models\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef debug_pg_notices(f):\n\n @wraps(f)\n def wrapped(*args, **kwargs):\n before = len(connection.connection.notices) if connection.connection else 0\n try:\n r = f(*args, **kwargs)\n finally:\n # Show triggers output\n allnotices = []\n current = ''\n if connection.connection:\n notices = []\n for notice in connection.connection.notices[before:]:\n try:\n notice, context = notice.split('CONTEXT:', 1)\n context = re.sub(\"\\s+\", \" \", context)\n except ValueError:\n context = ''\n notices.append((context, notice))\n if context != current:\n allnotices.append(notices)\n notices = []\n current = context\n allnotices.append(notices)\n current = ''\n for notices in allnotices:\n for context, notice in notices:\n if context != current:\n if context != '':\n logger.debug('Context %s...:' % context.strip()[:80])\n current = context\n notice = 
notice.replace('NOTICE: ', '')\n prefix = '' if context == '' else ' '\n logger.debug('%s%s' % (prefix, notice.strip()))\n return r\n\n return wrapped\n\n\ndef load_sql_files(app_label):\n \"\"\"\n Look for SQL files in Django app, and load them into database.\n We remove RAISE NOTICE instructions from SQL outside unit testing\n since they lead to interpolation errors of '%' character in python.\n \"\"\"\n app_dir = os.path.dirname(models.get_app(app_label).__file__)\n sql_dir = os.path.normpath(os.path.join(app_dir, 'sql'))\n if not os.path.exists(sql_dir):\n logger.debug(\"No SQL folder for %s\" % app_label)\n return\n\n r = re.compile(r'^.*\\.sql$')\n sql_files = [os.path.join(sql_dir, f)\n for f in os.listdir(sql_dir)\n if r.match(f) is not None]\n sql_files.sort()\n\n if len(sql_files) == 0:\n logger.warning(\"Empty folder %s\" % sql_dir)\n\n cursor = connection.cursor()\n for sql_file in sql_files:\n try:\n logger.info(\"Loading initial SQL data from '%s'\" % sql_file)\n f = open(sql_file)\n sql = f.read()\n f.close()\n if not settings.TEST:\n # Remove RAISE NOTICE (/!\\ only one-liners)\n sql = re.sub(r\"\\n.*RAISE NOTICE.*\\n\", \"\\n\", sql)\n # TODO: this is the ugliest driver hack ever\n sql = sql.replace('%', '%%')\n\n # Replace curly braces with settings values\n pattern = re.compile(r'{{\\s*(.*)\\s*}}')\n for m in pattern.finditer(sql):\n value = getattr(settings, m.group(1))\n sql = sql.replace(m.group(0), unicode(value))\n cursor.execute(sql)\n except Exception as e:\n logger.critical(\"Failed to install custom SQL file '%s': %s\\n\" %\n (sql_file, e))\n traceback.print_exc()\n raise\n\n\ndef move_models_to_schemas(app_label):\n \"\"\"\n Move models tables to PostgreSQL schemas.\n\n Views, functions and triggers will be moved in Geotrek app SQL files.\n \"\"\"\n app = get_app(app_label)\n default_schema = settings.DATABASE_SCHEMAS.get('default')\n app_schema = settings.DATABASE_SCHEMAS.get(app_label, default_schema)\n\n table_schemas = {}\n for model in get_models(app):\n model_name = model._meta.module_name\n table_name = model._meta.db_table\n model_schema = settings.DATABASE_SCHEMAS.get(model_name, app_schema)\n table_schemas.setdefault(model_schema, []).append(table_name)\n\n for m2m_field in model._meta.many_to_many:\n table_name = m2m_field.db_table\n if table_name:\n table_schemas[model_schema].append(table_name)\n\n cursor = connection.cursor()\n\n for schema_name in table_schemas.keys():\n try:\n sql = \"CREATE SCHEMA %s;\" % model_schema\n cursor.execute(sql)\n logger.info(\"Created schema %s\" % model_schema)\n except Exception:\n logger.debug(\"Schema %s already exists.\" % model_schema)\n\n for schema_name, tables in table_schemas.items():\n for table_name in tables:\n sql = \"SELECT 1 FROM information_schema.tables WHERE table_name=%s AND table_schema!=%s\"\n cursor.execute(sql, [table_name, schema_name])\n if cursor.fetchone():\n sql = \"ALTER TABLE %s SET SCHEMA %s;\" % (table_name, schema_name)\n cursor.execute(sql)\n logger.info(\"Moved %s to schema %s\" % (table_name, schema_name))\n\n # For Django, search_path is set in connection options.\n # But when accessing the database using QGis or ETL, search_path must be\n # set database level (for all users, and for this database only).\n if app_label == 'common':\n dbname = settings.DATABASES['default']['NAME']\n dbuser = settings.DATABASES['default']['USER']\n search_path = 'public,%s' % ','.join(set(settings.DATABASE_SCHEMAS.values()))\n sql = \"ALTER ROLE %s IN DATABASE %s SET search_path=%s;\" % (dbuser, 
dbname, search_path)\n cursor.execute(sql)\n", "path": "geotrek/common/utils/postgresql.py"}]}
| 1,901 | 235 |
gh_patches_debug_851
|
rasdani/github-patches
|
git_diff
|
Gallopsled__pwntools-1893
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'pwn cyclic -o afca' throws a BytesWarning
```
$ pwn cyclic -o afca
/Users/heapcrash/pwntools/pwnlib/commandline/cyclic.py:74: BytesWarning: Text is not bytes; assuming ASCII, no guarantees. See https://docs.pwntools.com/#bytes
pat = flat(pat, bytes=args.length)
506
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwnlib/commandline/cyclic.py`
Content:
```
1 #!/usr/bin/env python2
2 from __future__ import absolute_import
3 from __future__ import division
4
5 import argparse
6 import six
7 import string
8 import sys
9
10 import pwnlib.args
11 pwnlib.args.free_form = False
12
13 from pwn import *
14 from pwnlib.commandline import common
15
16 parser = common.parser_commands.add_parser(
17 'cyclic',
18 help = "Cyclic pattern creator/finder",
19 description = "Cyclic pattern creator/finder"
20 )
21
22 parser.add_argument(
23 '-a', '--alphabet',
24 metavar = 'alphabet',
25 default = string.ascii_lowercase.encode(),
26 type = packing._encode,
27 help = 'The alphabet to use in the cyclic pattern (defaults to all lower case letters)',
28 )
29
30 parser.add_argument(
31 '-n', '--length',
32 metavar = 'length',
33 default = 4,
34 type = int,
35 help = 'Size of the unique subsequences (defaults to 4).'
36 )
37
38 parser.add_argument(
39 '-c', '--context',
40 metavar = 'context',
41 action = 'append',
42 type = common.context_arg,
43 choices = common.choices,
44 help = 'The os/architecture/endianness/bits the shellcode will run in (default: linux/i386), choose from: %s' % common.choices,
45 )
46
47 group = parser.add_mutually_exclusive_group(required=False)
48 group.add_argument(
49 '-l', '-o', '--offset', '--lookup',
50 dest = 'lookup',
51 metavar = 'lookup_value',
52 help = 'Do a lookup instead printing the alphabet',
53 )
54
55 group.add_argument(
56 'count',
57 type=int,
58 nargs='?',
59 default=None,
60 help='Number of characters to print'
61 )
62
63 def main(args):
64 alphabet = args.alphabet
65 subsize = args.length
66
67 if args.lookup:
68 pat = args.lookup
69
70 try:
71 pat = int(pat, 0)
72 except ValueError:
73 pass
74 pat = flat(pat, bytes=args.length)
75
76 if len(pat) != subsize:
77 log.critical('Subpattern must be %d bytes' % subsize)
78 sys.exit(1)
79
80 if not all(c in alphabet for c in pat):
81 log.critical('Pattern contains characters not present in the alphabet')
82 sys.exit(1)
83
84 offset = cyclic_find(pat, alphabet, subsize)
85
86 if offset == -1:
87 log.critical('Given pattern does not exist in cyclic pattern')
88 sys.exit(1)
89 else:
90 print(offset)
91 else:
92 want = args.count
93 result = cyclic(want, alphabet, subsize)
94 got = len(result)
95 if want is not None and got < want:
96 log.failure("Alphabet too small (max length = %i)" % got)
97
98 out = getattr(sys.stdout, 'buffer', sys.stdout)
99 out.write(result)
100
101 if out.isatty():
102 out.write(b'\n')
103
104 if __name__ == '__main__':
105 pwnlib.commandline.common.main(__file__)
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwnlib/commandline/cyclic.py b/pwnlib/commandline/cyclic.py
--- a/pwnlib/commandline/cyclic.py
+++ b/pwnlib/commandline/cyclic.py
@@ -67,6 +67,9 @@
if args.lookup:
pat = args.lookup
+ if six.PY3:
+ pat = bytes(pat, encoding='utf-8')
+
try:
pat = int(pat, 0)
except ValueError:
|
{"golden_diff": "diff --git a/pwnlib/commandline/cyclic.py b/pwnlib/commandline/cyclic.py\n--- a/pwnlib/commandline/cyclic.py\n+++ b/pwnlib/commandline/cyclic.py\n@@ -67,6 +67,9 @@\n if args.lookup:\n pat = args.lookup\n \n+ if six.PY3:\n+ pat = bytes(pat, encoding='utf-8')\n+\n try:\n pat = int(pat, 0)\n except ValueError:\n", "issue": "'pwn cyclic -o afca' throws a BytesWarning\n\r\n```\r\n$ pwn cyclic -o afca\r\n/Users/heapcrash/pwntools/pwnlib/commandline/cyclic.py:74: BytesWarning: Text is not bytes; assuming ASCII, no guarantees. See https://docs.pwntools.com/#bytes\r\n pat = flat(pat, bytes=args.length)\r\n506\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python2\nfrom __future__ import absolute_import\nfrom __future__ import division\n\nimport argparse\nimport six\nimport string\nimport sys\n\nimport pwnlib.args\npwnlib.args.free_form = False\n\nfrom pwn import *\nfrom pwnlib.commandline import common\n\nparser = common.parser_commands.add_parser(\n 'cyclic',\n help = \"Cyclic pattern creator/finder\",\n description = \"Cyclic pattern creator/finder\"\n)\n\nparser.add_argument(\n '-a', '--alphabet',\n metavar = 'alphabet',\n default = string.ascii_lowercase.encode(),\n type = packing._encode,\n help = 'The alphabet to use in the cyclic pattern (defaults to all lower case letters)',\n)\n\nparser.add_argument(\n '-n', '--length',\n metavar = 'length',\n default = 4,\n type = int,\n help = 'Size of the unique subsequences (defaults to 4).'\n)\n\nparser.add_argument(\n '-c', '--context',\n metavar = 'context',\n action = 'append',\n type = common.context_arg,\n choices = common.choices,\n help = 'The os/architecture/endianness/bits the shellcode will run in (default: linux/i386), choose from: %s' % common.choices,\n)\n\ngroup = parser.add_mutually_exclusive_group(required=False)\ngroup.add_argument(\n '-l', '-o', '--offset', '--lookup',\n dest = 'lookup',\n metavar = 'lookup_value',\n help = 'Do a lookup instead printing the alphabet',\n)\n\ngroup.add_argument(\n 'count',\n type=int,\n nargs='?',\n default=None,\n help='Number of characters to print'\n)\n\ndef main(args):\n alphabet = args.alphabet\n subsize = args.length\n\n if args.lookup:\n pat = args.lookup\n\n try:\n pat = int(pat, 0)\n except ValueError:\n pass\n pat = flat(pat, bytes=args.length)\n\n if len(pat) != subsize:\n log.critical('Subpattern must be %d bytes' % subsize)\n sys.exit(1)\n\n if not all(c in alphabet for c in pat):\n log.critical('Pattern contains characters not present in the alphabet')\n sys.exit(1)\n\n offset = cyclic_find(pat, alphabet, subsize)\n\n if offset == -1:\n log.critical('Given pattern does not exist in cyclic pattern')\n sys.exit(1)\n else:\n print(offset)\n else:\n want = args.count\n result = cyclic(want, alphabet, subsize)\n got = len(result)\n if want is not None and got < want:\n log.failure(\"Alphabet too small (max length = %i)\" % got)\n\n out = getattr(sys.stdout, 'buffer', sys.stdout)\n out.write(result)\n\n if out.isatty():\n out.write(b'\\n')\n\nif __name__ == '__main__':\n pwnlib.commandline.common.main(__file__)\n", "path": "pwnlib/commandline/cyclic.py"}], "after_files": [{"content": "#!/usr/bin/env python2\nfrom __future__ import absolute_import\nfrom __future__ import division\n\nimport argparse\nimport six\nimport string\nimport sys\n\nimport pwnlib.args\npwnlib.args.free_form = False\n\nfrom pwn import *\nfrom pwnlib.commandline import common\n\nparser = common.parser_commands.add_parser(\n 'cyclic',\n help = \"Cyclic pattern creator/finder\",\n description = 
\"Cyclic pattern creator/finder\"\n)\n\nparser.add_argument(\n '-a', '--alphabet',\n metavar = 'alphabet',\n default = string.ascii_lowercase.encode(),\n type = packing._encode,\n help = 'The alphabet to use in the cyclic pattern (defaults to all lower case letters)',\n)\n\nparser.add_argument(\n '-n', '--length',\n metavar = 'length',\n default = 4,\n type = int,\n help = 'Size of the unique subsequences (defaults to 4).'\n)\n\nparser.add_argument(\n '-c', '--context',\n metavar = 'context',\n action = 'append',\n type = common.context_arg,\n choices = common.choices,\n help = 'The os/architecture/endianness/bits the shellcode will run in (default: linux/i386), choose from: %s' % common.choices,\n)\n\ngroup = parser.add_mutually_exclusive_group(required=False)\ngroup.add_argument(\n '-l', '-o', '--offset', '--lookup',\n dest = 'lookup',\n metavar = 'lookup_value',\n help = 'Do a lookup instead printing the alphabet',\n)\n\ngroup.add_argument(\n 'count',\n type=int,\n nargs='?',\n default=None,\n help='Number of characters to print'\n)\n\ndef main(args):\n alphabet = args.alphabet\n subsize = args.length\n\n if args.lookup:\n pat = args.lookup\n\n if six.PY3:\n pat = bytes(pat, encoding='utf-8')\n\n try:\n pat = int(pat, 0)\n except ValueError:\n pass\n pat = flat(pat, bytes=args.length)\n\n if len(pat) != subsize:\n log.critical('Subpattern must be %d bytes' % subsize)\n sys.exit(1)\n\n if not all(c in alphabet for c in pat):\n log.critical('Pattern contains characters not present in the alphabet')\n sys.exit(1)\n\n offset = cyclic_find(pat, alphabet, subsize)\n\n if offset == -1:\n log.critical('Given pattern does not exist in cyclic pattern')\n sys.exit(1)\n else:\n print(offset)\n else:\n want = args.count\n result = cyclic(want, alphabet, subsize)\n got = len(result)\n if want is not None and got < want:\n log.failure(\"Alphabet too small (max length = %i)\" % got)\n\n out = getattr(sys.stdout, 'buffer', sys.stdout)\n out.write(result)\n\n if out.isatty():\n out.write(b'\\n')\n\nif __name__ == '__main__':\n pwnlib.commandline.common.main(__file__)\n", "path": "pwnlib/commandline/cyclic.py"}]}
| 1,217 | 105 |
gh_patches_debug_14472
|
rasdani/github-patches
|
git_diff
|
mytardis__mytardis-1507
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NPM_FILE_PATTERNS - collectstatic failing on Windows
MyTardis currently uses `django-npm` to collect static content which has been npm installed into the `node_modules/` folder.
Because we don't necessarily want to copy everything from `node_modules/` when running `collectstatic`, MyTardis uses django-npm's `NPM_FILE_PATTERNS` setting in `tardis/default_settings/static_files.py`
See: https://github.com/kevin1024/django-npm#configuration
It can be used like this:
```
NPM_FILE_PATTERNS = {
'jquery': ['*'],
'jquery-migrate': ['*'],
}
```
to copy everything within `node_modules/jquery/` and everything within `node_modules/jquery-migrate/` into the static folder written to by `collectstatic`.
If you only want `collectstatic` to copy a subset of the files in the node_modules folder, `django-npm` provides the ability to use glob patterns like this:
```
NPM_FILE_PATTERNS = {
'bootstrap': ['dist/*'],
'font-awesome': ['css/*', 'fonts/*'],
}
```
However, these glob patterns don't seem to work on Windows, i.e. nothing is copied from the `node_modules` folders which have glob patterns more complex than `['*']`, see: https://github.com/kevin1024/django-npm/issues/15
A workaround (when running MyTardis's `collectstatic` on Windows) is to redefine `NPM_FILE_PATTERNS` in your `tardis/settings.py`, and just use the `['*']` pattern for every node module you want to be copied by `collectstatic`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tardis/default_settings/static_files.py`
Content:
```
1 from os import path
2 from .storage import DEFAULT_STORAGE_BASE_DIR
3
4 # Absolute path to the directory that holds media.
5 # Example: "/home/media/media.lawrence.com/"
6 MEDIA_ROOT = DEFAULT_STORAGE_BASE_DIR
7
8 # Used by "django collectstatic"
9 STATIC_ROOT = path.abspath(path.join(path.dirname(__file__), '../..', 'static'))
10
11 # Use cachable copies of static files
12 STATICFILES_STORAGE = \
13 'django.contrib.staticfiles.storage.CachedStaticFilesStorage'
14
15 STATICFILES_FINDERS = (
16 'django.contrib.staticfiles.finders.FileSystemFinder',
17 'django.contrib.staticfiles.finders.AppDirectoriesFinder',
18 'npm.finders.NpmFinder',
19 )
20
21 # django-npm settings:
22 NPM_ROOT_PATH = path.abspath(path.join(path.dirname(__file__), '../..'))
23
24 # If you have run "npm install", rather than "npm install --production",
25 # you will get a lot of devDependencies installed in node_modules/ which
26 # are only needed for development/testing (e.g. "npm test") and don't
27 # need to be copied when running collectstatic. NPM_FILE_PATTERNS
28 # specifies the folders within node_modules/ which do need to be copied:
29 NPM_FILE_PATTERNS = {
30 'angular': ['*'],
31 'angular-resource': ['*'],
32 'backbone': ['*'],
33 'backbone-forms': ['*'],
34 'blueimp-file-upload': ['*'],
35 'bootstrap': ['dist/*'],
36 'bootstrap-3-typeahead': ['*'],
37 'clipboard': ['*'],
38 'font-awesome': ['css/*', 'fonts/*'],
39 'jquery': ['*'],
40 'jquery-migrate': ['*'],
41 'jquery-ui-dist': ['jquery-ui.min.js'],
42 'mustache': ['mustache.min.js'],
43 'ng-dialog': ['*'],
44 'sprintf-js': ['dist/*'],
45 'underscore': ['*'],
46 'underscore.string': ['dist/*']
47 }
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tardis/default_settings/static_files.py b/tardis/default_settings/static_files.py
--- a/tardis/default_settings/static_files.py
+++ b/tardis/default_settings/static_files.py
@@ -32,16 +32,16 @@
'backbone': ['*'],
'backbone-forms': ['*'],
'blueimp-file-upload': ['*'],
- 'bootstrap': ['dist/*'],
+ 'bootstrap': ['*'],
'bootstrap-3-typeahead': ['*'],
'clipboard': ['*'],
- 'font-awesome': ['css/*', 'fonts/*'],
+ 'font-awesome': ['*'],
'jquery': ['*'],
'jquery-migrate': ['*'],
'jquery-ui-dist': ['jquery-ui.min.js'],
'mustache': ['mustache.min.js'],
'ng-dialog': ['*'],
- 'sprintf-js': ['dist/*'],
+ 'sprintf-js': ['*'],
'underscore': ['*'],
- 'underscore.string': ['dist/*']
+ 'underscore.string': ['*']
}
|
{"golden_diff": "diff --git a/tardis/default_settings/static_files.py b/tardis/default_settings/static_files.py\n--- a/tardis/default_settings/static_files.py\n+++ b/tardis/default_settings/static_files.py\n@@ -32,16 +32,16 @@\n 'backbone': ['*'],\n 'backbone-forms': ['*'],\n 'blueimp-file-upload': ['*'],\n- 'bootstrap': ['dist/*'],\n+ 'bootstrap': ['*'],\n 'bootstrap-3-typeahead': ['*'],\n 'clipboard': ['*'],\n- 'font-awesome': ['css/*', 'fonts/*'],\n+ 'font-awesome': ['*'],\n 'jquery': ['*'],\n 'jquery-migrate': ['*'],\n 'jquery-ui-dist': ['jquery-ui.min.js'],\n 'mustache': ['mustache.min.js'],\n 'ng-dialog': ['*'],\n- 'sprintf-js': ['dist/*'],\n+ 'sprintf-js': ['*'],\n 'underscore': ['*'],\n- 'underscore.string': ['dist/*']\n+ 'underscore.string': ['*']\n }\n", "issue": "NPM_FILE_PATTERNS - collectstatic failing on Windows\nMyTardis currently uses `django-npm` to collect static content which has been npm installed into the `node_modules/` folder.\r\n\r\nBecause we don't necessarily want to copy everything from `node_modules/` when running `collectstatic`, MyTardis uses django-npm's `NPM_FILE_PATTERNS` setting in `tardis/default_settings/static_files.py`\r\n\r\nSee: https://github.com/kevin1024/django-npm#configuration\r\n\r\nIt can be used like this:\r\n\r\n```\r\nNPM_FILE_PATTERNS = {\r\n 'jquery': ['*'],\r\n 'jquery-migrate': ['*'],\r\n}\r\n```\r\n\r\nto copy everything within `node_modules/jquery/` and everything within `node_modules/jquery-migrate/` into the static folder written to by `collectstatic`.\r\n\r\nIf you only want `collectstatic` to copy a subset of the files in the node_modules folder, `django-npm` provides the ability to use glob patterns like this:\r\n\r\n```\r\nNPM_FILE_PATTERNS = {\r\n 'bootstrap': ['dist/*'],\r\n 'font-awesome': ['css/*', 'fonts/*'],\r\n}\r\n```\r\n\r\nHowever, these glob patterns don't seem to work on Windows, i.e. nothing is copied from the `node_modules` folders which have glob patterns more complex than `['*']`, see: https://github.com/kevin1024/django-npm/issues/15\r\n\r\nA workaround (when running MyTardis's `collectstatic` on Windows) is to redefine `NPM_FILE_PATTERNS` in your `tardis/settings.py`, and just use the `['*']` pattern for every node module you want to be copied by `collectstatic`.\n", "before_files": [{"content": "from os import path\nfrom .storage import DEFAULT_STORAGE_BASE_DIR\n\n# Absolute path to the directory that holds media.\n# Example: \"/home/media/media.lawrence.com/\"\nMEDIA_ROOT = DEFAULT_STORAGE_BASE_DIR\n\n# Used by \"django collectstatic\"\nSTATIC_ROOT = path.abspath(path.join(path.dirname(__file__), '../..', 'static'))\n\n# Use cachable copies of static files\nSTATICFILES_STORAGE = \\\n 'django.contrib.staticfiles.storage.CachedStaticFilesStorage'\n\nSTATICFILES_FINDERS = (\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n 'npm.finders.NpmFinder',\n)\n\n# django-npm settings:\nNPM_ROOT_PATH = path.abspath(path.join(path.dirname(__file__), '../..'))\n\n# If you have run \"npm install\", rather than \"npm install --production\",\n# you will get a lot of devDependencies installed in node_modules/ which\n# are only needed for development/testing (e.g. \"npm test\") and don't\n# need to be copied when running collectstatic. 
NPM_FILE_PATTERNS\n# specifies the folders within node_modules/ which do need to be copied:\nNPM_FILE_PATTERNS = {\n 'angular': ['*'],\n 'angular-resource': ['*'],\n 'backbone': ['*'],\n 'backbone-forms': ['*'],\n 'blueimp-file-upload': ['*'],\n 'bootstrap': ['dist/*'],\n 'bootstrap-3-typeahead': ['*'],\n 'clipboard': ['*'],\n 'font-awesome': ['css/*', 'fonts/*'],\n 'jquery': ['*'],\n 'jquery-migrate': ['*'],\n 'jquery-ui-dist': ['jquery-ui.min.js'],\n 'mustache': ['mustache.min.js'],\n 'ng-dialog': ['*'],\n 'sprintf-js': ['dist/*'],\n 'underscore': ['*'],\n 'underscore.string': ['dist/*']\n}\n", "path": "tardis/default_settings/static_files.py"}], "after_files": [{"content": "from os import path\nfrom .storage import DEFAULT_STORAGE_BASE_DIR\n\n# Absolute path to the directory that holds media.\n# Example: \"/home/media/media.lawrence.com/\"\nMEDIA_ROOT = DEFAULT_STORAGE_BASE_DIR\n\n# Used by \"django collectstatic\"\nSTATIC_ROOT = path.abspath(path.join(path.dirname(__file__), '../..', 'static'))\n\n# Use cachable copies of static files\nSTATICFILES_STORAGE = \\\n 'django.contrib.staticfiles.storage.CachedStaticFilesStorage'\n\nSTATICFILES_FINDERS = (\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n 'npm.finders.NpmFinder',\n)\n\n# django-npm settings:\nNPM_ROOT_PATH = path.abspath(path.join(path.dirname(__file__), '../..'))\n\n# If you have run \"npm install\", rather than \"npm install --production\",\n# you will get a lot of devDependencies installed in node_modules/ which\n# are only needed for development/testing (e.g. \"npm test\") and don't\n# need to be copied when running collectstatic. NPM_FILE_PATTERNS\n# specifies the folders within node_modules/ which do need to be copied:\nNPM_FILE_PATTERNS = {\n 'angular': ['*'],\n 'angular-resource': ['*'],\n 'backbone': ['*'],\n 'backbone-forms': ['*'],\n 'blueimp-file-upload': ['*'],\n 'bootstrap': ['*'],\n 'bootstrap-3-typeahead': ['*'],\n 'clipboard': ['*'],\n 'font-awesome': ['*'],\n 'jquery': ['*'],\n 'jquery-migrate': ['*'],\n 'jquery-ui-dist': ['jquery-ui.min.js'],\n 'mustache': ['mustache.min.js'],\n 'ng-dialog': ['*'],\n 'sprintf-js': ['*'],\n 'underscore': ['*'],\n 'underscore.string': ['*']\n}\n", "path": "tardis/default_settings/static_files.py"}]}
| 1,135 | 237 |
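The workaround described in the issue above (redefining `NPM_FILE_PATTERNS` in `tardis/settings.py` with `['*']` for every package) is only spelled out in words; a minimal sketch of that override follows. The import path is an assumption; only the catch-all `['*']` pattern itself comes from the issue.

```python
# Hypothetical local settings override (module path assumed): the one detail
# taken from the issue is that every package should use the '*' pattern so
# collectstatic on Windows does not silently skip the narrower globs.
from tardis.default_settings.static_files import NPM_FILE_PATTERNS as _DEFAULTS

# Replace every per-package glob list with the catch-all pattern.
NPM_FILE_PATTERNS = {pkg: ['*'] for pkg in _DEFAULTS}
```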
gh_patches_debug_13235
|
rasdani/github-patches
|
git_diff
|
kivy__python-for-android-618
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jpeg recipe is broken
It is missing /home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/recipes/jpeg/Application.mk
Perpaps just the path is incorrect?
```
[INFO]: Prebuilding jpeg for armeabi
[INFO]: jpeg has no prebuild_armeabi, skipping
[DEBUG]: -> running cp /home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/recipes/jpeg/Application.mk /home/brussee/.local/share/python-for-android/build/other_builds/jpeg/armeabi/jpeg/Application.mk
[DEBUG]: /bin/cp: cannot stat ‘/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/recipes/jpeg/Application.mk’: No such file or directory
Traceback (most recent call last):
File "/home/brussee/.local/bin/p4a", line 9, in <module>
load_entry_point('python-for-android==0.3', 'console_scripts', 'p4a')()
File "/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/toolchain.py", line 708, in main
ToolchainCL()
File "/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/toolchain.py", line 323, in __init__
getattr(self, args.command)(unknown)
File "/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/toolchain.py", line 105, in wrapper_func
build_dist_from_args(ctx, dist, dist_args)
File "/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/toolchain.py", line 142, in build_dist_from_args
build_recipes(build_order, python_modules, ctx)
File "/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/build.py", line 543, in build_recipes
recipe.prebuild_arch(arch)
File "/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/recipes/jpeg/__init__.py", line 22, in prebuild_arch
shprint(sh.cp, join(self.get_recipe_dir(), 'Application.mk'), app_mk)
File "/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/logger.py", line 160, in shprint
for line in output:
File "/home/brussee/.local/lib/python2.7/site-packages/sh.py", line 565, in next
self.wait()
File "/home/brussee/.local/lib/python2.7/site-packages/sh.py", line 500, in wait
self.handle_command_exit_code(exit_code)
File "/home/brussee/.local/lib/python2.7/site-packages/sh.py", line 516, in handle_command_exit_code
raise exc(self.ran, self.process.stdout, self.process.stderr)
sh.ErrorReturnCode_1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1
2 from setuptools import setup, find_packages
3 from os import walk
4 from os.path import join, dirname, sep
5 import os
6 import glob
7
8 # NOTE: All package data should also be set in MANIFEST.in
9
10 packages = find_packages()
11
12 package_data = {'': ['*.tmpl',
13 '*.patch', ], }
14
15 data_files = []
16
17 # By specifying every file manually, package_data will be able to
18 # include them in binary distributions. Note that we have to add
19 # everything as a 'pythonforandroid' rule, using '' apparently doesn't
20 # work.
21 def recursively_include(results, directory, patterns):
22 for root, subfolders, files in walk(directory):
23 for fn in files:
24 if not any([glob.fnmatch.fnmatch(fn, pattern) for pattern in patterns]):
25 continue
26 filename = join(root, fn)
27 directory = 'pythonforandroid'
28 if directory not in results:
29 results[directory] = []
30 results[directory].append(join(*filename.split(sep)[1:]))
31
32 recursively_include(package_data, 'pythonforandroid/recipes',
33 ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h', ])
34 recursively_include(package_data, 'pythonforandroid/bootstraps',
35 ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',
36 '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])
37 recursively_include(package_data, 'pythonforandroid/bootstraps',
38 ['sdl-config', ])
39 recursively_include(package_data, 'pythonforandroid',
40 ['liblink', 'biglink', 'liblink.sh'])
41
42 setup(name='python-for-android',
43 version='0.3',
44 description='Android APK packager for Python scripts and apps',
45 author='The Kivy team',
46 author_email='[email protected]',
47 url='https://github.com/kivy/python-for-android',
48 license='MIT',
49 install_requires=['appdirs', 'colorama>0.3', 'sh', 'jinja2', 'argparse',
50 'six'],
51 entry_points={
52 'console_scripts': [
53 'python-for-android = pythonforandroid.toolchain:main',
54 'p4a = pythonforandroid.toolchain:main',
55 ],
56 'distutils.commands': [
57 'bdist_apk = pythonforandroid.bdist_apk:BdistAPK',
58 ],
59 },
60 classifiers = [
61 'Development Status :: 3 - Alpha',
62 'Intended Audience :: Developers',
63 'License :: OSI Approved :: MIT License',
64 'Operating System :: Microsoft :: Windows',
65 'Operating System :: OS Independent',
66 'Operating System :: POSIX :: Linux',
67 'Operating System :: MacOS :: MacOS X',
68 'Programming Language :: C',
69 'Programming Language :: Python :: 2',
70 'Programming Language :: Python :: 3',
71 'Topic :: Software Development',
72 'Topic :: Utilities',
73 ],
74 packages=packages,
75 package_data=package_data,
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,8 @@
results[directory].append(join(*filename.split(sep)[1:]))
recursively_include(package_data, 'pythonforandroid/recipes',
- ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h', ])
+ ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',
+ '*.mk', ])
recursively_include(package_data, 'pythonforandroid/bootstraps',
['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',
'*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,7 +30,8 @@\n results[directory].append(join(*filename.split(sep)[1:]))\n \n recursively_include(package_data, 'pythonforandroid/recipes',\n- ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h', ])\n+ ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',\n+ '*.mk', ])\n recursively_include(package_data, 'pythonforandroid/bootstraps',\n ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',\n '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])\n", "issue": "Jpeg recipe is broken\nIt is missing /home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/recipes/jpeg/Application.mk\nPerpaps just the path is incorrect?\n\n```\n[INFO]: Prebuilding jpeg for armeabi\n[INFO]: jpeg has no prebuild_armeabi, skipping\n[DEBUG]: -> running cp /home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/recipes/jpeg/Application.mk /home/brussee/.local/share/python-for-android/build/other_builds/jpeg/armeabi/jpeg/Application.mk\n[DEBUG]: /bin/cp: cannot stat \u2018/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/recipes/jpeg/Application.mk\u2019: No such file or directory\nTraceback (most recent call last):\n File \"/home/brussee/.local/bin/p4a\", line 9, in <module>\n load_entry_point('python-for-android==0.3', 'console_scripts', 'p4a')()\n File \"/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/toolchain.py\", line 708, in main\n ToolchainCL()\n File \"/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/toolchain.py\", line 323, in __init__\n getattr(self, args.command)(unknown)\n File \"/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/toolchain.py\", line 105, in wrapper_func\n build_dist_from_args(ctx, dist, dist_args)\n File \"/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/toolchain.py\", line 142, in build_dist_from_args\n build_recipes(build_order, python_modules, ctx)\n File \"/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/build.py\", line 543, in build_recipes\n recipe.prebuild_arch(arch)\n File \"/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/recipes/jpeg/__init__.py\", line 22, in prebuild_arch\n shprint(sh.cp, join(self.get_recipe_dir(), 'Application.mk'), app_mk)\n File \"/home/brussee/.local/lib/python2.7/site-packages/pythonforandroid/logger.py\", line 160, in shprint\n for line in output:\n File \"/home/brussee/.local/lib/python2.7/site-packages/sh.py\", line 565, in next\n self.wait()\n File \"/home/brussee/.local/lib/python2.7/site-packages/sh.py\", line 500, in wait\n self.handle_command_exit_code(exit_code)\n File \"/home/brussee/.local/lib/python2.7/site-packages/sh.py\", line 516, in handle_command_exit_code\n raise exc(self.ran, self.process.stdout, self.process.stderr)\nsh.ErrorReturnCode_1\n```\n\n", "before_files": [{"content": "\nfrom setuptools import setup, find_packages\nfrom os import walk\nfrom os.path import join, dirname, sep\nimport os\nimport glob\n\n# NOTE: All package data should also be set in MANIFEST.in\n\npackages = find_packages()\n\npackage_data = {'': ['*.tmpl',\n '*.patch', ], }\n\ndata_files = []\n\n# By specifying every file manually, package_data will be able to\n# include them in binary distributions. 
Note that we have to add\n# everything as a 'pythonforandroid' rule, using '' apparently doesn't\n# work.\ndef recursively_include(results, directory, patterns):\n for root, subfolders, files in walk(directory):\n for fn in files:\n if not any([glob.fnmatch.fnmatch(fn, pattern) for pattern in patterns]):\n continue\n filename = join(root, fn)\n directory = 'pythonforandroid'\n if directory not in results:\n results[directory] = []\n results[directory].append(join(*filename.split(sep)[1:]))\n\nrecursively_include(package_data, 'pythonforandroid/recipes',\n ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',\n '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['sdl-config', ])\nrecursively_include(package_data, 'pythonforandroid',\n ['liblink', 'biglink', 'liblink.sh'])\n\nsetup(name='python-for-android',\n version='0.3',\n description='Android APK packager for Python scripts and apps',\n author='The Kivy team',\n author_email='[email protected]',\n url='https://github.com/kivy/python-for-android', \n license='MIT', \n install_requires=['appdirs', 'colorama>0.3', 'sh', 'jinja2', 'argparse',\n 'six'],\n entry_points={\n 'console_scripts': [\n 'python-for-android = pythonforandroid.toolchain:main',\n 'p4a = pythonforandroid.toolchain:main',\n ],\n 'distutils.commands': [\n 'bdist_apk = pythonforandroid.bdist_apk:BdistAPK',\n ],\n },\n classifiers = [\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: OS Independent',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: C',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Topic :: Software Development',\n 'Topic :: Utilities',\n ],\n packages=packages,\n package_data=package_data,\n )\n", "path": "setup.py"}], "after_files": [{"content": "\nfrom setuptools import setup, find_packages\nfrom os import walk\nfrom os.path import join, dirname, sep\nimport os\nimport glob\n\n# NOTE: All package data should also be set in MANIFEST.in\n\npackages = find_packages()\n\npackage_data = {'': ['*.tmpl',\n '*.patch', ], }\n\ndata_files = []\n\n# By specifying every file manually, package_data will be able to\n# include them in binary distributions. 
Note that we have to add\n# everything as a 'pythonforandroid' rule, using '' apparently doesn't\n# work.\ndef recursively_include(results, directory, patterns):\n for root, subfolders, files in walk(directory):\n for fn in files:\n if not any([glob.fnmatch.fnmatch(fn, pattern) for pattern in patterns]):\n continue\n filename = join(root, fn)\n directory = 'pythonforandroid'\n if directory not in results:\n results[directory] = []\n results[directory].append(join(*filename.split(sep)[1:]))\n\nrecursively_include(package_data, 'pythonforandroid/recipes',\n ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',\n '*.mk', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',\n '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['sdl-config', ])\nrecursively_include(package_data, 'pythonforandroid',\n ['liblink', 'biglink', 'liblink.sh'])\n\nsetup(name='python-for-android',\n version='0.3',\n description='Android APK packager for Python scripts and apps',\n author='The Kivy team',\n author_email='[email protected]',\n url='https://github.com/kivy/python-for-android', \n license='MIT', \n install_requires=['appdirs', 'colorama>0.3', 'sh', 'jinja2', 'argparse',\n 'six'],\n entry_points={\n 'console_scripts': [\n 'python-for-android = pythonforandroid.toolchain:main',\n 'p4a = pythonforandroid.toolchain:main',\n ],\n 'distutils.commands': [\n 'bdist_apk = pythonforandroid.bdist_apk:BdistAPK',\n ],\n },\n classifiers = [\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: OS Independent',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: C',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Topic :: Software Development',\n 'Topic :: Utilities',\n ],\n packages=packages,\n package_data=package_data,\n )\n", "path": "setup.py"}]}
| 1,709 | 167 |
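A small standalone check of the fix above: it mirrors the `recursively_include()` logic from the `setup.py` shown in this record rather than importing it (importing `setup.py` would execute `setup()`), and it assumes it is run from the root of a python-for-android source checkout.

```python
# Mirrors setup.py's recursively_include() to confirm that adding '*.mk' to the
# recipe patterns collects pythonforandroid/recipes/jpeg/Application.mk.
import fnmatch
from os import walk
from os.path import join, sep


def collect(directory, patterns):
    collected = []
    for root, _dirs, files in walk(directory):
        for fn in files:
            if any(fnmatch.fnmatch(fn, pattern) for pattern in patterns):
                # Same path rewrite as setup.py: drop the leading 'pythonforandroid'.
                collected.append(join(*join(root, fn).split(sep)[1:]))
    return collected


files = collect('pythonforandroid/recipes',
                ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h', '*.mk'])
print([f for f in files if f.endswith('Application.mk')])
# Expected to include 'recipes/jpeg/Application.mk' once '*.mk' is in the list.
```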
gh_patches_debug_5623
|
rasdani/github-patches
|
git_diff
|
spack__spack-3415
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bison doesn't find m4 at run time
While building `flex`, I had `bison` fail because it could not execute `m4`. The reason was that I had uninstalled the `m4` package (via Spack) which `bison` installed as its build dependency. Then, `bison` failed since this `m4` executable did not exist any more.
I think `m4` needs to be a run-time dependency of `bison` as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/autoconf/package.py`
Content:
```
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the LICENSE file for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 from spack import *
26
27
28 class Autoconf(AutotoolsPackage):
29 """Autoconf -- system configuration part of autotools"""
30
31 homepage = 'https://www.gnu.org/software/autoconf/'
32 url = 'http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz'
33
34 version('2.69', '82d05e03b93e45f5a39b828dc9c6c29b')
35 version('2.62', '6c1f3b3734999035d77da5024aab4fbd')
36 version('2.59', 'd4d45eaa1769d45e59dcb131a4af17a0')
37 version('2.13', '9de56d4a161a723228220b0f425dc711')
38
39 depends_on('[email protected]:', type='build')
40
41 build_directory = 'spack-build'
42
43 def _make_executable(self, name):
44 return Executable(join_path(self.prefix.bin, name))
45
46 def setup_dependent_package(self, module, dependent_spec):
47 # Autoconf is very likely to be a build dependency,
48 # so we add the tools it provides to the dependent module
49 executables = ['autoconf',
50 'autoheader',
51 'autom4te',
52 'autoreconf',
53 'autoscan',
54 'autoupdate',
55 'ifnames']
56 for name in executables:
57 setattr(module, name, self._make_executable(name))
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/var/spack/repos/builtin/packages/autoconf/package.py b/var/spack/repos/builtin/packages/autoconf/package.py
--- a/var/spack/repos/builtin/packages/autoconf/package.py
+++ b/var/spack/repos/builtin/packages/autoconf/package.py
@@ -36,7 +36,9 @@
version('2.59', 'd4d45eaa1769d45e59dcb131a4af17a0')
version('2.13', '9de56d4a161a723228220b0f425dc711')
- depends_on('[email protected]:', type='build')
+ # Note: m4 is not a pure build-time dependency of autoconf. m4 is
+ # needed when autoconf runs, not only when autoconf is built.
+ depends_on('[email protected]:', type=('build', 'run'))
build_directory = 'spack-build'
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/autoconf/package.py b/var/spack/repos/builtin/packages/autoconf/package.py\n--- a/var/spack/repos/builtin/packages/autoconf/package.py\n+++ b/var/spack/repos/builtin/packages/autoconf/package.py\n@@ -36,7 +36,9 @@\n version('2.59', 'd4d45eaa1769d45e59dcb131a4af17a0')\n version('2.13', '9de56d4a161a723228220b0f425dc711')\n \n- depends_on('[email protected]:', type='build')\n+ # Note: m4 is not a pure build-time dependency of autoconf. m4 is\n+ # needed when autoconf runs, not only when autoconf is built.\n+ depends_on('[email protected]:', type=('build', 'run'))\n \n build_directory = 'spack-build'\n", "issue": "bison doesn't find m4 at run time\nWhile building `flex`, I had `bison` fail because it could not execute `m4`. The reason was that I had uninstalled the `m4` package (via Spack) which `bison` installed as its build dependency. Then, `bison` failed since this `m4` executable did not exist any more.\r\n\r\nI think `m4` needs to be a run-time dependency of `bison` as well.\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass Autoconf(AutotoolsPackage):\n \"\"\"Autoconf -- system configuration part of autotools\"\"\"\n\n homepage = 'https://www.gnu.org/software/autoconf/'\n url = 'http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz'\n\n version('2.69', '82d05e03b93e45f5a39b828dc9c6c29b')\n version('2.62', '6c1f3b3734999035d77da5024aab4fbd')\n version('2.59', 'd4d45eaa1769d45e59dcb131a4af17a0')\n version('2.13', '9de56d4a161a723228220b0f425dc711')\n\n depends_on('[email protected]:', type='build')\n\n build_directory = 'spack-build'\n\n def _make_executable(self, name):\n return Executable(join_path(self.prefix.bin, name))\n\n def setup_dependent_package(self, module, dependent_spec):\n # Autoconf is very likely to be a build dependency,\n # so we add the tools it provides to the dependent module\n executables = ['autoconf',\n 'autoheader',\n 'autom4te',\n 'autoreconf',\n 'autoscan',\n 'autoupdate',\n 'ifnames']\n for name in executables:\n setattr(module, name, self._make_executable(name))\n", "path": "var/spack/repos/builtin/packages/autoconf/package.py"}], "after_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass Autoconf(AutotoolsPackage):\n \"\"\"Autoconf -- system configuration part of autotools\"\"\"\n\n homepage = 'https://www.gnu.org/software/autoconf/'\n url = 'http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz'\n\n version('2.69', '82d05e03b93e45f5a39b828dc9c6c29b')\n version('2.62', '6c1f3b3734999035d77da5024aab4fbd')\n version('2.59', 'd4d45eaa1769d45e59dcb131a4af17a0')\n version('2.13', '9de56d4a161a723228220b0f425dc711')\n\n # Note: m4 is not a pure build-time dependency of autoconf. 
m4 is\n # needed when autoconf runs, not only when autoconf is built.\n depends_on('[email protected]:', type=('build', 'run'))\n\n build_directory = 'spack-build'\n\n def _make_executable(self, name):\n return Executable(join_path(self.prefix.bin, name))\n\n def setup_dependent_package(self, module, dependent_spec):\n # Autoconf is very likely to be a build dependency,\n # so we add the tools it provides to the dependent module\n executables = ['autoconf',\n 'autoheader',\n 'autom4te',\n 'autoreconf',\n 'autoscan',\n 'autoupdate',\n 'ifnames']\n for name in executables:\n setattr(module, name, self._make_executable(name))\n", "path": "var/spack/repos/builtin/packages/autoconf/package.py"}]}
| 1,138 | 237 |
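The golden diff above patches the autoconf recipe, while the issue itself was triggered through bison. The sketch below shows what the analogous change could look like in a bison-style recipe; it illustrates the `type=('build', 'run')` form and is not the actual bison package shipped with Spack.

```python
from spack import *


class Bison(AutotoolsPackage):
    """Illustrative excerpt only -- not the real Spack bison recipe."""

    homepage = "https://www.gnu.org/software/bison/"

    # The installed bison executable shells out to m4 at run time, so the
    # dependency must survive after the build, not only during it.
    depends_on('m4', type=('build', 'run'))
```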
gh_patches_debug_13432
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-936
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Plots won't show up
Hi everyone,
using nilearn on OSX El Capitan, when executing the example scripts like plot_demo_glass_brain.py no plots will show up. PiP and iPython linked to the same folders, matplotlib alone does show plots. All dependencies are up-to-date and installed. Don't really know how to fix this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nilearn/plotting/__init__.py`
Content:
```
1 """
2 Plotting code for nilearn
3 """
4 # Authors: Chris Filo Gorgolewski, Gael Varoquaux
5
6 ###############################################################################
7 # Make sure that we don't get DISPLAY problems when running without X on
8 # unices
9 def _set_mpl_backend():
10 try:
11 # We are doing local imports here to avoid poluting our namespace
12 import matplotlib
13 import os
14 # Set the backend to a non-interactive one for unices without X
15 if os.name == 'posix' and 'DISPLAY' not in os.environ:
16 matplotlib.use('Agg')
17 except ImportError:
18 from .._utils.testing import skip_if_running_nose
19 # No need to fail when running tests
20 skip_if_running_nose('matplotlib not installed')
21 raise
22 else:
23 from ..version import (_import_module_with_version_check,
24 OPTIONAL_MATPLOTLIB_MIN_VERSION)
25 # When matplotlib was successfully imported we need to check
26 # that the version is greater that the minimum required one
27 _import_module_with_version_check('matplotlib',
28 OPTIONAL_MATPLOTLIB_MIN_VERSION)
29
30 _set_mpl_backend()
31
32 ###############################################################################
33
34 from . import cm
35 from .img_plotting import plot_img, plot_anat, plot_epi, \
36 plot_roi, plot_stat_map, plot_glass_brain, plot_connectome, \
37 plot_prob_atlas, show
38 from .find_cuts import find_xyz_cut_coords, find_cut_slices
39
40 __all__ = ['cm', 'plot_img', 'plot_anat', 'plot_epi',
41 'plot_roi', 'plot_stat_map', 'plot_glass_brain',
42 'plot_connectome', 'plot_prob_atlas',
43 'find_xyz_cut_coords', 'find_cut_slices',
44 'show']
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nilearn/plotting/__init__.py b/nilearn/plotting/__init__.py
--- a/nilearn/plotting/__init__.py
+++ b/nilearn/plotting/__init__.py
@@ -11,8 +11,12 @@
# We are doing local imports here to avoid poluting our namespace
import matplotlib
import os
+ import sys
# Set the backend to a non-interactive one for unices without X
- if os.name == 'posix' and 'DISPLAY' not in os.environ:
+ if (os.name == 'posix' and 'DISPLAY' not in os.environ
+ and not (sys.platform == 'darwin'
+ and matplotlib.get_backend() == 'MacOSX'
+ )):
matplotlib.use('Agg')
except ImportError:
from .._utils.testing import skip_if_running_nose
|
{"golden_diff": "diff --git a/nilearn/plotting/__init__.py b/nilearn/plotting/__init__.py\n--- a/nilearn/plotting/__init__.py\n+++ b/nilearn/plotting/__init__.py\n@@ -11,8 +11,12 @@\n # We are doing local imports here to avoid poluting our namespace\n import matplotlib\n import os\n+ import sys\n # Set the backend to a non-interactive one for unices without X\n- if os.name == 'posix' and 'DISPLAY' not in os.environ:\n+ if (os.name == 'posix' and 'DISPLAY' not in os.environ\n+ and not (sys.platform == 'darwin'\n+ and matplotlib.get_backend() == 'MacOSX'\n+ )):\n matplotlib.use('Agg')\n except ImportError:\n from .._utils.testing import skip_if_running_nose\n", "issue": "Plots won't show up\nHi everyone,\n\nusing nilearn on OSX El Capitan, when executing the example scripts like plot_demo_glass_brain.py no plots will show up. PiP and iPython linked to the same folders, matplotlib alone does show plots. All dependencies are up-to-date and installed. Don't really know how to fix this.\n\n", "before_files": [{"content": "\"\"\"\nPlotting code for nilearn\n\"\"\"\n# Authors: Chris Filo Gorgolewski, Gael Varoquaux\n\n###############################################################################\n# Make sure that we don't get DISPLAY problems when running without X on\n# unices\ndef _set_mpl_backend():\n try:\n # We are doing local imports here to avoid poluting our namespace\n import matplotlib\n import os\n # Set the backend to a non-interactive one for unices without X\n if os.name == 'posix' and 'DISPLAY' not in os.environ:\n matplotlib.use('Agg')\n except ImportError:\n from .._utils.testing import skip_if_running_nose\n # No need to fail when running tests\n skip_if_running_nose('matplotlib not installed')\n raise\n else:\n from ..version import (_import_module_with_version_check,\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n # When matplotlib was successfully imported we need to check\n # that the version is greater that the minimum required one\n _import_module_with_version_check('matplotlib',\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n\n_set_mpl_backend()\n\n###############################################################################\n\nfrom . 
import cm\nfrom .img_plotting import plot_img, plot_anat, plot_epi, \\\n plot_roi, plot_stat_map, plot_glass_brain, plot_connectome, \\\n plot_prob_atlas, show\nfrom .find_cuts import find_xyz_cut_coords, find_cut_slices\n\n__all__ = ['cm', 'plot_img', 'plot_anat', 'plot_epi',\n 'plot_roi', 'plot_stat_map', 'plot_glass_brain',\n 'plot_connectome', 'plot_prob_atlas',\n 'find_xyz_cut_coords', 'find_cut_slices',\n 'show']\n", "path": "nilearn/plotting/__init__.py"}], "after_files": [{"content": "\"\"\"\nPlotting code for nilearn\n\"\"\"\n# Authors: Chris Filo Gorgolewski, Gael Varoquaux\n\n###############################################################################\n# Make sure that we don't get DISPLAY problems when running without X on\n# unices\ndef _set_mpl_backend():\n try:\n # We are doing local imports here to avoid poluting our namespace\n import matplotlib\n import os\n import sys\n # Set the backend to a non-interactive one for unices without X\n if (os.name == 'posix' and 'DISPLAY' not in os.environ\n and not (sys.platform == 'darwin'\n and matplotlib.get_backend() == 'MacOSX'\n )):\n matplotlib.use('Agg')\n except ImportError:\n from .._utils.testing import skip_if_running_nose\n # No need to fail when running tests\n skip_if_running_nose('matplotlib not installed')\n raise\n else:\n from ..version import (_import_module_with_version_check,\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n # When matplotlib was successfully imported we need to check\n # that the version is greater that the minimum required one\n _import_module_with_version_check('matplotlib',\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n\n_set_mpl_backend()\n\n###############################################################################\n\nfrom . import cm\nfrom .img_plotting import plot_img, plot_anat, plot_epi, \\\n plot_roi, plot_stat_map, plot_glass_brain, plot_connectome, \\\n plot_prob_atlas, show\nfrom .find_cuts import find_xyz_cut_coords, find_cut_slices\n\n__all__ = ['cm', 'plot_img', 'plot_anat', 'plot_epi',\n 'plot_roi', 'plot_stat_map', 'plot_glass_brain',\n 'plot_connectome', 'plot_prob_atlas',\n 'find_xyz_cut_coords', 'find_cut_slices',\n 'show']\n", "path": "nilearn/plotting/__init__.py"}]}
| 796 | 199 |
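A small diagnostic that relies only on public matplotlib API, not on nilearn internals: it prints which backend is active before and after importing `nilearn.plotting`, making the silent switch to the headless `Agg` backend (the likely cause of the "no plots will show up" symptom) visible.

```python
# Print the active matplotlib backend around the nilearn.plotting import; on a
# POSIX setup without DISPLAY set, the pre-patch code switches it to 'Agg'.
import matplotlib

print("backend before nilearn import:", matplotlib.get_backend())

import nilearn.plotting  # noqa: F401  (import runs _set_mpl_backend)

print("backend after nilearn import: ", matplotlib.get_backend())
# With 'Agg' active, show() cannot open a window, so figures never appear.
```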
gh_patches_debug_41715
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-4887
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Prefer wheels during plugin install
Pants has sprouted a dependency on a package that is most easily installed via a `whl` on pypi (`openssl` for `requests[security]`).
But currently the plugin installation infrastructure does not enable usage of `WheelPackage` due to a defensive avoidance of assuming that `wheel` is installed in an environment that pants is being loaded from:
https://github.com/pantsbuild/pants/blob/e0d5108ff75a41421a11321c73c817332f0a1c86/src/python/pants/init/plugin_resolver.py#L79-L82
As far as @kwlzn or I can tell, `wheel` should always be on the PYTHONPATH, as pants itself depends on it: it's not clear that it actually needs to be "installed" on the PATH as well.
cc @jsirois for any context he might have on that comment.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/init/plugin_resolver.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
6 unicode_literals, with_statement)
7
8 import hashlib
9 import logging
10 import os
11
12 from pex import resolver
13 from pex.base import requirement_is_exact
14 from pex.package import EggPackage, SourcePackage
15 from pkg_resources import working_set as global_working_set
16 from pkg_resources import Requirement
17
18 from pants.option.global_options import GlobalOptionsRegistrar
19 from pants.python.python_repos import PythonRepos
20 from pants.subsystem.subsystem import Subsystem
21 from pants.util.dirutil import safe_open
22 from pants.util.memo import memoized_property
23 from pants.version import PANTS_SEMVER
24
25
26 logger = logging.getLogger(__name__)
27
28
29 class PluginResolver(object):
30 def __init__(self, options_bootstrapper):
31 self._options_bootstrapper = options_bootstrapper
32
33 bootstrap_options = self._options_bootstrapper.get_bootstrap_options().for_global_scope()
34 self._plugin_requirements = bootstrap_options.plugins
35 self._plugin_cache_dir = bootstrap_options.plugin_cache_dir
36
37 def resolve(self, working_set=None):
38 """Resolves any configured plugins and adds them to the global working set.
39
40 :param working_set: The working set to add the resolved plugins to instead of the global
41 working set (for testing).
42 :type: :class:`pkg_resources.WorkingSet`
43 """
44 working_set = working_set or global_working_set
45 if self._plugin_requirements:
46 for plugin_location in self._resolve_plugin_locations():
47 working_set.add_entry(plugin_location)
48 return working_set
49
50 def _resolve_plugin_locations(self):
51 # We jump through some hoops here to avoid a live resolve if possible for purposes of speed.
52 # Even with a local resolve cache fully up to date, running a resolve to activate a plugin
53 # takes ~250ms whereas loading from a pre-cached list takes ~50ms.
54 if all(requirement_is_exact(Requirement.parse(req)) for req in self._plugin_requirements):
55 return self._resolve_exact_plugin_locations()
56 else:
57 return (plugin.location for plugin in self._resolve_plugins())
58
59 def _resolve_exact_plugin_locations(self):
60 hasher = hashlib.sha1()
61 for req in sorted(self._plugin_requirements):
62 hasher.update(req)
63 resolve_hash = hasher.hexdigest()
64 resolved_plugins_list = os.path.join(self.plugin_cache_dir,
65 'plugins-{}.txt'.format(resolve_hash))
66
67 if not os.path.exists(resolved_plugins_list):
68 tmp_plugins_list = resolved_plugins_list + '~'
69 with safe_open(tmp_plugins_list, 'w') as fp:
70 for plugin in self._resolve_plugins():
71 fp.write(plugin.location)
72 fp.write('\n')
73 os.rename(tmp_plugins_list, resolved_plugins_list)
74 with open(resolved_plugins_list) as fp:
75 for plugin_location in fp:
76 yield plugin_location.strip()
77
78 def _resolve_plugins(self):
79 # When bootstrapping plugins without the full pants python backend machinery in-play, we are not
80 # guaranteed a properly initialized interpreter with wheel support so we enforce eggs only for
81 # bdists with this custom precedence.
82 precedence = (EggPackage, SourcePackage)
83 logger.info('Resolving new plugins...:\n {}'.format('\n '.join(self._plugin_requirements)))
84 return resolver.resolve(self._plugin_requirements,
85 fetchers=self._python_repos.get_fetchers(),
86 context=self._python_repos.get_network_context(),
87 precedence=precedence,
88 cache=self.plugin_cache_dir,
89 cache_ttl=10 * 365 * 24 * 60 * 60, # Effectively never expire.
90 allow_prereleases=PANTS_SEMVER.is_prerelease)
91
92 @memoized_property
93 def plugin_cache_dir(self):
94 """The path of the directory pants plugins bdists are cached in."""
95 return self._plugin_cache_dir
96
97 @memoized_property
98 def _python_repos(self):
99 return self._create_global_subsystem(PythonRepos)
100
101 def _create_global_subsystem(self, subsystem_type):
102 options_scope = subsystem_type.options_scope
103 return subsystem_type(options_scope, self._options.for_scope(options_scope))
104
105 @memoized_property
106 def _options(self):
107 # NB: The PluginResolver runs very early in the pants startup sequence before the standard
108 # Subsystem facility is wired up. As a result PluginResolver is not itself a Subsystem with
109 # PythonRepos as a dependency. Instead it does the minimum possible work to hand-roll
110 # bootstrapping of the Subsystem it needs.
111 subsystems = Subsystem.closure([PythonRepos])
112 known_scope_infos = [subsystem.get_scope_info() for subsystem in subsystems]
113 options = self._options_bootstrapper.get_full_options(known_scope_infos)
114
115 # Ignore command line flags since we'd blow up on any we don't understand (most of them).
116 # If someone wants to bootstrap plugins in a one-off custom way they'll need to use env vars
117 # or a --pants-config-files pointing to a custom pants.ini snippet.
118 defaulted_only_options = options.drop_flag_values()
119
120 GlobalOptionsRegistrar.register_options_on_scope(defaulted_only_options)
121 for subsystem in subsystems:
122 subsystem.register_options_on_scope(defaulted_only_options)
123 return defaulted_only_options
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/init/plugin_resolver.py b/src/python/pants/init/plugin_resolver.py
--- a/src/python/pants/init/plugin_resolver.py
+++ b/src/python/pants/init/plugin_resolver.py
@@ -8,17 +8,18 @@
import hashlib
import logging
import os
+import site
from pex import resolver
from pex.base import requirement_is_exact
-from pex.package import EggPackage, SourcePackage
from pkg_resources import working_set as global_working_set
from pkg_resources import Requirement
+from wheel.install import WheelFile
from pants.option.global_options import GlobalOptionsRegistrar
from pants.python.python_repos import PythonRepos
from pants.subsystem.subsystem import Subsystem
-from pants.util.dirutil import safe_open
+from pants.util.dirutil import safe_mkdir, safe_open
from pants.util.memo import memoized_property
from pants.version import PANTS_SEMVER
@@ -27,6 +28,26 @@
class PluginResolver(object):
+ @staticmethod
+ def _is_wheel(path):
+ return os.path.isfile(path) and path.endswith('.whl')
+
+ @staticmethod
+ def _activate_wheel(wheel_path):
+ install_dir = '{}-install'.format(wheel_path)
+ safe_mkdir(install_dir, clean=True)
+ WheelFile(wheel_path).install(force=True,
+ overrides={
+ 'purelib': install_dir,
+ 'headers': os.path.join(install_dir, 'headers'),
+ 'scripts': os.path.join(install_dir, 'bin'),
+ 'platlib': install_dir,
+ 'data': install_dir
+ })
+ # Activate any .pth files installed above.
+ site.addsitedir(install_dir)
+ return install_dir
+
def __init__(self, options_bootstrapper):
self._options_bootstrapper = options_bootstrapper
@@ -44,6 +65,8 @@
working_set = working_set or global_working_set
if self._plugin_requirements:
for plugin_location in self._resolve_plugin_locations():
+ if self._is_wheel(plugin_location):
+ plugin_location = self._activate_wheel(plugin_location)
working_set.add_entry(plugin_location)
return working_set
@@ -76,15 +99,10 @@
yield plugin_location.strip()
def _resolve_plugins(self):
- # When bootstrapping plugins without the full pants python backend machinery in-play, we are not
- # guaranteed a properly initialized interpreter with wheel support so we enforce eggs only for
- # bdists with this custom precedence.
- precedence = (EggPackage, SourcePackage)
logger.info('Resolving new plugins...:\n {}'.format('\n '.join(self._plugin_requirements)))
return resolver.resolve(self._plugin_requirements,
fetchers=self._python_repos.get_fetchers(),
context=self._python_repos.get_network_context(),
- precedence=precedence,
cache=self.plugin_cache_dir,
cache_ttl=10 * 365 * 24 * 60 * 60, # Effectively never expire.
allow_prereleases=PANTS_SEMVER.is_prerelease)
|
{"golden_diff": "diff --git a/src/python/pants/init/plugin_resolver.py b/src/python/pants/init/plugin_resolver.py\n--- a/src/python/pants/init/plugin_resolver.py\n+++ b/src/python/pants/init/plugin_resolver.py\n@@ -8,17 +8,18 @@\n import hashlib\n import logging\n import os\n+import site\n \n from pex import resolver\n from pex.base import requirement_is_exact\n-from pex.package import EggPackage, SourcePackage\n from pkg_resources import working_set as global_working_set\n from pkg_resources import Requirement\n+from wheel.install import WheelFile\n \n from pants.option.global_options import GlobalOptionsRegistrar\n from pants.python.python_repos import PythonRepos\n from pants.subsystem.subsystem import Subsystem\n-from pants.util.dirutil import safe_open\n+from pants.util.dirutil import safe_mkdir, safe_open\n from pants.util.memo import memoized_property\n from pants.version import PANTS_SEMVER\n \n@@ -27,6 +28,26 @@\n \n \n class PluginResolver(object):\n+ @staticmethod\n+ def _is_wheel(path):\n+ return os.path.isfile(path) and path.endswith('.whl')\n+\n+ @staticmethod\n+ def _activate_wheel(wheel_path):\n+ install_dir = '{}-install'.format(wheel_path)\n+ safe_mkdir(install_dir, clean=True)\n+ WheelFile(wheel_path).install(force=True,\n+ overrides={\n+ 'purelib': install_dir,\n+ 'headers': os.path.join(install_dir, 'headers'),\n+ 'scripts': os.path.join(install_dir, 'bin'),\n+ 'platlib': install_dir,\n+ 'data': install_dir\n+ })\n+ # Activate any .pth files installed above.\n+ site.addsitedir(install_dir)\n+ return install_dir\n+\n def __init__(self, options_bootstrapper):\n self._options_bootstrapper = options_bootstrapper\n \n@@ -44,6 +65,8 @@\n working_set = working_set or global_working_set\n if self._plugin_requirements:\n for plugin_location in self._resolve_plugin_locations():\n+ if self._is_wheel(plugin_location):\n+ plugin_location = self._activate_wheel(plugin_location)\n working_set.add_entry(plugin_location)\n return working_set\n \n@@ -76,15 +99,10 @@\n yield plugin_location.strip()\n \n def _resolve_plugins(self):\n- # When bootstrapping plugins without the full pants python backend machinery in-play, we are not\n- # guaranteed a properly initialized interpreter with wheel support so we enforce eggs only for\n- # bdists with this custom precedence.\n- precedence = (EggPackage, SourcePackage)\n logger.info('Resolving new plugins...:\\n {}'.format('\\n '.join(self._plugin_requirements)))\n return resolver.resolve(self._plugin_requirements,\n fetchers=self._python_repos.get_fetchers(),\n context=self._python_repos.get_network_context(),\n- precedence=precedence,\n cache=self.plugin_cache_dir,\n cache_ttl=10 * 365 * 24 * 60 * 60, # Effectively never expire.\n allow_prereleases=PANTS_SEMVER.is_prerelease)\n", "issue": "Prefer wheels during plugin install\nPants has sprouted a dependency on a package that is most easily installed via a `whl` on pypi (`openssl` for `requests[security]`).\r\n\r\nBut currently the plugin installation infrastructure does not enable usage of `WheelPackage` due to a defensive avoidance of assuming that `wheel` is installed in an environment that pants is being loaded from:\r\nhttps://github.com/pantsbuild/pants/blob/e0d5108ff75a41421a11321c73c817332f0a1c86/src/python/pants/init/plugin_resolver.py#L79-L82\r\n\r\nAs far as @kwlzn or I can tell, `wheel` should always be on the PYTHONPATH, as pants itself depends on it: it's not clear that it actually needs to be \"installed\" on the PATH as well.\r\n\r\ncc @jsirois for any context he might have on that 
comment.\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_statement)\n\nimport hashlib\nimport logging\nimport os\n\nfrom pex import resolver\nfrom pex.base import requirement_is_exact\nfrom pex.package import EggPackage, SourcePackage\nfrom pkg_resources import working_set as global_working_set\nfrom pkg_resources import Requirement\n\nfrom pants.option.global_options import GlobalOptionsRegistrar\nfrom pants.python.python_repos import PythonRepos\nfrom pants.subsystem.subsystem import Subsystem\nfrom pants.util.dirutil import safe_open\nfrom pants.util.memo import memoized_property\nfrom pants.version import PANTS_SEMVER\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PluginResolver(object):\n def __init__(self, options_bootstrapper):\n self._options_bootstrapper = options_bootstrapper\n\n bootstrap_options = self._options_bootstrapper.get_bootstrap_options().for_global_scope()\n self._plugin_requirements = bootstrap_options.plugins\n self._plugin_cache_dir = bootstrap_options.plugin_cache_dir\n\n def resolve(self, working_set=None):\n \"\"\"Resolves any configured plugins and adds them to the global working set.\n\n :param working_set: The working set to add the resolved plugins to instead of the global\n working set (for testing).\n :type: :class:`pkg_resources.WorkingSet`\n \"\"\"\n working_set = working_set or global_working_set\n if self._plugin_requirements:\n for plugin_location in self._resolve_plugin_locations():\n working_set.add_entry(plugin_location)\n return working_set\n\n def _resolve_plugin_locations(self):\n # We jump through some hoops here to avoid a live resolve if possible for purposes of speed.\n # Even with a local resolve cache fully up to date, running a resolve to activate a plugin\n # takes ~250ms whereas loading from a pre-cached list takes ~50ms.\n if all(requirement_is_exact(Requirement.parse(req)) for req in self._plugin_requirements):\n return self._resolve_exact_plugin_locations()\n else:\n return (plugin.location for plugin in self._resolve_plugins())\n\n def _resolve_exact_plugin_locations(self):\n hasher = hashlib.sha1()\n for req in sorted(self._plugin_requirements):\n hasher.update(req)\n resolve_hash = hasher.hexdigest()\n resolved_plugins_list = os.path.join(self.plugin_cache_dir,\n 'plugins-{}.txt'.format(resolve_hash))\n\n if not os.path.exists(resolved_plugins_list):\n tmp_plugins_list = resolved_plugins_list + '~'\n with safe_open(tmp_plugins_list, 'w') as fp:\n for plugin in self._resolve_plugins():\n fp.write(plugin.location)\n fp.write('\\n')\n os.rename(tmp_plugins_list, resolved_plugins_list)\n with open(resolved_plugins_list) as fp:\n for plugin_location in fp:\n yield plugin_location.strip()\n\n def _resolve_plugins(self):\n # When bootstrapping plugins without the full pants python backend machinery in-play, we are not\n # guaranteed a properly initialized interpreter with wheel support so we enforce eggs only for\n # bdists with this custom precedence.\n precedence = (EggPackage, SourcePackage)\n logger.info('Resolving new plugins...:\\n {}'.format('\\n '.join(self._plugin_requirements)))\n return resolver.resolve(self._plugin_requirements,\n fetchers=self._python_repos.get_fetchers(),\n context=self._python_repos.get_network_context(),\n precedence=precedence,\n 
cache=self.plugin_cache_dir,\n cache_ttl=10 * 365 * 24 * 60 * 60, # Effectively never expire.\n allow_prereleases=PANTS_SEMVER.is_prerelease)\n\n @memoized_property\n def plugin_cache_dir(self):\n \"\"\"The path of the directory pants plugins bdists are cached in.\"\"\"\n return self._plugin_cache_dir\n\n @memoized_property\n def _python_repos(self):\n return self._create_global_subsystem(PythonRepos)\n\n def _create_global_subsystem(self, subsystem_type):\n options_scope = subsystem_type.options_scope\n return subsystem_type(options_scope, self._options.for_scope(options_scope))\n\n @memoized_property\n def _options(self):\n # NB: The PluginResolver runs very early in the pants startup sequence before the standard\n # Subsystem facility is wired up. As a result PluginResolver is not itself a Subsystem with\n # PythonRepos as a dependency. Instead it does the minimum possible work to hand-roll\n # bootstrapping of the Subsystem it needs.\n subsystems = Subsystem.closure([PythonRepos])\n known_scope_infos = [subsystem.get_scope_info() for subsystem in subsystems]\n options = self._options_bootstrapper.get_full_options(known_scope_infos)\n\n # Ignore command line flags since we'd blow up on any we don't understand (most of them).\n # If someone wants to bootstrap plugins in a one-off custom way they'll need to use env vars\n # or a --pants-config-files pointing to a custom pants.ini snippet.\n defaulted_only_options = options.drop_flag_values()\n\n GlobalOptionsRegistrar.register_options_on_scope(defaulted_only_options)\n for subsystem in subsystems:\n subsystem.register_options_on_scope(defaulted_only_options)\n return defaulted_only_options\n", "path": "src/python/pants/init/plugin_resolver.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_statement)\n\nimport hashlib\nimport logging\nimport os\nimport site\n\nfrom pex import resolver\nfrom pex.base import requirement_is_exact\nfrom pkg_resources import working_set as global_working_set\nfrom pkg_resources import Requirement\nfrom wheel.install import WheelFile\n\nfrom pants.option.global_options import GlobalOptionsRegistrar\nfrom pants.python.python_repos import PythonRepos\nfrom pants.subsystem.subsystem import Subsystem\nfrom pants.util.dirutil import safe_mkdir, safe_open\nfrom pants.util.memo import memoized_property\nfrom pants.version import PANTS_SEMVER\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PluginResolver(object):\n @staticmethod\n def _is_wheel(path):\n return os.path.isfile(path) and path.endswith('.whl')\n\n @staticmethod\n def _activate_wheel(wheel_path):\n install_dir = '{}-install'.format(wheel_path)\n safe_mkdir(install_dir, clean=True)\n WheelFile(wheel_path).install(force=True,\n overrides={\n 'purelib': install_dir,\n 'headers': os.path.join(install_dir, 'headers'),\n 'scripts': os.path.join(install_dir, 'bin'),\n 'platlib': install_dir,\n 'data': install_dir\n })\n # Activate any .pth files installed above.\n site.addsitedir(install_dir)\n return install_dir\n\n def __init__(self, options_bootstrapper):\n self._options_bootstrapper = options_bootstrapper\n\n bootstrap_options = self._options_bootstrapper.get_bootstrap_options().for_global_scope()\n self._plugin_requirements = bootstrap_options.plugins\n self._plugin_cache_dir = 
bootstrap_options.plugin_cache_dir\n\n def resolve(self, working_set=None):\n \"\"\"Resolves any configured plugins and adds them to the global working set.\n\n :param working_set: The working set to add the resolved plugins to instead of the global\n working set (for testing).\n :type: :class:`pkg_resources.WorkingSet`\n \"\"\"\n working_set = working_set or global_working_set\n if self._plugin_requirements:\n for plugin_location in self._resolve_plugin_locations():\n if self._is_wheel(plugin_location):\n plugin_location = self._activate_wheel(plugin_location)\n working_set.add_entry(plugin_location)\n return working_set\n\n def _resolve_plugin_locations(self):\n # We jump through some hoops here to avoid a live resolve if possible for purposes of speed.\n # Even with a local resolve cache fully up to date, running a resolve to activate a plugin\n # takes ~250ms whereas loading from a pre-cached list takes ~50ms.\n if all(requirement_is_exact(Requirement.parse(req)) for req in self._plugin_requirements):\n return self._resolve_exact_plugin_locations()\n else:\n return (plugin.location for plugin in self._resolve_plugins())\n\n def _resolve_exact_plugin_locations(self):\n hasher = hashlib.sha1()\n for req in sorted(self._plugin_requirements):\n hasher.update(req)\n resolve_hash = hasher.hexdigest()\n resolved_plugins_list = os.path.join(self.plugin_cache_dir,\n 'plugins-{}.txt'.format(resolve_hash))\n\n if not os.path.exists(resolved_plugins_list):\n tmp_plugins_list = resolved_plugins_list + '~'\n with safe_open(tmp_plugins_list, 'w') as fp:\n for plugin in self._resolve_plugins():\n fp.write(plugin.location)\n fp.write('\\n')\n os.rename(tmp_plugins_list, resolved_plugins_list)\n with open(resolved_plugins_list) as fp:\n for plugin_location in fp:\n yield plugin_location.strip()\n\n def _resolve_plugins(self):\n logger.info('Resolving new plugins...:\\n {}'.format('\\n '.join(self._plugin_requirements)))\n return resolver.resolve(self._plugin_requirements,\n fetchers=self._python_repos.get_fetchers(),\n context=self._python_repos.get_network_context(),\n cache=self.plugin_cache_dir,\n cache_ttl=10 * 365 * 24 * 60 * 60, # Effectively never expire.\n allow_prereleases=PANTS_SEMVER.is_prerelease)\n\n @memoized_property\n def plugin_cache_dir(self):\n \"\"\"The path of the directory pants plugins bdists are cached in.\"\"\"\n return self._plugin_cache_dir\n\n @memoized_property\n def _python_repos(self):\n return self._create_global_subsystem(PythonRepos)\n\n def _create_global_subsystem(self, subsystem_type):\n options_scope = subsystem_type.options_scope\n return subsystem_type(options_scope, self._options.for_scope(options_scope))\n\n @memoized_property\n def _options(self):\n # NB: The PluginResolver runs very early in the pants startup sequence before the standard\n # Subsystem facility is wired up. As a result PluginResolver is not itself a Subsystem with\n # PythonRepos as a dependency. 
Instead it does the minimum possible work to hand-roll\n # bootstrapping of the Subsystem it needs.\n subsystems = Subsystem.closure([PythonRepos])\n known_scope_infos = [subsystem.get_scope_info() for subsystem in subsystems]\n options = self._options_bootstrapper.get_full_options(known_scope_infos)\n\n # Ignore command line flags since we'd blow up on any we don't understand (most of them).\n # If someone wants to bootstrap plugins in a one-off custom way they'll need to use env vars\n # or a --pants-config-files pointing to a custom pants.ini snippet.\n defaulted_only_options = options.drop_flag_values()\n\n GlobalOptionsRegistrar.register_options_on_scope(defaulted_only_options)\n for subsystem in subsystems:\n subsystem.register_options_on_scope(defaulted_only_options)\n return defaulted_only_options\n", "path": "src/python/pants/init/plugin_resolver.py"}]}
| 1,910 | 693 |
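The `_resolve_exact_plugin_locations` helper in the record above caches an exact plugin resolve under a file name derived from a SHA-1 hash of the sorted requirement strings. A minimal standalone sketch of that idea (hypothetical function name and cache directory, not Pants code; on Python 3 the requirement strings must be encoded before hashing):

```python
import hashlib
import os


def cached_plugin_list_path(requirements, cache_dir):
    """Return a stable cache-file path for an exact set of requirements."""
    hasher = hashlib.sha1()
    for req in sorted(requirements):
        hasher.update(req.encode("utf-8"))  # Python 3 hashing needs bytes
    return os.path.join(cache_dir, "plugins-{}.txt".format(hasher.hexdigest()))


print(cached_plugin_list_path(["myplugin==1.0.0", "other==2.3"], "/tmp/plugin-cache"))
```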
gh_patches_debug_5579
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-6373
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Outfile -w cannot be loaded
#### Problem Description
When the dump file gets bigger, around 100 MB, it's not loaded anymore.
#### Steps to reproduce the behavior:
Make a big outfile and try to open it with a new instance of mitmweb.
#### System Information
Mitmweb Windows 10 6.0.2
mitmweb Not loading my saved flow
So I recorded some actions with mitmweb and saved the flow.
Then I closed mitmweb, and reopened it. Then I went to open the saved flow file (which is 100 megabytes). But when I open it, the requests and responses do not appear?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/tools/web/master.py`
Content:
```
1 import errno
2 import logging
3
4 import tornado.httpserver
5 import tornado.ioloop
6
7 from mitmproxy import addons
8 from mitmproxy import flow
9 from mitmproxy import log
10 from mitmproxy import master
11 from mitmproxy import options
12 from mitmproxy import optmanager
13 from mitmproxy.addons import errorcheck
14 from mitmproxy.addons import eventstore
15 from mitmproxy.addons import intercept
16 from mitmproxy.addons import readfile
17 from mitmproxy.addons import termlog
18 from mitmproxy.addons import view
19 from mitmproxy.addons.proxyserver import Proxyserver
20 from mitmproxy.tools.web import app
21 from mitmproxy.tools.web import static_viewer
22 from mitmproxy.tools.web import webaddons
23
24 logger = logging.getLogger(__name__)
25
26
27 class WebMaster(master.Master):
28 def __init__(self, opts: options.Options, with_termlog: bool = True):
29 super().__init__(opts)
30 self.view = view.View()
31 self.view.sig_view_add.connect(self._sig_view_add)
32 self.view.sig_view_remove.connect(self._sig_view_remove)
33 self.view.sig_view_update.connect(self._sig_view_update)
34 self.view.sig_view_refresh.connect(self._sig_view_refresh)
35
36 self.events = eventstore.EventStore()
37 self.events.sig_add.connect(self._sig_events_add)
38 self.events.sig_refresh.connect(self._sig_events_refresh)
39
40 self.options.changed.connect(self._sig_options_update)
41
42 if with_termlog:
43 self.addons.add(termlog.TermLog())
44 self.addons.add(*addons.default_addons())
45 self.addons.add(
46 webaddons.WebAddon(),
47 intercept.Intercept(),
48 readfile.ReadFile(),
49 static_viewer.StaticViewer(),
50 self.view,
51 self.events,
52 errorcheck.ErrorCheck(),
53 )
54 self.app = app.Application(self, self.options.web_debug)
55 self.proxyserver: Proxyserver = self.addons.get("proxyserver")
56 self.proxyserver.servers.changed.connect(self._sig_servers_changed)
57
58 def _sig_view_add(self, flow: flow.Flow) -> None:
59 app.ClientConnection.broadcast(
60 resource="flows", cmd="add", data=app.flow_to_json(flow)
61 )
62
63 def _sig_view_update(self, flow: flow.Flow) -> None:
64 app.ClientConnection.broadcast(
65 resource="flows", cmd="update", data=app.flow_to_json(flow)
66 )
67
68 def _sig_view_remove(self, flow: flow.Flow, index: int) -> None:
69 app.ClientConnection.broadcast(resource="flows", cmd="remove", data=flow.id)
70
71 def _sig_view_refresh(self) -> None:
72 app.ClientConnection.broadcast(resource="flows", cmd="reset")
73
74 def _sig_events_add(self, entry: log.LogEntry) -> None:
75 app.ClientConnection.broadcast(
76 resource="events", cmd="add", data=app.logentry_to_json(entry)
77 )
78
79 def _sig_events_refresh(self) -> None:
80 app.ClientConnection.broadcast(resource="events", cmd="reset")
81
82 def _sig_options_update(self, updated: set[str]) -> None:
83 options_dict = optmanager.dump_dicts(self.options, updated)
84 app.ClientConnection.broadcast(
85 resource="options", cmd="update", data=options_dict
86 )
87
88 def _sig_servers_changed(self) -> None:
89 app.ClientConnection.broadcast(
90 resource="state",
91 cmd="update",
92 data={"servers": [s.to_json() for s in self.proxyserver.servers]},
93 )
94
95 async def running(self):
96 # Register tornado with the current event loop
97 tornado.ioloop.IOLoop.current()
98
99 # Add our web app.
100 http_server = tornado.httpserver.HTTPServer(self.app)
101 try:
102 http_server.listen(self.options.web_port, self.options.web_host)
103 except OSError as e:
104 message = f"Web server failed to listen on {self.options.web_host or '*'}:{self.options.web_port} with {e}"
105 if e.errno == errno.EADDRINUSE:
106 message += f"\nTry specifying a different port by using `--set web_port={self.options.web_port + 2}`."
107 raise OSError(e.errno, message, e.filename) from e
108
109 logger.info(
110 f"Web server listening at http://{self.options.web_host}:{self.options.web_port}/",
111 )
112
113 return await super().running()
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/tools/web/master.py b/mitmproxy/tools/web/master.py
--- a/mitmproxy/tools/web/master.py
+++ b/mitmproxy/tools/web/master.py
@@ -97,7 +97,9 @@
tornado.ioloop.IOLoop.current()
# Add our web app.
- http_server = tornado.httpserver.HTTPServer(self.app)
+ http_server = tornado.httpserver.HTTPServer(
+ self.app, max_buffer_size=2**32
+ ) # 4GB
try:
http_server.listen(self.options.web_port, self.options.web_host)
except OSError as e:
|
{"golden_diff": "diff --git a/mitmproxy/tools/web/master.py b/mitmproxy/tools/web/master.py\n--- a/mitmproxy/tools/web/master.py\n+++ b/mitmproxy/tools/web/master.py\n@@ -97,7 +97,9 @@\n tornado.ioloop.IOLoop.current()\n \n # Add our web app.\n- http_server = tornado.httpserver.HTTPServer(self.app)\n+ http_server = tornado.httpserver.HTTPServer(\n+ self.app, max_buffer_size=2**32\n+ ) # 4GB\n try:\n http_server.listen(self.options.web_port, self.options.web_host)\n except OSError as e:\n", "issue": "Outfile -w cannot be loaded\n#### Problem Description\nWhen the dump file is getting bigger, about 100mb it's not loaded anymore.\n\n#### Steps to reproduce the behavior:\nMake a big outfile and try to open it with an new instance of mitmweb.\n\n#### System Information\nMitmweb Windows 10 6.0.2\nmitmweb Not loading my saved flow\nSo I recorded some actions with mitmweb and saved the flow.\r\n\r\nThen I closed mitmweb, and reopened it. Then I went to open the saved flow file (which is 100 megabytes). But when I open it, the requests and responses do not appear?\r\n\n", "before_files": [{"content": "import errno\nimport logging\n\nimport tornado.httpserver\nimport tornado.ioloop\n\nfrom mitmproxy import addons\nfrom mitmproxy import flow\nfrom mitmproxy import log\nfrom mitmproxy import master\nfrom mitmproxy import options\nfrom mitmproxy import optmanager\nfrom mitmproxy.addons import errorcheck\nfrom mitmproxy.addons import eventstore\nfrom mitmproxy.addons import intercept\nfrom mitmproxy.addons import readfile\nfrom mitmproxy.addons import termlog\nfrom mitmproxy.addons import view\nfrom mitmproxy.addons.proxyserver import Proxyserver\nfrom mitmproxy.tools.web import app\nfrom mitmproxy.tools.web import static_viewer\nfrom mitmproxy.tools.web import webaddons\n\nlogger = logging.getLogger(__name__)\n\n\nclass WebMaster(master.Master):\n def __init__(self, opts: options.Options, with_termlog: bool = True):\n super().__init__(opts)\n self.view = view.View()\n self.view.sig_view_add.connect(self._sig_view_add)\n self.view.sig_view_remove.connect(self._sig_view_remove)\n self.view.sig_view_update.connect(self._sig_view_update)\n self.view.sig_view_refresh.connect(self._sig_view_refresh)\n\n self.events = eventstore.EventStore()\n self.events.sig_add.connect(self._sig_events_add)\n self.events.sig_refresh.connect(self._sig_events_refresh)\n\n self.options.changed.connect(self._sig_options_update)\n\n if with_termlog:\n self.addons.add(termlog.TermLog())\n self.addons.add(*addons.default_addons())\n self.addons.add(\n webaddons.WebAddon(),\n intercept.Intercept(),\n readfile.ReadFile(),\n static_viewer.StaticViewer(),\n self.view,\n self.events,\n errorcheck.ErrorCheck(),\n )\n self.app = app.Application(self, self.options.web_debug)\n self.proxyserver: Proxyserver = self.addons.get(\"proxyserver\")\n self.proxyserver.servers.changed.connect(self._sig_servers_changed)\n\n def _sig_view_add(self, flow: flow.Flow) -> None:\n app.ClientConnection.broadcast(\n resource=\"flows\", cmd=\"add\", data=app.flow_to_json(flow)\n )\n\n def _sig_view_update(self, flow: flow.Flow) -> None:\n app.ClientConnection.broadcast(\n resource=\"flows\", cmd=\"update\", data=app.flow_to_json(flow)\n )\n\n def _sig_view_remove(self, flow: flow.Flow, index: int) -> None:\n app.ClientConnection.broadcast(resource=\"flows\", cmd=\"remove\", data=flow.id)\n\n def _sig_view_refresh(self) -> None:\n app.ClientConnection.broadcast(resource=\"flows\", cmd=\"reset\")\n\n def _sig_events_add(self, entry: log.LogEntry) -> None:\n 
app.ClientConnection.broadcast(\n resource=\"events\", cmd=\"add\", data=app.logentry_to_json(entry)\n )\n\n def _sig_events_refresh(self) -> None:\n app.ClientConnection.broadcast(resource=\"events\", cmd=\"reset\")\n\n def _sig_options_update(self, updated: set[str]) -> None:\n options_dict = optmanager.dump_dicts(self.options, updated)\n app.ClientConnection.broadcast(\n resource=\"options\", cmd=\"update\", data=options_dict\n )\n\n def _sig_servers_changed(self) -> None:\n app.ClientConnection.broadcast(\n resource=\"state\",\n cmd=\"update\",\n data={\"servers\": [s.to_json() for s in self.proxyserver.servers]},\n )\n\n async def running(self):\n # Register tornado with the current event loop\n tornado.ioloop.IOLoop.current()\n\n # Add our web app.\n http_server = tornado.httpserver.HTTPServer(self.app)\n try:\n http_server.listen(self.options.web_port, self.options.web_host)\n except OSError as e:\n message = f\"Web server failed to listen on {self.options.web_host or '*'}:{self.options.web_port} with {e}\"\n if e.errno == errno.EADDRINUSE:\n message += f\"\\nTry specifying a different port by using `--set web_port={self.options.web_port + 2}`.\"\n raise OSError(e.errno, message, e.filename) from e\n\n logger.info(\n f\"Web server listening at http://{self.options.web_host}:{self.options.web_port}/\",\n )\n\n return await super().running()\n", "path": "mitmproxy/tools/web/master.py"}], "after_files": [{"content": "import errno\nimport logging\n\nimport tornado.httpserver\nimport tornado.ioloop\n\nfrom mitmproxy import addons\nfrom mitmproxy import flow\nfrom mitmproxy import log\nfrom mitmproxy import master\nfrom mitmproxy import options\nfrom mitmproxy import optmanager\nfrom mitmproxy.addons import errorcheck\nfrom mitmproxy.addons import eventstore\nfrom mitmproxy.addons import intercept\nfrom mitmproxy.addons import readfile\nfrom mitmproxy.addons import termlog\nfrom mitmproxy.addons import view\nfrom mitmproxy.addons.proxyserver import Proxyserver\nfrom mitmproxy.tools.web import app\nfrom mitmproxy.tools.web import static_viewer\nfrom mitmproxy.tools.web import webaddons\n\nlogger = logging.getLogger(__name__)\n\n\nclass WebMaster(master.Master):\n def __init__(self, opts: options.Options, with_termlog: bool = True):\n super().__init__(opts)\n self.view = view.View()\n self.view.sig_view_add.connect(self._sig_view_add)\n self.view.sig_view_remove.connect(self._sig_view_remove)\n self.view.sig_view_update.connect(self._sig_view_update)\n self.view.sig_view_refresh.connect(self._sig_view_refresh)\n\n self.events = eventstore.EventStore()\n self.events.sig_add.connect(self._sig_events_add)\n self.events.sig_refresh.connect(self._sig_events_refresh)\n\n self.options.changed.connect(self._sig_options_update)\n\n if with_termlog:\n self.addons.add(termlog.TermLog())\n self.addons.add(*addons.default_addons())\n self.addons.add(\n webaddons.WebAddon(),\n intercept.Intercept(),\n readfile.ReadFile(),\n static_viewer.StaticViewer(),\n self.view,\n self.events,\n errorcheck.ErrorCheck(),\n )\n self.app = app.Application(self, self.options.web_debug)\n self.proxyserver: Proxyserver = self.addons.get(\"proxyserver\")\n self.proxyserver.servers.changed.connect(self._sig_servers_changed)\n\n def _sig_view_add(self, flow: flow.Flow) -> None:\n app.ClientConnection.broadcast(\n resource=\"flows\", cmd=\"add\", data=app.flow_to_json(flow)\n )\n\n def _sig_view_update(self, flow: flow.Flow) -> None:\n app.ClientConnection.broadcast(\n resource=\"flows\", cmd=\"update\", 
data=app.flow_to_json(flow)\n )\n\n def _sig_view_remove(self, flow: flow.Flow, index: int) -> None:\n app.ClientConnection.broadcast(resource=\"flows\", cmd=\"remove\", data=flow.id)\n\n def _sig_view_refresh(self) -> None:\n app.ClientConnection.broadcast(resource=\"flows\", cmd=\"reset\")\n\n def _sig_events_add(self, entry: log.LogEntry) -> None:\n app.ClientConnection.broadcast(\n resource=\"events\", cmd=\"add\", data=app.logentry_to_json(entry)\n )\n\n def _sig_events_refresh(self) -> None:\n app.ClientConnection.broadcast(resource=\"events\", cmd=\"reset\")\n\n def _sig_options_update(self, updated: set[str]) -> None:\n options_dict = optmanager.dump_dicts(self.options, updated)\n app.ClientConnection.broadcast(\n resource=\"options\", cmd=\"update\", data=options_dict\n )\n\n def _sig_servers_changed(self) -> None:\n app.ClientConnection.broadcast(\n resource=\"state\",\n cmd=\"update\",\n data={\"servers\": [s.to_json() for s in self.proxyserver.servers]},\n )\n\n async def running(self):\n # Register tornado with the current event loop\n tornado.ioloop.IOLoop.current()\n\n # Add our web app.\n http_server = tornado.httpserver.HTTPServer(\n self.app, max_buffer_size=2**32\n ) # 4GB\n try:\n http_server.listen(self.options.web_port, self.options.web_host)\n except OSError as e:\n message = f\"Web server failed to listen on {self.options.web_host or '*'}:{self.options.web_port} with {e}\"\n if e.errno == errno.EADDRINUSE:\n message += f\"\\nTry specifying a different port by using `--set web_port={self.options.web_port + 2}`.\"\n raise OSError(e.errno, message, e.filename) from e\n\n logger.info(\n f\"Web server listening at http://{self.options.web_host}:{self.options.web_port}/\",\n )\n\n return await super().running()\n", "path": "mitmproxy/tools/web/master.py"}]}
| 1,569 | 140 |
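The golden diff above relies on the `max_buffer_size` argument of Tornado's `HTTPServer`, which caps how large a request body the server accepts; Tornado's default is roughly 100 MB, which lines up with the ~100 MB flow files that failed to load. A minimal sketch outside mitmproxy (hypothetical upload handler and port) showing the same knob:

```python
import tornado.httpserver
import tornado.ioloop
import tornado.web


class UploadHandler(tornado.web.RequestHandler):
    def post(self):
        # A large flow file posted here would previously have been rejected.
        self.write("received {} bytes".format(len(self.request.body)))


def main():
    app = tornado.web.Application([(r"/upload", UploadHandler)])
    # Allow request bodies up to 4 GB instead of Tornado's ~100 MB default.
    server = tornado.httpserver.HTTPServer(app, max_buffer_size=2**32)
    server.listen(8080)
    tornado.ioloop.IOLoop.current().start()


if __name__ == "__main__":
    main()
```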
gh_patches_debug_54195
|
rasdani/github-patches
|
git_diff
|
vyperlang__vyper-1275
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
State leakage across test runs when using parallelization
### What is wrong.
The tests at `tests/examples/safe_remote_purchase/test_safe_remote_purchase.py` fail when run using `pytest-xdist` to parallelize test runs.
```
def test_abort(w3, assert_tx_failed, check_balance, get_contract, contract_code):
a0, a1, a2 = w3.eth.accounts[:3]
c = get_contract(contract_code, value=2)
# Only sender can trigger refund
assert_tx_failed(lambda: c.abort(transact={'from': a2}))
# Refund works correctly
c.abort(transact={'from': a0, 'gasPrice': 0})
> assert check_balance() == (INIT_BAL_a0 - w3.toWei(2, 'ether'), INIT_BAL_a1)
E assert (100000000000...0000000000000) == (9999980000000...0000000000000)
E At index 0 diff: 1000000000000000000000000 != 999998000000000000000000
E Use -v to get the full diff
tests/examples/safe_remote_purchase/test_safe_remote_purchase.py:62: AssertionError
```
replicate by installing `pytest-xdist` and running with
```
pytest tests/examples/safe_remote_purchase/test_safe_remote_purchase.py -n 2
```
It's likely this isn't deterministic and you may need to run the full suite.
### How can it be fixed.
Figure out where statefulness is leaking across test runs and fix it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from setuptools import setup, find_packages
4
5
6 test_deps = [
7 'pytest',
8 'pytest-cov',
9 'py-evm==0.2.0a34',
10 'eth-tester==0.1.0b33',
11 'web3==4.8.2',
12 ]
13
14
15 extras = {
16 'test': test_deps
17 }
18
19
20 setup(
21 name='vyper',
22 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
23 version='0.1.0-beta.8',
24 description='Vyper Programming Language for Ethereum',
25 long_description_markdown_filename='README.md',
26 author='Vitalik Buterin',
27 author_email='',
28 url='https://github.com/ethereum/vyper',
29 license="MIT",
30 keywords='ethereum',
31 include_package_data=True,
32 packages=find_packages(exclude=('tests', 'docs')),
33 python_requires='>=3.6',
34 py_modules=['vyper'],
35 install_requires=[
36 'pycryptodome>=3.5.1,<4',
37 ],
38 setup_requires=[
39 'pytest-runner',
40 'setuptools-markdown'
41 ],
42 tests_require=test_deps,
43 extras_require=extras,
44 scripts=[
45 'bin/vyper',
46 'bin/vyper-serve',
47 'bin/vyper-lll'
48 ],
49 classifiers=[
50 'Intended Audience :: Developers',
51 'License :: OSI Approved :: MIT License',
52 'Programming Language :: Python :: 3.6',
53 ]
54 )
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,11 +4,12 @@
test_deps = [
- 'pytest',
- 'pytest-cov',
- 'py-evm==0.2.0a34',
- 'eth-tester==0.1.0b33',
- 'web3==4.8.2',
+ 'pytest>=3.6',
+ 'pytest-cov==2.4.0',
+ 'pytest-xdist==1.18.1',
+ 'py-evm==0.2.0a39',
+ 'eth-tester==0.1.0b37',
+ 'web3==5.0.0a6'
]
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,11 +4,12 @@\n \n \n test_deps = [\n- 'pytest',\n- 'pytest-cov',\n- 'py-evm==0.2.0a34',\n- 'eth-tester==0.1.0b33',\n- 'web3==4.8.2',\n+ 'pytest>=3.6',\n+ 'pytest-cov==2.4.0',\n+ 'pytest-xdist==1.18.1',\n+ 'py-evm==0.2.0a39',\n+ 'eth-tester==0.1.0b37',\n+ 'web3==5.0.0a6'\n ]\n", "issue": "State leakage across test runs when using parrellization\n### What is wrong.\r\n\r\nThe tests at `tests/examples/safe_remote_purchase/test_safe_remote_purchase.py` fail when run using `pytest-xdist` to parallelize test runs.\r\n\r\n```\r\n def test_abort(w3, assert_tx_failed, check_balance, get_contract, contract_code):\r\n a0, a1, a2 = w3.eth.accounts[:3]\r\n c = get_contract(contract_code, value=2)\r\n # Only sender can trigger refund\r\n assert_tx_failed(lambda: c.abort(transact={'from': a2}))\r\n # Refund works correctly\r\n c.abort(transact={'from': a0, 'gasPrice': 0})\r\n> assert check_balance() == (INIT_BAL_a0 - w3.toWei(2, 'ether'), INIT_BAL_a1)\r\nE assert (100000000000...0000000000000) == (9999980000000...0000000000000)\r\nE At index 0 diff: 1000000000000000000000000 != 999998000000000000000000\r\nE Use -v to get the full diff\r\n\r\ntests/examples/safe_remote_purchase/test_safe_remote_purchase.py:62: AssertionError\r\n```\r\n\r\nreplicate by installing `pytest-xdist` and running with\r\n\r\n```\r\npytest tests/examples/safe_remote_purchase/test_safe_remote_purchase.py -n 2\r\n```\r\n\r\nIt's likely this isn't deterministic and you may need to run the full suite.\r\n\r\n### How can it be fixed.\r\n\r\nFigure out where statefulness is leaking across test runs and fix it.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\ntest_deps = [\n 'pytest',\n 'pytest-cov',\n 'py-evm==0.2.0a34',\n 'eth-tester==0.1.0b33',\n 'web3==4.8.2',\n]\n\n\nextras = {\n 'test': test_deps\n}\n\n\nsetup(\n name='vyper',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='0.1.0-beta.8',\n description='Vyper Programming Language for Ethereum',\n long_description_markdown_filename='README.md',\n author='Vitalik Buterin',\n author_email='',\n url='https://github.com/ethereum/vyper',\n license=\"MIT\",\n keywords='ethereum',\n include_package_data=True,\n packages=find_packages(exclude=('tests', 'docs')),\n python_requires='>=3.6',\n py_modules=['vyper'],\n install_requires=[\n 'pycryptodome>=3.5.1,<4',\n ],\n setup_requires=[\n 'pytest-runner',\n 'setuptools-markdown'\n ],\n tests_require=test_deps,\n extras_require=extras,\n scripts=[\n 'bin/vyper',\n 'bin/vyper-serve',\n 'bin/vyper-lll'\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.6',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\ntest_deps = [\n 'pytest>=3.6',\n 'pytest-cov==2.4.0',\n 'pytest-xdist==1.18.1',\n 'py-evm==0.2.0a39',\n 'eth-tester==0.1.0b37',\n 'web3==5.0.0a6'\n]\n\n\nextras = {\n 'test': test_deps\n}\n\n\nsetup(\n name='vyper',\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version='0.1.0-beta.8',\n description='Vyper Programming Language for Ethereum',\n long_description_markdown_filename='README.md',\n author='Vitalik Buterin',\n author_email='',\n url='https://github.com/ethereum/vyper',\n license=\"MIT\",\n keywords='ethereum',\n include_package_data=True,\n packages=find_packages(exclude=('tests', 'docs')),\n python_requires='>=3.6',\n py_modules=['vyper'],\n install_requires=[\n 'pycryptodome>=3.5.1,<4',\n ],\n setup_requires=[\n 'pytest-runner',\n 'setuptools-markdown'\n ],\n tests_require=test_deps,\n extras_require=extras,\n scripts=[\n 'bin/vyper',\n 'bin/vyper-serve',\n 'bin/vyper-lll'\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.6',\n ]\n)\n", "path": "setup.py"}]}
| 1,093 | 176 |
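The patch above pins `pytest-xdist` and newer `eth-tester`/`py-evm` releases; the underlying symptom is state shared between tests once they run on separate workers. A generic illustration of the usual remedy (hypothetical fixture and balances, not the Vyper test suite): build the backend inside a function-scoped fixture so every test starts from a clean slate:

```python
import pytest


@pytest.fixture  # function scope by default: a brand-new backend per test
def backend():
    return {"balances": {"a0": 10**24, "a1": 10**24}}


def test_abort_refund(backend):
    backend["balances"]["a0"] -= 2 * 10**18   # pay into the contract...
    backend["balances"]["a0"] += 2 * 10**18   # ...and get refunded on abort
    assert backend["balances"]["a0"] == 10**24


def test_starts_clean(backend):
    # Still passes under `pytest -n 2`: nothing leaked from the other test.
    assert backend["balances"] == {"a0": 10**24, "a1": 10**24}
```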
gh_patches_debug_217
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-3701
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
testing 4293: can't edit polls somebody else created even if I have the rights
**URL:** https://meinberlin-dev.liqd.net/dashboard/modules/umfrage-24-4/poll/
**user:** group member
**expected behaviour:** I can edit polls somebody else created if I have the right to do so
**behaviour:** cannot save, getting a red alert
**important screensize:**
**device & browser:**
 **Comment/Question:** also true for new polls whose rights have been given to me. For polls I started myself it is fine.

Screenshot?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/polls/rules.py`
Content:
```
1 import rules
2
3 from adhocracy4.modules import predicates as module_predicates
4
5 rules.set_perm(
6 'a4polls.change_poll',
7 module_predicates.is_context_initiator |
8 module_predicates.is_context_moderator
9 )
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/polls/rules.py b/meinberlin/apps/polls/rules.py
--- a/meinberlin/apps/polls/rules.py
+++ b/meinberlin/apps/polls/rules.py
@@ -4,6 +4,5 @@
rules.set_perm(
'a4polls.change_poll',
- module_predicates.is_context_initiator |
- module_predicates.is_context_moderator
+ module_predicates.is_project_admin
)
|
{"golden_diff": "diff --git a/meinberlin/apps/polls/rules.py b/meinberlin/apps/polls/rules.py\n--- a/meinberlin/apps/polls/rules.py\n+++ b/meinberlin/apps/polls/rules.py\n@@ -4,6 +4,5 @@\n \n rules.set_perm(\n 'a4polls.change_poll',\n- module_predicates.is_context_initiator |\n- module_predicates.is_context_moderator\n+ module_predicates.is_project_admin\n )\n", "issue": "testing 4293: can't edit polls somebody else created even if I have the rights\n**URL:** https://meinberlin-dev.liqd.net/dashboard/modules/umfrage-24-4/poll/\r\n**user:** group member\r\n**expected behaviour:** I can edit polls somebody else created if I have the right to do so\r\n**behaviour:** cannot save, getting an red altert\r\n**important screensize:**\r\n**device & browser:** \r\n **Comment/Question:** also true for new polls whose rights have been given to me. for polls I started myself it is fine.\r\n\r\n\r\n\r\nScreenshot?\r\n\n", "before_files": [{"content": "import rules\n\nfrom adhocracy4.modules import predicates as module_predicates\n\nrules.set_perm(\n 'a4polls.change_poll',\n module_predicates.is_context_initiator |\n module_predicates.is_context_moderator\n)\n", "path": "meinberlin/apps/polls/rules.py"}], "after_files": [{"content": "import rules\n\nfrom adhocracy4.modules import predicates as module_predicates\n\nrules.set_perm(\n 'a4polls.change_poll',\n module_predicates.is_project_admin\n)\n", "path": "meinberlin/apps/polls/rules.py"}]}
| 541 | 103 |
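The one-line fix above swaps which django-rules predicate guards `a4polls.change_poll`. For readers unfamiliar with that library, here is a small self-contained sketch (hypothetical predicates and permission name, not the meinberlin definitions) of how predicates are declared and combined with `|` before being registered:

```python
import rules


@rules.predicate
def is_creator(user, obj):
    return obj is not None and getattr(obj, "creator", None) == user


@rules.predicate
def is_project_admin(user, obj):
    return getattr(user, "is_staff", False)


# Either predicate passing grants the permission; rules.set_perm() would
# replace an existing registration rather than add a new one.
rules.add_perm("polls.change_poll_demo", is_creator | is_project_admin)
```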
gh_patches_debug_19101
|
rasdani/github-patches
|
git_diff
|
aio-libs-abandoned__aioredis-py-1075
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[2.0] Update setup.py Trove classifiers
The [classifiers](https://github.com/aio-libs/aioredis-py/blob/5a713fff3717094cca63e4a5f4b1cb7d6894a08f/setup.py#L25-L30) currently only list Python 3.6 and 3.7. This should be updated to include all versions that are tested. Or my personal preference is just to delete classifiers for minor versions, since probably no-one will remember to update them when new versions of Python are released.
It also indicates the status as 4 - Beta. That should probably change when we release the final 2.0.0.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os.path
2 import re
3
4 from setuptools import find_packages, setup
5
6
7 def read(*parts):
8 with open(os.path.join(*parts)) as f:
9 return f.read().strip()
10
11
12 def read_version():
13 regexp = re.compile(r"^__version__\W*=\W*\"([\d.abrc]+)\"")
14 init_py = os.path.join(os.path.dirname(__file__), "aioredis", "__init__.py")
15 with open(init_py) as f:
16 for line in f:
17 match = regexp.match(line)
18 if match is not None:
19 return match.group(1)
20 raise RuntimeError(f"Cannot find version in {init_py}")
21
22
23 classifiers = [
24 "License :: OSI Approved :: MIT License",
25 "Development Status :: 4 - Beta",
26 "Programming Language :: Python",
27 "Programming Language :: Python :: 3",
28 "Programming Language :: Python :: 3.6",
29 "Programming Language :: Python :: 3.7",
30 "Programming Language :: Python :: 3 :: Only",
31 "Operating System :: POSIX",
32 "Environment :: Web Environment",
33 "Intended Audience :: Developers",
34 "Topic :: Software Development",
35 "Topic :: Software Development :: Libraries",
36 "Framework :: AsyncIO",
37 ]
38
39 setup(
40 name="aioredis",
41 version=read_version(),
42 description="asyncio (PEP 3156) Redis support",
43 long_description="\n\n".join((read("README.md"), read("CHANGELOG.md"))),
44 long_description_content_type="text/markdown",
45 classifiers=classifiers,
46 platforms=["POSIX"],
47 url="https://github.com/aio-libs/aioredis",
48 license="MIT",
49 packages=find_packages(exclude=["tests"]),
50 install_requires=[
51 "async-timeout",
52 "typing-extensions",
53 ],
54 extras_require={
55 "hiredis": 'hiredis>=1.0; implementation_name=="cpython"',
56 },
57 package_data={"aioredis": ["py.typed"]},
58 python_requires=">=3.6",
59 include_package_data=True,
60 )
61
```
Path: `aioredis/__init__.py`
Content:
```
1 from aioredis.client import Redis, StrictRedis
2 from aioredis.connection import (
3 BlockingConnectionPool,
4 Connection,
5 ConnectionPool,
6 SSLConnection,
7 UnixDomainSocketConnection,
8 )
9 from aioredis.exceptions import (
10 AuthenticationError,
11 AuthenticationWrongNumberOfArgsError,
12 BusyLoadingError,
13 ChildDeadlockedError,
14 ConnectionError,
15 DataError,
16 InvalidResponse,
17 PubSubError,
18 ReadOnlyError,
19 RedisError,
20 ResponseError,
21 TimeoutError,
22 WatchError,
23 )
24 from aioredis.utils import from_url
25
26
27 def int_or_str(value):
28 try:
29 return int(value)
30 except ValueError:
31 return value
32
33
34 __version__ = "2.0.0b1"
35 VERSION = tuple(map(int_or_str, __version__.split(".")))
36
37 __all__ = [
38 "AuthenticationError",
39 "AuthenticationWrongNumberOfArgsError",
40 "BlockingConnectionPool",
41 "BusyLoadingError",
42 "ChildDeadlockedError",
43 "Connection",
44 "ConnectionError",
45 "ConnectionPool",
46 "DataError",
47 "from_url",
48 "InvalidResponse",
49 "PubSubError",
50 "ReadOnlyError",
51 "Redis",
52 "RedisError",
53 "ResponseError",
54 "SSLConnection",
55 "StrictRedis",
56 "TimeoutError",
57 "UnixDomainSocketConnection",
58 "WatchError",
59 ]
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aioredis/__init__.py b/aioredis/__init__.py
--- a/aioredis/__init__.py
+++ b/aioredis/__init__.py
@@ -31,7 +31,7 @@
return value
-__version__ = "2.0.0b1"
+__version__ = "2.0.0"
VERSION = tuple(map(int_or_str, __version__.split(".")))
__all__ = [
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,11 +22,13 @@
classifiers = [
"License :: OSI Approved :: MIT License",
- "Development Status :: 4 - Beta",
+ "Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: POSIX",
"Environment :: Web Environment",
|
{"golden_diff": "diff --git a/aioredis/__init__.py b/aioredis/__init__.py\n--- a/aioredis/__init__.py\n+++ b/aioredis/__init__.py\n@@ -31,7 +31,7 @@\n return value\n \n \n-__version__ = \"2.0.0b1\"\n+__version__ = \"2.0.0\"\n VERSION = tuple(map(int_or_str, __version__.split(\".\")))\n \n __all__ = [\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,11 +22,13 @@\n \n classifiers = [\n \"License :: OSI Approved :: MIT License\",\n- \"Development Status :: 4 - Beta\",\n+ \"Development Status :: 5 - Production/Stable\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n+ \"Programming Language :: Python :: 3.8\",\n+ \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Operating System :: POSIX\",\n \"Environment :: Web Environment\",\n", "issue": "[2.0] Update setup.py Trove classifiers\nThe [classifiers](https://github.com/aio-libs/aioredis-py/blob/5a713fff3717094cca63e4a5f4b1cb7d6894a08f/setup.py#L25-L30) currently only list Python 3.6 and 3.7. This should be updated to include all versions that are tested. Or my personal preference is just to delete classifiers for minor versions, since probably no-one will remember to update them when new versions of Python are released.\r\n\r\nIt also indicates the status as 4 - Beta. That should probably change when we release the final 2.0.0.\n", "before_files": [{"content": "import os.path\nimport re\n\nfrom setuptools import find_packages, setup\n\n\ndef read(*parts):\n with open(os.path.join(*parts)) as f:\n return f.read().strip()\n\n\ndef read_version():\n regexp = re.compile(r\"^__version__\\W*=\\W*\\\"([\\d.abrc]+)\\\"\")\n init_py = os.path.join(os.path.dirname(__file__), \"aioredis\", \"__init__.py\")\n with open(init_py) as f:\n for line in f:\n match = regexp.match(line)\n if match is not None:\n return match.group(1)\n raise RuntimeError(f\"Cannot find version in {init_py}\")\n\n\nclassifiers = [\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 4 - Beta\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Operating System :: POSIX\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\",\n \"Framework :: AsyncIO\",\n]\n\nsetup(\n name=\"aioredis\",\n version=read_version(),\n description=\"asyncio (PEP 3156) Redis support\",\n long_description=\"\\n\\n\".join((read(\"README.md\"), read(\"CHANGELOG.md\"))),\n long_description_content_type=\"text/markdown\",\n classifiers=classifiers,\n platforms=[\"POSIX\"],\n url=\"https://github.com/aio-libs/aioredis\",\n license=\"MIT\",\n packages=find_packages(exclude=[\"tests\"]),\n install_requires=[\n \"async-timeout\",\n \"typing-extensions\",\n ],\n extras_require={\n \"hiredis\": 'hiredis>=1.0; implementation_name==\"cpython\"',\n },\n package_data={\"aioredis\": [\"py.typed\"]},\n python_requires=\">=3.6\",\n include_package_data=True,\n)\n", "path": "setup.py"}, {"content": "from aioredis.client import Redis, StrictRedis\nfrom aioredis.connection import (\n BlockingConnectionPool,\n Connection,\n ConnectionPool,\n SSLConnection,\n UnixDomainSocketConnection,\n)\nfrom aioredis.exceptions import (\n 
AuthenticationError,\n AuthenticationWrongNumberOfArgsError,\n BusyLoadingError,\n ChildDeadlockedError,\n ConnectionError,\n DataError,\n InvalidResponse,\n PubSubError,\n ReadOnlyError,\n RedisError,\n ResponseError,\n TimeoutError,\n WatchError,\n)\nfrom aioredis.utils import from_url\n\n\ndef int_or_str(value):\n try:\n return int(value)\n except ValueError:\n return value\n\n\n__version__ = \"2.0.0b1\"\nVERSION = tuple(map(int_or_str, __version__.split(\".\")))\n\n__all__ = [\n \"AuthenticationError\",\n \"AuthenticationWrongNumberOfArgsError\",\n \"BlockingConnectionPool\",\n \"BusyLoadingError\",\n \"ChildDeadlockedError\",\n \"Connection\",\n \"ConnectionError\",\n \"ConnectionPool\",\n \"DataError\",\n \"from_url\",\n \"InvalidResponse\",\n \"PubSubError\",\n \"ReadOnlyError\",\n \"Redis\",\n \"RedisError\",\n \"ResponseError\",\n \"SSLConnection\",\n \"StrictRedis\",\n \"TimeoutError\",\n \"UnixDomainSocketConnection\",\n \"WatchError\",\n]\n", "path": "aioredis/__init__.py"}], "after_files": [{"content": "import os.path\nimport re\n\nfrom setuptools import find_packages, setup\n\n\ndef read(*parts):\n with open(os.path.join(*parts)) as f:\n return f.read().strip()\n\n\ndef read_version():\n regexp = re.compile(r\"^__version__\\W*=\\W*\\\"([\\d.abrc]+)\\\"\")\n init_py = os.path.join(os.path.dirname(__file__), \"aioredis\", \"__init__.py\")\n with open(init_py) as f:\n for line in f:\n match = regexp.match(line)\n if match is not None:\n return match.group(1)\n raise RuntimeError(f\"Cannot find version in {init_py}\")\n\n\nclassifiers = [\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Operating System :: POSIX\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\",\n \"Framework :: AsyncIO\",\n]\n\nsetup(\n name=\"aioredis\",\n version=read_version(),\n description=\"asyncio (PEP 3156) Redis support\",\n long_description=\"\\n\\n\".join((read(\"README.md\"), read(\"CHANGELOG.md\"))),\n long_description_content_type=\"text/markdown\",\n classifiers=classifiers,\n platforms=[\"POSIX\"],\n url=\"https://github.com/aio-libs/aioredis\",\n license=\"MIT\",\n packages=find_packages(exclude=[\"tests\"]),\n install_requires=[\n \"async-timeout\",\n \"typing-extensions\",\n ],\n extras_require={\n \"hiredis\": 'hiredis>=1.0; implementation_name==\"cpython\"',\n },\n package_data={\"aioredis\": [\"py.typed\"]},\n python_requires=\">=3.6\",\n include_package_data=True,\n)\n", "path": "setup.py"}, {"content": "from aioredis.client import Redis, StrictRedis\nfrom aioredis.connection import (\n BlockingConnectionPool,\n Connection,\n ConnectionPool,\n SSLConnection,\n UnixDomainSocketConnection,\n)\nfrom aioredis.exceptions import (\n AuthenticationError,\n AuthenticationWrongNumberOfArgsError,\n BusyLoadingError,\n ChildDeadlockedError,\n ConnectionError,\n DataError,\n InvalidResponse,\n PubSubError,\n ReadOnlyError,\n RedisError,\n ResponseError,\n TimeoutError,\n WatchError,\n)\nfrom aioredis.utils import from_url\n\n\ndef int_or_str(value):\n try:\n return int(value)\n except ValueError:\n return value\n\n\n__version__ = 
\"2.0.0\"\nVERSION = tuple(map(int_or_str, __version__.split(\".\")))\n\n__all__ = [\n \"AuthenticationError\",\n \"AuthenticationWrongNumberOfArgsError\",\n \"BlockingConnectionPool\",\n \"BusyLoadingError\",\n \"ChildDeadlockedError\",\n \"Connection\",\n \"ConnectionError\",\n \"ConnectionPool\",\n \"DataError\",\n \"from_url\",\n \"InvalidResponse\",\n \"PubSubError\",\n \"ReadOnlyError\",\n \"Redis\",\n \"RedisError\",\n \"ResponseError\",\n \"SSLConnection\",\n \"StrictRedis\",\n \"TimeoutError\",\n \"UnixDomainSocketConnection\",\n \"WatchError\",\n]\n", "path": "aioredis/__init__.py"}]}
| 1,398 | 272 |
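The fix above edits both the Trove classifier list and the `__version__` string that `read_version()` pulls out of `aioredis/__init__.py` with a regular expression. A quick sketch of how that same pattern behaves on the old beta string and the new final one:

```python
import re

# Same pattern as read_version() in the setup.py shown above.
regexp = re.compile(r"^__version__\W*=\W*\"([\d.abrc]+)\"")

for line in ('__version__ = "2.0.0b1"', '__version__ = "2.0.0"'):
    match = regexp.match(line)
    assert match is not None
    print(match.group(1))   # prints 2.0.0b1, then 2.0.0
```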
gh_patches_debug_8811
|
rasdani/github-patches
|
git_diff
|
psf__black-2816
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add test for `A᧚ = 0`
Black v19.10b0 fails to parse certain assignments involving unicode identifiers - [playground link here](https://black.now.sh/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ABLADtdAD2IimZxl1N_WjMy7A7oUimP5kl9tNitfjGTgMhZej2xgomiikPHniF7YMrqeF7JYab2JGKtxYQLJtMAAACEQNE3-XEpLQABV0wDcxaqH7bzfQEAAAAABFla).
```python
A᧚
A፩
```
This code is in fact valid Python, as you can confirm by pasting it into a repl or with `compile("A\u19da = 0")`.
Found, as with most of my bugs, via [Hypothesmith](https://github.com/Zac-HD/hypothesmith). Given that this applies to multiple unicode digit characters, it might be due to unicode digits in identifiers?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fuzz.py`
Content:
```
1 """Property-based tests for Black.
2
3 By Zac Hatfield-Dodds, based on my Hypothesmith tool for source code
4 generation. You can run this file with `python`, `pytest`, or (soon)
5 a coverage-guided fuzzer I'm working on.
6 """
7
8 import re
9
10 import hypothesmith
11 from hypothesis import HealthCheck, given, settings, strategies as st
12
13 import black
14 from blib2to3.pgen2.tokenize import TokenError
15
16
17 # This test uses the Hypothesis and Hypothesmith libraries to generate random
18 # syntatically-valid Python source code and run Black in odd modes.
19 @settings(
20 max_examples=1000, # roughly 1k tests/minute, or half that under coverage
21 derandomize=True, # deterministic mode to avoid CI flakiness
22 deadline=None, # ignore Hypothesis' health checks; we already know that
23 suppress_health_check=HealthCheck.all(), # this is slow and filter-heavy.
24 )
25 @given(
26 # Note that while Hypothesmith might generate code unlike that written by
27 # humans, it's a general test that should pass for any *valid* source code.
28 # (so e.g. running it against code scraped of the internet might also help)
29 src_contents=hypothesmith.from_grammar() | hypothesmith.from_node(),
30 # Using randomly-varied modes helps us to exercise less common code paths.
31 mode=st.builds(
32 black.FileMode,
33 line_length=st.just(88) | st.integers(0, 200),
34 string_normalization=st.booleans(),
35 preview=st.booleans(),
36 is_pyi=st.booleans(),
37 magic_trailing_comma=st.booleans(),
38 ),
39 )
40 def test_idempotent_any_syntatically_valid_python(
41 src_contents: str, mode: black.FileMode
42 ) -> None:
43 # Before starting, let's confirm that the input string is valid Python:
44 compile(src_contents, "<string>", "exec") # else the bug is in hypothesmith
45
46 # Then format the code...
47 try:
48 dst_contents = black.format_str(src_contents, mode=mode)
49 except black.InvalidInput:
50 # This is a bug - if it's valid Python code, as above, Black should be
51 # able to cope with it. See issues #970, #1012, #1358, and #1557.
52 # TODO: remove this try-except block when issues are resolved.
53 return
54 except TokenError as e:
55 if ( # Special-case logic for backslashes followed by newlines or end-of-input
56 e.args[0] == "EOF in multi-line statement"
57 and re.search(r"\\($|\r?\n)", src_contents) is not None
58 ):
59 # This is a bug - if it's valid Python code, as above, Black should be
60 # able to cope with it. See issue #1012.
61 # TODO: remove this block when the issue is resolved.
62 return
63 raise
64
65 # And check that we got equivalent and stable output.
66 black.assert_equivalent(src_contents, dst_contents)
67 black.assert_stable(src_contents, dst_contents, mode=mode)
68
69 # Future test: check that pure-python and mypyc versions of black
70 # give identical output for identical input?
71
72
73 if __name__ == "__main__":
74 # Run tests, including shrinking and reporting any known failures.
75 test_idempotent_any_syntatically_valid_python()
76
77 # If Atheris is available, run coverage-guided fuzzing.
78 # (if you want only bounded fuzzing, just use `pytest fuzz.py`)
79 try:
80 import sys
81 import atheris
82 except ImportError:
83 pass
84 else:
85 test = test_idempotent_any_syntatically_valid_python
86 atheris.Setup(sys.argv, test.hypothesis.fuzz_one_input)
87 atheris.Fuzz()
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/fuzz.py b/fuzz.py
--- a/fuzz.py
+++ b/fuzz.py
@@ -48,7 +48,7 @@
dst_contents = black.format_str(src_contents, mode=mode)
except black.InvalidInput:
# This is a bug - if it's valid Python code, as above, Black should be
- # able to cope with it. See issues #970, #1012, #1358, and #1557.
+ # able to cope with it. See issues #970, #1012
# TODO: remove this try-except block when issues are resolved.
return
except TokenError as e:
|
{"golden_diff": "diff --git a/fuzz.py b/fuzz.py\n--- a/fuzz.py\n+++ b/fuzz.py\n@@ -48,7 +48,7 @@\n dst_contents = black.format_str(src_contents, mode=mode)\n except black.InvalidInput:\n # This is a bug - if it's valid Python code, as above, Black should be\n- # able to cope with it. See issues #970, #1012, #1358, and #1557.\n+ # able to cope with it. See issues #970, #1012\n # TODO: remove this try-except block when issues are resolved.\n return\n except TokenError as e:\n", "issue": "Add test for `A\u19da = 0`\nBlack v19.10b0 fails to parse certain assignments involving unicode identifiers - [playground link here](https://black.now.sh/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ABLADtdAD2IimZxl1N_WjMy7A7oUimP5kl9tNitfjGTgMhZej2xgomiikPHniF7YMrqeF7JYab2JGKtxYQLJtMAAACEQNE3-XEpLQABV0wDcxaqH7bzfQEAAAAABFla).\r\n\r\n```python\r\nA\u19da\r\nA\u1369\r\n```\r\n\r\nThis code is in fact valid Python, as you can confirm by pasting it into a repl or with `compile(\"A\\u19da = 0\")`.\r\n\r\nFound, as with most of my bugs, via [Hypothesmith](https://github.com/Zac-HD/hypothesmith). Given that this applies to multiple unicode digit characters, it might be due to unicode digits in identifiers?\n", "before_files": [{"content": "\"\"\"Property-based tests for Black.\n\nBy Zac Hatfield-Dodds, based on my Hypothesmith tool for source code\ngeneration. You can run this file with `python`, `pytest`, or (soon)\na coverage-guided fuzzer I'm working on.\n\"\"\"\n\nimport re\n\nimport hypothesmith\nfrom hypothesis import HealthCheck, given, settings, strategies as st\n\nimport black\nfrom blib2to3.pgen2.tokenize import TokenError\n\n\n# This test uses the Hypothesis and Hypothesmith libraries to generate random\n# syntatically-valid Python source code and run Black in odd modes.\n@settings(\n max_examples=1000, # roughly 1k tests/minute, or half that under coverage\n derandomize=True, # deterministic mode to avoid CI flakiness\n deadline=None, # ignore Hypothesis' health checks; we already know that\n suppress_health_check=HealthCheck.all(), # this is slow and filter-heavy.\n)\n@given(\n # Note that while Hypothesmith might generate code unlike that written by\n # humans, it's a general test that should pass for any *valid* source code.\n # (so e.g. running it against code scraped of the internet might also help)\n src_contents=hypothesmith.from_grammar() | hypothesmith.from_node(),\n # Using randomly-varied modes helps us to exercise less common code paths.\n mode=st.builds(\n black.FileMode,\n line_length=st.just(88) | st.integers(0, 200),\n string_normalization=st.booleans(),\n preview=st.booleans(),\n is_pyi=st.booleans(),\n magic_trailing_comma=st.booleans(),\n ),\n)\ndef test_idempotent_any_syntatically_valid_python(\n src_contents: str, mode: black.FileMode\n) -> None:\n # Before starting, let's confirm that the input string is valid Python:\n compile(src_contents, \"<string>\", \"exec\") # else the bug is in hypothesmith\n\n # Then format the code...\n try:\n dst_contents = black.format_str(src_contents, mode=mode)\n except black.InvalidInput:\n # This is a bug - if it's valid Python code, as above, Black should be\n # able to cope with it. 
See issues #970, #1012, #1358, and #1557.\n # TODO: remove this try-except block when issues are resolved.\n return\n except TokenError as e:\n if ( # Special-case logic for backslashes followed by newlines or end-of-input\n e.args[0] == \"EOF in multi-line statement\"\n and re.search(r\"\\\\($|\\r?\\n)\", src_contents) is not None\n ):\n # This is a bug - if it's valid Python code, as above, Black should be\n # able to cope with it. See issue #1012.\n # TODO: remove this block when the issue is resolved.\n return\n raise\n\n # And check that we got equivalent and stable output.\n black.assert_equivalent(src_contents, dst_contents)\n black.assert_stable(src_contents, dst_contents, mode=mode)\n\n # Future test: check that pure-python and mypyc versions of black\n # give identical output for identical input?\n\n\nif __name__ == \"__main__\":\n # Run tests, including shrinking and reporting any known failures.\n test_idempotent_any_syntatically_valid_python()\n\n # If Atheris is available, run coverage-guided fuzzing.\n # (if you want only bounded fuzzing, just use `pytest fuzz.py`)\n try:\n import sys\n import atheris\n except ImportError:\n pass\n else:\n test = test_idempotent_any_syntatically_valid_python\n atheris.Setup(sys.argv, test.hypothesis.fuzz_one_input)\n atheris.Fuzz()\n", "path": "fuzz.py"}], "after_files": [{"content": "\"\"\"Property-based tests for Black.\n\nBy Zac Hatfield-Dodds, based on my Hypothesmith tool for source code\ngeneration. You can run this file with `python`, `pytest`, or (soon)\na coverage-guided fuzzer I'm working on.\n\"\"\"\n\nimport re\n\nimport hypothesmith\nfrom hypothesis import HealthCheck, given, settings, strategies as st\n\nimport black\nfrom blib2to3.pgen2.tokenize import TokenError\n\n\n# This test uses the Hypothesis and Hypothesmith libraries to generate random\n# syntatically-valid Python source code and run Black in odd modes.\n@settings(\n max_examples=1000, # roughly 1k tests/minute, or half that under coverage\n derandomize=True, # deterministic mode to avoid CI flakiness\n deadline=None, # ignore Hypothesis' health checks; we already know that\n suppress_health_check=HealthCheck.all(), # this is slow and filter-heavy.\n)\n@given(\n # Note that while Hypothesmith might generate code unlike that written by\n # humans, it's a general test that should pass for any *valid* source code.\n # (so e.g. running it against code scraped of the internet might also help)\n src_contents=hypothesmith.from_grammar() | hypothesmith.from_node(),\n # Using randomly-varied modes helps us to exercise less common code paths.\n mode=st.builds(\n black.FileMode,\n line_length=st.just(88) | st.integers(0, 200),\n string_normalization=st.booleans(),\n preview=st.booleans(),\n is_pyi=st.booleans(),\n magic_trailing_comma=st.booleans(),\n ),\n)\ndef test_idempotent_any_syntatically_valid_python(\n src_contents: str, mode: black.FileMode\n) -> None:\n # Before starting, let's confirm that the input string is valid Python:\n compile(src_contents, \"<string>\", \"exec\") # else the bug is in hypothesmith\n\n # Then format the code...\n try:\n dst_contents = black.format_str(src_contents, mode=mode)\n except black.InvalidInput:\n # This is a bug - if it's valid Python code, as above, Black should be\n # able to cope with it. 
See issues #970, #1012\n # TODO: remove this try-except block when issues are resolved.\n return\n except TokenError as e:\n if ( # Special-case logic for backslashes followed by newlines or end-of-input\n e.args[0] == \"EOF in multi-line statement\"\n and re.search(r\"\\\\($|\\r?\\n)\", src_contents) is not None\n ):\n # This is a bug - if it's valid Python code, as above, Black should be\n # able to cope with it. See issue #1012.\n # TODO: remove this block when the issue is resolved.\n return\n raise\n\n # And check that we got equivalent and stable output.\n black.assert_equivalent(src_contents, dst_contents)\n black.assert_stable(src_contents, dst_contents, mode=mode)\n\n # Future test: check that pure-python and mypyc versions of black\n # give identical output for identical input?\n\n\nif __name__ == \"__main__\":\n # Run tests, including shrinking and reporting any known failures.\n test_idempotent_any_syntatically_valid_python()\n\n # If Atheris is available, run coverage-guided fuzzing.\n # (if you want only bounded fuzzing, just use `pytest fuzz.py`)\n try:\n import sys\n import atheris\n except ImportError:\n pass\n else:\n test = test_idempotent_any_syntatically_valid_python\n atheris.Setup(sys.argv, test.hypothesis.fuzz_one_input)\n atheris.Fuzz()\n", "path": "fuzz.py"}]}
| 1,552 | 161 |
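The fuzz test above hinges on the fact that identifiers containing characters such as U+19DA and U+1369 (both in Unicode's Other_ID_Continue set) are legal Python, so Black has to be able to parse them. A tiny standard-library-only check of that premise:

```python
src = "A\u19da = 0\nA\u1369 = 0\n"

compile(src, "<string>", "exec")      # would raise SyntaxError if this were invalid
assert "A\u19da".isidentifier()
assert "A\u1369".isidentifier()
print("both identifiers are valid Python")
```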
gh_patches_debug_52
|
rasdani/github-patches
|
git_diff
|
Anselmoo__spectrafit-655
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Docs]: Using builtin release drafter
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Missing Information in the Docs
https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes
### Anything else?
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spectrafit/__init__.py`
Content:
```
1 """SpectraFit, fast command line tool for fitting data."""
2 __version__ = "0.16.4"
3
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py
--- a/spectrafit/__init__.py
+++ b/spectrafit/__init__.py
@@ -1,2 +1,2 @@
"""SpectraFit, fast command line tool for fitting data."""
-__version__ = "0.16.4"
+__version__ = "1.0.0a0"
|
{"golden_diff": "diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py\n--- a/spectrafit/__init__.py\n+++ b/spectrafit/__init__.py\n@@ -1,2 +1,2 @@\n \"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n-__version__ = \"0.16.4\"\n+__version__ = \"1.0.0a0\"\n", "issue": "[Docs]: Using builtin release drafter\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Missing Information in the Docs\n\nhttps://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes\n\n### Anything else?\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"0.16.4\"\n", "path": "spectrafit/__init__.py"}], "after_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"1.0.0a0\"\n", "path": "spectrafit/__init__.py"}]}
| 376 | 95 |
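The version bump above goes from `0.16.4` to `1.0.0a0`, which PEP 440 treats as an alpha pre-release; release tooling, including auto-generated release notes, usually distinguishes pre-releases from final releases. A small sketch using the third-party `packaging` library (an assumption for illustration; the record does not show SpectraFit using it) to see how that string is interpreted:

```python
from packaging.version import Version

old, new = Version("0.16.4"), Version("1.0.0a0")

assert new.is_prerelease and not old.is_prerelease
assert old < new
print(new.release, new.pre)   # prints (1, 0, 0) ('a', 0)
```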
gh_patches_debug_45047
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-750
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
collapse contributors with no answers in course detail pages
Contributors who didn't get any answers should be collapsed on the results pages, so that the empty answer lines are not shown.
This should also happen if there are answers in the database, but none of them can be seen by the current user.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/results/views.py`
Content:
```
1 from django.core.exceptions import PermissionDenied
2 from django.shortcuts import get_object_or_404, render
3 from django.contrib.auth.decorators import login_required
4
5 from evap.evaluation.models import Semester, Degree, Contribution
6 from evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult
7
8
9 from collections import OrderedDict, namedtuple
10
11
12 @login_required
13 def index(request):
14 semesters = Semester.get_all_with_published_courses()
15
16 return render(request, "results_index.html", dict(semesters=semesters))
17
18
19 @login_required
20 def semester_detail(request, semester_id):
21 semester = get_object_or_404(Semester, id=semester_id)
22 courses = list(semester.course_set.filter(state="published").prefetch_related("degrees"))
23
24 # annotate each course object with its grades
25 for course in courses:
26 course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)
27
28 CourseTuple = namedtuple('CourseTuple', ('courses', 'single_results'))
29
30 courses_by_degree = OrderedDict()
31 for degree in Degree.objects.all():
32 courses_by_degree[degree] = CourseTuple([], [])
33 for course in courses:
34 if course.is_single_result():
35 for degree in course.degrees.all():
36 section = calculate_results(course)[0]
37 result = section.results[0]
38 courses_by_degree[degree].single_results.append((course, result))
39 else:
40 for degree in course.degrees.all():
41 courses_by_degree[degree].courses.append(course)
42
43 template_data = dict(semester=semester, courses_by_degree=courses_by_degree, staff=request.user.is_staff)
44 return render(request, "results_semester_detail.html", template_data)
45
46
47 @login_required
48 def course_detail(request, semester_id, course_id):
49 semester = get_object_or_404(Semester, id=semester_id)
50 course = get_object_or_404(semester.course_set, id=course_id)
51
52 if not course.can_user_see_results(request.user):
53 raise PermissionDenied
54
55 sections = calculate_results(course)
56
57 public_view = request.GET.get('public_view', 'false') # default: show own view
58 public_view = {'true': True, 'false': False}.get(public_view.lower()) # convert parameter to boolean
59
60 represented_users = list(request.user.represented_users.all())
61 represented_users.append(request.user)
62
63 for section in sections:
64 results = []
65 for result in section.results:
66 if isinstance(result, TextResult):
67 answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, represented_users, answer, public_view)]
68 if answers:
69 results.append(TextResult(question=result.question, answers=answers))
70 else:
71 results.append(result)
72 section.results[:] = results
73
74 # filter empty sections and group by contributor
75 course_sections = []
76 contributor_sections = OrderedDict()
77 for section in sections:
78 if not section.results:
79 continue
80 if section.contributor is None:
81 course_sections.append(section)
82 else:
83 contributor_sections.setdefault(section.contributor, []).append(section)
84
85 # show a warning if course is still in evaluation (for staff preview)
86 evaluation_warning = course.state != 'published'
87
88 # results for a course might not be visible because there are not enough answers
89 # but it can still be "published" e.g. to show the comment results to contributors.
90 # users who can open the results page see a warning message in this case
91 sufficient_votes_warning = not course.can_publish_grades
92
93 show_grades = request.user.is_staff or course.can_publish_grades
94
95 course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)
96
97 template_data = dict(
98 course=course,
99 course_sections=course_sections,
100 contributor_sections=contributor_sections,
101 evaluation_warning=evaluation_warning,
102 sufficient_votes_warning=sufficient_votes_warning,
103 show_grades=show_grades,
104 staff=request.user.is_staff,
105 contributor=course.is_user_contributor_or_delegate(request.user),
106 can_download_grades=request.user.can_download_grades,
107 public_view=public_view)
108 return render(request, "results_course_detail.html", template_data)
109
110 def user_can_see_text_answer(user, represented_users, text_answer, public_view=False):
111 if public_view:
112 return False
113 if user.is_staff:
114 return True
115 contributor = text_answer.contribution.contributor
116 if text_answer.is_private:
117 return contributor == user
118 if text_answer.is_published:
119 if contributor in represented_users:
120 return True
121 if text_answer.contribution.course.contributions.filter(contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():
122 return True
123 if text_answer.contribution.is_general and \
124 text_answer.contribution.course.contributions.filter(contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():
125 return True
126
127 return False
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/results/views.py b/evap/results/views.py
--- a/evap/results/views.py
+++ b/evap/results/views.py
@@ -3,8 +3,7 @@
from django.contrib.auth.decorators import login_required
from evap.evaluation.models import Semester, Degree, Contribution
-from evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult
-
+from evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult, RatingResult
from collections import OrderedDict, namedtuple
@@ -21,7 +20,7 @@
semester = get_object_or_404(Semester, id=semester_id)
courses = list(semester.course_set.filter(state="published").prefetch_related("degrees"))
- # annotate each course object with its grades
+ # Annotate each course object with its grades.
for course in courses:
course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)
@@ -54,8 +53,8 @@
sections = calculate_results(course)
- public_view = request.GET.get('public_view', 'false') # default: show own view
- public_view = {'true': True, 'false': False}.get(public_view.lower()) # convert parameter to boolean
+ public_view = request.GET.get('public_view', 'false') # Default: show own view.
+ public_view = {'true': True, 'false': False}.get(public_view.lower()) # Convert parameter to boolean.
represented_users = list(request.user.represented_users.all())
represented_users.append(request.user)
@@ -71,7 +70,7 @@
results.append(result)
section.results[:] = results
- # filter empty sections and group by contributor
+ # Filter empty sections and group by contributor.
course_sections = []
contributor_sections = OrderedDict()
for section in sections:
@@ -80,14 +79,21 @@
if section.contributor is None:
course_sections.append(section)
else:
- contributor_sections.setdefault(section.contributor, []).append(section)
+ contributor_sections.setdefault(section.contributor,
+ {'total_votes': 0, 'sections': []})['sections'].append(section)
+
+ # Sum up all Sections for this contributor.
+ # If section is not a RatingResult:
+ # Add 1 as we assume it is a TextResult or something similar that should be displayed.
+ contributor_sections[section.contributor]['total_votes'] +=\
+ sum([s.total_count if isinstance(s, RatingResult) else 1 for s in section.results])
- # show a warning if course is still in evaluation (for staff preview)
+ # Show a warning if course is still in evaluation (for staff preview).
evaluation_warning = course.state != 'published'
- # results for a course might not be visible because there are not enough answers
+ # Results for a course might not be visible because there are not enough answers
# but it can still be "published" e.g. to show the comment results to contributors.
- # users who can open the results page see a warning message in this case
+ # Users who can open the results page see a warning message in this case.
sufficient_votes_warning = not course.can_publish_grades
show_grades = request.user.is_staff or course.can_publish_grades
@@ -107,6 +113,7 @@
public_view=public_view)
return render(request, "results_course_detail.html", template_data)
+
def user_can_see_text_answer(user, represented_users, text_answer, public_view=False):
if public_view:
return False
@@ -118,10 +125,11 @@
if text_answer.is_published:
if contributor in represented_users:
return True
- if text_answer.contribution.course.contributions.filter(contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():
+ if text_answer.contribution.course.contributions.filter(
+ contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():
return True
- if text_answer.contribution.is_general and \
- text_answer.contribution.course.contributions.filter(contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():
+ if text_answer.contribution.is_general and text_answer.contribution.course.contributions.filter(
+ contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():
return True
return False
|
{"golden_diff": "diff --git a/evap/results/views.py b/evap/results/views.py\n--- a/evap/results/views.py\n+++ b/evap/results/views.py\n@@ -3,8 +3,7 @@\n from django.contrib.auth.decorators import login_required\n \n from evap.evaluation.models import Semester, Degree, Contribution\n-from evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult\n-\n+from evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult, RatingResult\n \n from collections import OrderedDict, namedtuple\n \n@@ -21,7 +20,7 @@\n semester = get_object_or_404(Semester, id=semester_id)\n courses = list(semester.course_set.filter(state=\"published\").prefetch_related(\"degrees\"))\n \n- # annotate each course object with its grades\n+ # Annotate each course object with its grades.\n for course in courses:\n course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)\n \n@@ -54,8 +53,8 @@\n \n sections = calculate_results(course)\n \n- public_view = request.GET.get('public_view', 'false') # default: show own view\n- public_view = {'true': True, 'false': False}.get(public_view.lower()) # convert parameter to boolean\n+ public_view = request.GET.get('public_view', 'false') # Default: show own view.\n+ public_view = {'true': True, 'false': False}.get(public_view.lower()) # Convert parameter to boolean.\n \n represented_users = list(request.user.represented_users.all())\n represented_users.append(request.user)\n@@ -71,7 +70,7 @@\n results.append(result)\n section.results[:] = results\n \n- # filter empty sections and group by contributor\n+ # Filter empty sections and group by contributor.\n course_sections = []\n contributor_sections = OrderedDict()\n for section in sections:\n@@ -80,14 +79,21 @@\n if section.contributor is None:\n course_sections.append(section)\n else:\n- contributor_sections.setdefault(section.contributor, []).append(section)\n+ contributor_sections.setdefault(section.contributor,\n+ {'total_votes': 0, 'sections': []})['sections'].append(section)\n+\n+ # Sum up all Sections for this contributor.\n+ # If section is not a RatingResult:\n+ # Add 1 as we assume it is a TextResult or something similar that should be displayed.\n+ contributor_sections[section.contributor]['total_votes'] +=\\\n+ sum([s.total_count if isinstance(s, RatingResult) else 1 for s in section.results])\n \n- # show a warning if course is still in evaluation (for staff preview)\n+ # Show a warning if course is still in evaluation (for staff preview).\n evaluation_warning = course.state != 'published'\n \n- # results for a course might not be visible because there are not enough answers\n+ # Results for a course might not be visible because there are not enough answers\n # but it can still be \"published\" e.g. 
to show the comment results to contributors.\n- # users who can open the results page see a warning message in this case\n+ # Users who can open the results page see a warning message in this case.\n sufficient_votes_warning = not course.can_publish_grades\n \n show_grades = request.user.is_staff or course.can_publish_grades\n@@ -107,6 +113,7 @@\n public_view=public_view)\n return render(request, \"results_course_detail.html\", template_data)\n \n+\n def user_can_see_text_answer(user, represented_users, text_answer, public_view=False):\n if public_view:\n return False\n@@ -118,10 +125,11 @@\n if text_answer.is_published:\n if contributor in represented_users:\n return True\n- if text_answer.contribution.course.contributions.filter(contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():\n+ if text_answer.contribution.course.contributions.filter(\n+ contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():\n return True\n- if text_answer.contribution.is_general and \\\n- text_answer.contribution.course.contributions.filter(contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():\n+ if text_answer.contribution.is_general and text_answer.contribution.course.contributions.filter(\n+ contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():\n return True\n \n return False\n", "issue": "collapse contributors with no answers in course detail pages\nContributors who didn't get any answers should be collapsed on the results pages, so that the empty answer lines are not shown.\nThis should also happen if there are answers in the database, but none of them can be seen by the current user.\n\n\n\n", "before_files": [{"content": "from django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404, render\nfrom django.contrib.auth.decorators import login_required\n\nfrom evap.evaluation.models import Semester, Degree, Contribution\nfrom evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult\n\n\nfrom collections import OrderedDict, namedtuple\n\n\n@login_required\ndef index(request):\n semesters = Semester.get_all_with_published_courses()\n\n return render(request, \"results_index.html\", dict(semesters=semesters))\n\n\n@login_required\ndef semester_detail(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n courses = list(semester.course_set.filter(state=\"published\").prefetch_related(\"degrees\"))\n\n # annotate each course object with its grades\n for course in courses:\n course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)\n\n CourseTuple = namedtuple('CourseTuple', ('courses', 'single_results'))\n\n courses_by_degree = OrderedDict()\n for degree in Degree.objects.all():\n courses_by_degree[degree] = CourseTuple([], [])\n for course in courses:\n if course.is_single_result():\n for degree in course.degrees.all():\n section = calculate_results(course)[0]\n result = section.results[0]\n courses_by_degree[degree].single_results.append((course, result))\n else:\n for degree in course.degrees.all():\n courses_by_degree[degree].courses.append(course)\n\n template_data = dict(semester=semester, courses_by_degree=courses_by_degree, staff=request.user.is_staff)\n return render(request, \"results_semester_detail.html\", template_data)\n\n\n@login_required\ndef course_detail(request, semester_id, course_id):\n semester = 
get_object_or_404(Semester, id=semester_id)\n course = get_object_or_404(semester.course_set, id=course_id)\n\n if not course.can_user_see_results(request.user):\n raise PermissionDenied\n\n sections = calculate_results(course)\n\n public_view = request.GET.get('public_view', 'false') # default: show own view\n public_view = {'true': True, 'false': False}.get(public_view.lower()) # convert parameter to boolean\n\n represented_users = list(request.user.represented_users.all())\n represented_users.append(request.user)\n\n for section in sections:\n results = []\n for result in section.results:\n if isinstance(result, TextResult):\n answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, represented_users, answer, public_view)]\n if answers:\n results.append(TextResult(question=result.question, answers=answers))\n else:\n results.append(result)\n section.results[:] = results\n\n # filter empty sections and group by contributor\n course_sections = []\n contributor_sections = OrderedDict()\n for section in sections:\n if not section.results:\n continue\n if section.contributor is None:\n course_sections.append(section)\n else:\n contributor_sections.setdefault(section.contributor, []).append(section)\n\n # show a warning if course is still in evaluation (for staff preview)\n evaluation_warning = course.state != 'published'\n\n # results for a course might not be visible because there are not enough answers\n # but it can still be \"published\" e.g. to show the comment results to contributors.\n # users who can open the results page see a warning message in this case\n sufficient_votes_warning = not course.can_publish_grades\n\n show_grades = request.user.is_staff or course.can_publish_grades\n\n course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)\n\n template_data = dict(\n course=course,\n course_sections=course_sections,\n contributor_sections=contributor_sections,\n evaluation_warning=evaluation_warning,\n sufficient_votes_warning=sufficient_votes_warning,\n show_grades=show_grades,\n staff=request.user.is_staff,\n contributor=course.is_user_contributor_or_delegate(request.user),\n can_download_grades=request.user.can_download_grades,\n public_view=public_view)\n return render(request, \"results_course_detail.html\", template_data)\n\ndef user_can_see_text_answer(user, represented_users, text_answer, public_view=False):\n if public_view:\n return False\n if user.is_staff:\n return True\n contributor = text_answer.contribution.contributor\n if text_answer.is_private:\n return contributor == user\n if text_answer.is_published:\n if contributor in represented_users:\n return True\n if text_answer.contribution.course.contributions.filter(contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():\n return True\n if text_answer.contribution.is_general and \\\n text_answer.contribution.course.contributions.filter(contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():\n return True\n\n return False\n", "path": "evap/results/views.py"}], "after_files": [{"content": "from django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404, render\nfrom django.contrib.auth.decorators import login_required\n\nfrom evap.evaluation.models import Semester, Degree, Contribution\nfrom evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult, RatingResult\n\nfrom collections import OrderedDict, 
namedtuple\n\n\n@login_required\ndef index(request):\n semesters = Semester.get_all_with_published_courses()\n\n return render(request, \"results_index.html\", dict(semesters=semesters))\n\n\n@login_required\ndef semester_detail(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n courses = list(semester.course_set.filter(state=\"published\").prefetch_related(\"degrees\"))\n\n # Annotate each course object with its grades.\n for course in courses:\n course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)\n\n CourseTuple = namedtuple('CourseTuple', ('courses', 'single_results'))\n\n courses_by_degree = OrderedDict()\n for degree in Degree.objects.all():\n courses_by_degree[degree] = CourseTuple([], [])\n for course in courses:\n if course.is_single_result():\n for degree in course.degrees.all():\n section = calculate_results(course)[0]\n result = section.results[0]\n courses_by_degree[degree].single_results.append((course, result))\n else:\n for degree in course.degrees.all():\n courses_by_degree[degree].courses.append(course)\n\n template_data = dict(semester=semester, courses_by_degree=courses_by_degree, staff=request.user.is_staff)\n return render(request, \"results_semester_detail.html\", template_data)\n\n\n@login_required\ndef course_detail(request, semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n course = get_object_or_404(semester.course_set, id=course_id)\n\n if not course.can_user_see_results(request.user):\n raise PermissionDenied\n\n sections = calculate_results(course)\n\n public_view = request.GET.get('public_view', 'false') # Default: show own view.\n public_view = {'true': True, 'false': False}.get(public_view.lower()) # Convert parameter to boolean.\n\n represented_users = list(request.user.represented_users.all())\n represented_users.append(request.user)\n\n for section in sections:\n results = []\n for result in section.results:\n if isinstance(result, TextResult):\n answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, represented_users, answer, public_view)]\n if answers:\n results.append(TextResult(question=result.question, answers=answers))\n else:\n results.append(result)\n section.results[:] = results\n\n # Filter empty sections and group by contributor.\n course_sections = []\n contributor_sections = OrderedDict()\n for section in sections:\n if not section.results:\n continue\n if section.contributor is None:\n course_sections.append(section)\n else:\n contributor_sections.setdefault(section.contributor,\n {'total_votes': 0, 'sections': []})['sections'].append(section)\n\n # Sum up all Sections for this contributor.\n # If section is not a RatingResult:\n # Add 1 as we assume it is a TextResult or something similar that should be displayed.\n contributor_sections[section.contributor]['total_votes'] +=\\\n sum([s.total_count if isinstance(s, RatingResult) else 1 for s in section.results])\n\n # Show a warning if course is still in evaluation (for staff preview).\n evaluation_warning = course.state != 'published'\n\n # Results for a course might not be visible because there are not enough answers\n # but it can still be \"published\" e.g. 
to show the comment results to contributors.\n # Users who can open the results page see a warning message in this case.\n sufficient_votes_warning = not course.can_publish_grades\n\n show_grades = request.user.is_staff or course.can_publish_grades\n\n course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)\n\n template_data = dict(\n course=course,\n course_sections=course_sections,\n contributor_sections=contributor_sections,\n evaluation_warning=evaluation_warning,\n sufficient_votes_warning=sufficient_votes_warning,\n show_grades=show_grades,\n staff=request.user.is_staff,\n contributor=course.is_user_contributor_or_delegate(request.user),\n can_download_grades=request.user.can_download_grades,\n public_view=public_view)\n return render(request, \"results_course_detail.html\", template_data)\n\n\ndef user_can_see_text_answer(user, represented_users, text_answer, public_view=False):\n if public_view:\n return False\n if user.is_staff:\n return True\n contributor = text_answer.contribution.contributor\n if text_answer.is_private:\n return contributor == user\n if text_answer.is_published:\n if contributor in represented_users:\n return True\n if text_answer.contribution.course.contributions.filter(\n contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():\n return True\n if text_answer.contribution.is_general and text_answer.contribution.course.contributions.filter(\n contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():\n return True\n\n return False\n", "path": "evap/results/views.py"}]}
| 1,747 | 1,003 |
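Note on the EvaP record above: the golden diff computes a per-contributor `total_votes`, but the template change that actually collapses empty contributor blocks is not part of this record. A minimal, self-contained sketch of the same counting rule, using stand-in result classes rather than EvaP's real `RatingResult`/`TextResult`, shows why contributors whose answers were all filtered out end up with zero and can be collapsed:

```python
from collections import namedtuple

# Stand-ins for EvaP's result types; the real classes live in evap.evaluation.tools.
RatingResult = namedtuple("RatingResult", ["question", "total_count"])
TextResult = namedtuple("TextResult", ["question", "answers"])


def contributor_total_votes(sections):
    """Apply the diff's rule: rating results add their vote count, anything else adds 1."""
    return sum(
        result.total_count if isinstance(result, RatingResult) else 1
        for results in sections
        for result in results
    )


# All ratings have zero votes and no text answers survived filtering -> 0 -> collapse.
print(contributor_total_votes([[RatingResult("Q1", 0), RatingResult("Q2", 0)]]))
# Five votes plus one visible text answer -> 6 -> keep the block expanded.
print(contributor_total_votes([[RatingResult("Q1", 5)], [TextResult("Q2", ["fine"])]]))
```

In the template (not shown in this record), a guard along the lines of `{% if contributor_data.total_votes %}` is presumably what switches between the expanded and collapsed rendering.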
gh_patches_debug_631
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-2042
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.121
On the docket:
+ [x] Building Pex with requirements.txt that includes local directory + Python version specifier fails #2037
+ [x] Failed to resolve compatible distributions when building Pex from .whl with local dependencies #2038
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.120"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.120"
+__version__ = "2.1.121"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.120\"\n+__version__ = \"2.1.121\"\n", "issue": "Release 2.1.121\nOn the docket:\r\n+ [x] Building Pex with requirements.txt that includes local directory + Python version specifier fails #2037 \r\n+ [x] Failed to resolve compatible distributions when building Pex from .whl with local dependencies #2038 \r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.120\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.121\"\n", "path": "pex/version.py"}]}
| 376 | 98 |
gh_patches_debug_25893
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-551
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use predicates to prevent access to items if no phase has started yet
With topicprio and polls items are created before any phase has been started.
On the project_detail page a according message and no content is shown.
This is checked via the `PhaseDispatchMixin`.
As for items it is not possible to use the `PhaseDispatchMixin` a new
predicate has been introduced which tests if a phase has been started
yet. Be aware, that this does not imply an active phase.
If no phase has been started yet access to the items should be prevented.
But if participation did finish the items should still be available.
See 362a8ce6a79888b2cc1dbdeb789107a156c5d15f for reference.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/polls/rules.py`
Content:
```
1 import rules
2 from rules.predicates import is_superuser
3
4 from adhocracy4.modules import predicates as module_predicates
5
6 from . import models
7
8 rules.add_perm(
9 'meinberlin_polls.change_poll',
10 is_superuser | module_predicates.is_context_initiator
11 )
12
13 rules.add_perm(
14 'meinberlin_polls.view_poll',
15 module_predicates.is_public_context
16 )
17
18 rules.add_perm(
19 'meinberlin_polls.comment_poll',
20 module_predicates.is_allowed_comment_item
21 )
22
23 rules.add_perm(
24 'meinberlin_polls.add_vote',
25 module_predicates.is_allowed_add_item(models.Vote)
26 )
27
28 rules.add_perm(
29 'meinberlin_polls.change_vote',
30 module_predicates.is_allowed_add_item(models.Vote)
31 )
32
```
Path: `apps/topicprio/rules.py`
Content:
```
1 import rules
2
3 from adhocracy4.modules import predicates as module_predicates
4
5 rules.add_perm(
6 'meinberlin_topicprio.add_topic',
7 module_predicates.is_project_admin
8 )
9
10 rules.add_perm(
11 'meinberlin_topicprio.change_topic',
12 module_predicates.is_project_admin
13 )
14
15 rules.add_perm(
16 'meinberlin_topicprio.rate_topic',
17 module_predicates.is_allowed_rate_item
18 )
19
20 rules.add_perm(
21 'meinberlin_topicprio.comment_topic',
22 module_predicates.is_allowed_comment_item
23 )
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/polls/rules.py b/apps/polls/rules.py
--- a/apps/polls/rules.py
+++ b/apps/polls/rules.py
@@ -2,6 +2,7 @@
from rules.predicates import is_superuser
from adhocracy4.modules import predicates as module_predicates
+from apps.contrib import predicates as contrib_predicates
from . import models
@@ -12,7 +13,9 @@
rules.add_perm(
'meinberlin_polls.view_poll',
- module_predicates.is_public_context
+ (module_predicates.is_project_admin |
+ (module_predicates.is_allowed_view_item &
+ contrib_predicates.has_context_started))
)
rules.add_perm(
diff --git a/apps/topicprio/rules.py b/apps/topicprio/rules.py
--- a/apps/topicprio/rules.py
+++ b/apps/topicprio/rules.py
@@ -1,6 +1,7 @@
import rules
from adhocracy4.modules import predicates as module_predicates
+from apps.contrib import predicates as contrib_predicates
rules.add_perm(
'meinberlin_topicprio.add_topic',
@@ -12,6 +13,13 @@
module_predicates.is_project_admin
)
+rules.add_perm(
+ 'meinberlin_topicprio.view_topic',
+ (module_predicates.is_project_admin |
+ (module_predicates.is_allowed_view_item &
+ contrib_predicates.has_context_started))
+)
+
rules.add_perm(
'meinberlin_topicprio.rate_topic',
module_predicates.is_allowed_rate_item
|
{"golden_diff": "diff --git a/apps/polls/rules.py b/apps/polls/rules.py\n--- a/apps/polls/rules.py\n+++ b/apps/polls/rules.py\n@@ -2,6 +2,7 @@\n from rules.predicates import is_superuser\n \n from adhocracy4.modules import predicates as module_predicates\n+from apps.contrib import predicates as contrib_predicates\n \n from . import models\n \n@@ -12,7 +13,9 @@\n \n rules.add_perm(\n 'meinberlin_polls.view_poll',\n- module_predicates.is_public_context\n+ (module_predicates.is_project_admin |\n+ (module_predicates.is_allowed_view_item &\n+ contrib_predicates.has_context_started))\n )\n \n rules.add_perm(\ndiff --git a/apps/topicprio/rules.py b/apps/topicprio/rules.py\n--- a/apps/topicprio/rules.py\n+++ b/apps/topicprio/rules.py\n@@ -1,6 +1,7 @@\n import rules\n \n from adhocracy4.modules import predicates as module_predicates\n+from apps.contrib import predicates as contrib_predicates\n \n rules.add_perm(\n 'meinberlin_topicprio.add_topic',\n@@ -12,6 +13,13 @@\n module_predicates.is_project_admin\n )\n \n+rules.add_perm(\n+ 'meinberlin_topicprio.view_topic',\n+ (module_predicates.is_project_admin |\n+ (module_predicates.is_allowed_view_item &\n+ contrib_predicates.has_context_started))\n+)\n+\n rules.add_perm(\n 'meinberlin_topicprio.rate_topic',\n module_predicates.is_allowed_rate_item\n", "issue": "Use predicates to prevent access to items if no phase has started yet\nWith topicprio and polls items are created before any phase has been started.\r\nOn the project_detail page a according message and no content is shown.\r\nThis is checked via the `PhaseDispatchMixin`.\r\nAs for items it is not possible to use the `PhaseDispatchMixin` a new\r\npredicate has been introduced which tests if a phase has been started\r\nyet. Be aware, that this does not imply an active phase.\r\nIf no phase has been started yet access to the items should be prevented. \r\nBut if participation did finish the items should still be available.\r\nSee 362a8ce6a79888b2cc1dbdeb789107a156c5d15f for reference. \n", "before_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.modules import predicates as module_predicates\n\nfrom . import models\n\nrules.add_perm(\n 'meinberlin_polls.change_poll',\n is_superuser | module_predicates.is_context_initiator\n)\n\nrules.add_perm(\n 'meinberlin_polls.view_poll',\n module_predicates.is_public_context\n)\n\nrules.add_perm(\n 'meinberlin_polls.comment_poll',\n module_predicates.is_allowed_comment_item\n)\n\nrules.add_perm(\n 'meinberlin_polls.add_vote',\n module_predicates.is_allowed_add_item(models.Vote)\n)\n\nrules.add_perm(\n 'meinberlin_polls.change_vote',\n module_predicates.is_allowed_add_item(models.Vote)\n)\n", "path": "apps/polls/rules.py"}, {"content": "import rules\n\nfrom adhocracy4.modules import predicates as module_predicates\n\nrules.add_perm(\n 'meinberlin_topicprio.add_topic',\n module_predicates.is_project_admin\n)\n\nrules.add_perm(\n 'meinberlin_topicprio.change_topic',\n module_predicates.is_project_admin\n)\n\nrules.add_perm(\n 'meinberlin_topicprio.rate_topic',\n module_predicates.is_allowed_rate_item\n)\n\nrules.add_perm(\n 'meinberlin_topicprio.comment_topic',\n module_predicates.is_allowed_comment_item\n)\n", "path": "apps/topicprio/rules.py"}], "after_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.modules import predicates as module_predicates\nfrom apps.contrib import predicates as contrib_predicates\n\nfrom . 
import models\n\nrules.add_perm(\n 'meinberlin_polls.change_poll',\n is_superuser | module_predicates.is_context_initiator\n)\n\nrules.add_perm(\n 'meinberlin_polls.view_poll',\n (module_predicates.is_project_admin |\n (module_predicates.is_allowed_view_item &\n contrib_predicates.has_context_started))\n)\n\nrules.add_perm(\n 'meinberlin_polls.comment_poll',\n module_predicates.is_allowed_comment_item\n)\n\nrules.add_perm(\n 'meinberlin_polls.add_vote',\n module_predicates.is_allowed_add_item(models.Vote)\n)\n\nrules.add_perm(\n 'meinberlin_polls.change_vote',\n module_predicates.is_allowed_add_item(models.Vote)\n)\n", "path": "apps/polls/rules.py"}, {"content": "import rules\n\nfrom adhocracy4.modules import predicates as module_predicates\nfrom apps.contrib import predicates as contrib_predicates\n\nrules.add_perm(\n 'meinberlin_topicprio.add_topic',\n module_predicates.is_project_admin\n)\n\nrules.add_perm(\n 'meinberlin_topicprio.change_topic',\n module_predicates.is_project_admin\n)\n\nrules.add_perm(\n 'meinberlin_topicprio.view_topic',\n (module_predicates.is_project_admin |\n (module_predicates.is_allowed_view_item &\n contrib_predicates.has_context_started))\n)\n\nrules.add_perm(\n 'meinberlin_topicprio.rate_topic',\n module_predicates.is_allowed_rate_item\n)\n\nrules.add_perm(\n 'meinberlin_topicprio.comment_topic',\n module_predicates.is_allowed_comment_item\n)\n", "path": "apps/topicprio/rules.py"}]}
| 831 | 340 |
gh_patches_debug_27110
|
rasdani/github-patches
|
git_diff
|
RedHatInsights__insights-core-3225
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need to add the Yum updates datasource to the documentation
The [PR](https://github.com/RedHatInsights/insights-core/pull/2993/files#diff-22151ef794ba196097984a47bf24b6759c261de6dc062ac541da099084e5c50a) adding this datasource did not add the datasource to the documentation [here](https://github.com/RedHatInsights/insights-core/blob/master/docs/custom_datasources_index.rst).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `insights/specs/datasources/yum_updates.py`
Content:
```
1 """
2 Custom datasource for collecting yum updates
3 """
4 import json
5 import time
6
7 from insights import datasource, HostContext, SkipComponent
8 from insights.components.rhel_version import IsRhel7
9 from insights.core.spec_factory import DatasourceProvider
10
11 sorted_cmp = None
12 try:
13 # cmp_to_key is not available in python 2.6, but it has sorted function which accepts cmp function
14 def sorted_cmp(it, cmp):
15 from functools import cmp_to_key
16 return sorted(it, key=cmp_to_key(cmp))
17 except ImportError:
18 sorted_cmp = sorted
19
20
21 class UpdatesManager:
22 """ Performs package resolution on yum based systems """
23 def __init__(self):
24 import yum
25
26 self.base = yum.YumBase()
27 self.base.doGenericSetup(cache=1)
28 self.releasever = self.base.conf.yumvar['releasever']
29 self.basearch = self.base.conf.yumvar['basearch']
30 self.packages = []
31 self.repos = []
32 self.updict = {}
33
34 def __enter__(self):
35 return self
36
37 def __exit__(self, *args):
38 pass
39
40 @staticmethod
41 def pkg_cmp(a, b):
42 vercmp = a.verCMP(b)
43 if vercmp != 0:
44 return vercmp
45 if a.repoid != b.repoid:
46 return -1 if a.repoid < b.repoid else 1
47 return 0
48
49 def sorted_pkgs(self, pkgs):
50 return sorted_cmp(pkgs, self.pkg_cmp)
51
52 def load(self):
53 self.base.doRepoSetup()
54 self.base.doSackSetup()
55 self.packages = self.base.pkgSack.returnPackages()
56 self.repos = self.base.repos.repos
57 self._build_updict()
58
59 def _build_updict(self):
60 self.updict = {}
61 for pkg in self.packages:
62 self.updict.setdefault(pkg.na, []).append(pkg)
63
64 def enabled_repos(self):
65 return [repo.id for repo in self.base.repos.listEnabled()]
66
67 def installed_packages(self):
68 return self.base.rpmdb.returnPackages()
69
70 def updates(self, pkg):
71 nevra = pkg.nevra
72 updates_list = []
73 for upg in self.updict[pkg.na]:
74 if upg.verGT(pkg):
75 updates_list.append(upg)
76 return nevra, updates_list
77
78 @staticmethod
79 def pkg_nevra(pkg):
80 return "{}-{}:{}-{}.{}".format(pkg.name, pkg.epoch, pkg.version, pkg.release, pkg.arch)
81
82 @staticmethod
83 def pkg_repo(pkg):
84 return pkg.repoid
85
86 def advisory(self, pkg):
87 adv = self.base.upinfo.get_notice(pkg.nvr)
88 if adv:
89 return adv.get_metadata()['update_id']
90 return None
91
92 @staticmethod
93 def last_update():
94 return 0
95
96
97 @datasource(HostContext, [IsRhel7])
98 def yum_updates(_broker):
99 """
100 This datasource provides a list of available updates on the system.
101 It uses the yum python library installed locally, and collects list of
102 available package updates, along with advisory info where applicable.
103 """
104
105 if not _broker.get(IsRhel7):
106 raise SkipComponent("Yum updates currently only works on RHEL 7")
107
108 with UpdatesManager() as umgr:
109 umgr.load()
110
111 response = {
112 "releasever": umgr.releasever,
113 "basearch": umgr.basearch,
114 "update_list": {},
115 }
116
117 data = {'package_list': umgr.installed_packages()}
118 updates = {}
119 for pkg in data["package_list"]:
120 (nevra, updates_list) = umgr.updates(pkg)
121 updates[nevra] = updates_list
122 for (nevra, update_list) in updates.items():
123 if update_list:
124 out_list = []
125 for pkg in umgr.sorted_pkgs(update_list):
126 pkg_dict = {
127 "package": umgr.pkg_nevra(pkg),
128 "repository": umgr.pkg_repo(pkg),
129 "basearch": response["basearch"],
130 "releasever": response["releasever"],
131 }
132 erratum = umgr.advisory(pkg)
133 if erratum:
134 pkg_dict["erratum"] = erratum
135 out_list.append(pkg_dict)
136 response["update_list"][nevra] = {"available_updates": out_list}
137
138 ts = umgr.last_update()
139 if ts:
140 response["metadata_time"] = time.strftime("%FT%TZ", time.gmtime(ts))
141 return DatasourceProvider(content=json.dumps(response), relative_path='insights_commands/yum_updates_list')
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/insights/specs/datasources/yum_updates.py b/insights/specs/datasources/yum_updates.py
--- a/insights/specs/datasources/yum_updates.py
+++ b/insights/specs/datasources/yum_updates.py
@@ -70,7 +70,7 @@
def updates(self, pkg):
nevra = pkg.nevra
updates_list = []
- for upg in self.updict[pkg.na]:
+ for upg in self.updict.get(pkg.na, []):
if upg.verGT(pkg):
updates_list.append(upg)
return nevra, updates_list
@@ -100,6 +100,32 @@
This datasource provides a list of available updates on the system.
It uses the yum python library installed locally, and collects list of
available package updates, along with advisory info where applicable.
+
+ Sample data returned::
+
+ {
+ "releasever": "8",
+ "basearch": "x86_64",
+ "update_list": {
+ "NetworkManager-1:1.22.8-4.el8.x86_64": {
+ "available_updates": [
+ {
+ "package": "NetworkManager-1:1.22.8-5.el8_2.x86_64",
+ "repository": "rhel-8-for-x86_64-baseos-rpms",
+ "basearch": "x86_64",
+ "releasever": "8",
+ "erratum": "RHSA-2020:3011"
+ }
+ ]
+ }
+ },
+ "metadata_time": "2021-01-01T09:39:45Z"
+ }
+
+ Returns:
+ list: List of available updates
+ Raises:
+ SkipComponent: Raised on systems different than RHEL 7
"""
if not _broker.get(IsRhel7):
|
{"golden_diff": "diff --git a/insights/specs/datasources/yum_updates.py b/insights/specs/datasources/yum_updates.py\n--- a/insights/specs/datasources/yum_updates.py\n+++ b/insights/specs/datasources/yum_updates.py\n@@ -70,7 +70,7 @@\n def updates(self, pkg):\n nevra = pkg.nevra\n updates_list = []\n- for upg in self.updict[pkg.na]:\n+ for upg in self.updict.get(pkg.na, []):\n if upg.verGT(pkg):\n updates_list.append(upg)\n return nevra, updates_list\n@@ -100,6 +100,32 @@\n This datasource provides a list of available updates on the system.\n It uses the yum python library installed locally, and collects list of\n available package updates, along with advisory info where applicable.\n+\n+ Sample data returned::\n+\n+ {\n+ \"releasever\": \"8\",\n+ \"basearch\": \"x86_64\",\n+ \"update_list\": {\n+ \"NetworkManager-1:1.22.8-4.el8.x86_64\": {\n+ \"available_updates\": [\n+ {\n+ \"package\": \"NetworkManager-1:1.22.8-5.el8_2.x86_64\",\n+ \"repository\": \"rhel-8-for-x86_64-baseos-rpms\",\n+ \"basearch\": \"x86_64\",\n+ \"releasever\": \"8\",\n+ \"erratum\": \"RHSA-2020:3011\"\n+ }\n+ ]\n+ }\n+ },\n+ \"metadata_time\": \"2021-01-01T09:39:45Z\"\n+ }\n+\n+ Returns:\n+ list: List of available updates\n+ Raises:\n+ SkipComponent: Raised on systems different than RHEL 7\n \"\"\"\n \n if not _broker.get(IsRhel7):\n", "issue": "Need to add the Yum updates datasource to the documentation\nThe [PR](https://github.com/RedHatInsights/insights-core/pull/2993/files#diff-22151ef794ba196097984a47bf24b6759c261de6dc062ac541da099084e5c50a) adding this datasource did not add the datasource to the documentation [here](https://github.com/RedHatInsights/insights-core/blob/master/docs/custom_datasources_index.rst).\n", "before_files": [{"content": "\"\"\"\nCustom datasource for collecting yum updates\n\"\"\"\nimport json\nimport time\n\nfrom insights import datasource, HostContext, SkipComponent\nfrom insights.components.rhel_version import IsRhel7\nfrom insights.core.spec_factory import DatasourceProvider\n\nsorted_cmp = None\ntry:\n # cmp_to_key is not available in python 2.6, but it has sorted function which accepts cmp function\n def sorted_cmp(it, cmp):\n from functools import cmp_to_key\n return sorted(it, key=cmp_to_key(cmp))\nexcept ImportError:\n sorted_cmp = sorted\n\n\nclass UpdatesManager:\n \"\"\" Performs package resolution on yum based systems \"\"\"\n def __init__(self):\n import yum\n\n self.base = yum.YumBase()\n self.base.doGenericSetup(cache=1)\n self.releasever = self.base.conf.yumvar['releasever']\n self.basearch = self.base.conf.yumvar['basearch']\n self.packages = []\n self.repos = []\n self.updict = {}\n\n def __enter__(self):\n return self\n\n def __exit__(self, *args):\n pass\n\n @staticmethod\n def pkg_cmp(a, b):\n vercmp = a.verCMP(b)\n if vercmp != 0:\n return vercmp\n if a.repoid != b.repoid:\n return -1 if a.repoid < b.repoid else 1\n return 0\n\n def sorted_pkgs(self, pkgs):\n return sorted_cmp(pkgs, self.pkg_cmp)\n\n def load(self):\n self.base.doRepoSetup()\n self.base.doSackSetup()\n self.packages = self.base.pkgSack.returnPackages()\n self.repos = self.base.repos.repos\n self._build_updict()\n\n def _build_updict(self):\n self.updict = {}\n for pkg in self.packages:\n self.updict.setdefault(pkg.na, []).append(pkg)\n\n def enabled_repos(self):\n return [repo.id for repo in self.base.repos.listEnabled()]\n\n def installed_packages(self):\n return self.base.rpmdb.returnPackages()\n\n def updates(self, pkg):\n nevra = pkg.nevra\n updates_list = []\n for upg in self.updict[pkg.na]:\n 
if upg.verGT(pkg):\n updates_list.append(upg)\n return nevra, updates_list\n\n @staticmethod\n def pkg_nevra(pkg):\n return \"{}-{}:{}-{}.{}\".format(pkg.name, pkg.epoch, pkg.version, pkg.release, pkg.arch)\n\n @staticmethod\n def pkg_repo(pkg):\n return pkg.repoid\n\n def advisory(self, pkg):\n adv = self.base.upinfo.get_notice(pkg.nvr)\n if adv:\n return adv.get_metadata()['update_id']\n return None\n\n @staticmethod\n def last_update():\n return 0\n\n\n@datasource(HostContext, [IsRhel7])\ndef yum_updates(_broker):\n \"\"\"\n This datasource provides a list of available updates on the system.\n It uses the yum python library installed locally, and collects list of\n available package updates, along with advisory info where applicable.\n \"\"\"\n\n if not _broker.get(IsRhel7):\n raise SkipComponent(\"Yum updates currently only works on RHEL 7\")\n\n with UpdatesManager() as umgr:\n umgr.load()\n\n response = {\n \"releasever\": umgr.releasever,\n \"basearch\": umgr.basearch,\n \"update_list\": {},\n }\n\n data = {'package_list': umgr.installed_packages()}\n updates = {}\n for pkg in data[\"package_list\"]:\n (nevra, updates_list) = umgr.updates(pkg)\n updates[nevra] = updates_list\n for (nevra, update_list) in updates.items():\n if update_list:\n out_list = []\n for pkg in umgr.sorted_pkgs(update_list):\n pkg_dict = {\n \"package\": umgr.pkg_nevra(pkg),\n \"repository\": umgr.pkg_repo(pkg),\n \"basearch\": response[\"basearch\"],\n \"releasever\": response[\"releasever\"],\n }\n erratum = umgr.advisory(pkg)\n if erratum:\n pkg_dict[\"erratum\"] = erratum\n out_list.append(pkg_dict)\n response[\"update_list\"][nevra] = {\"available_updates\": out_list}\n\n ts = umgr.last_update()\n if ts:\n response[\"metadata_time\"] = time.strftime(\"%FT%TZ\", time.gmtime(ts))\n return DatasourceProvider(content=json.dumps(response), relative_path='insights_commands/yum_updates_list')\n", "path": "insights/specs/datasources/yum_updates.py"}], "after_files": [{"content": "\"\"\"\nCustom datasource for collecting yum updates\n\"\"\"\nimport json\nimport time\n\nfrom insights import datasource, HostContext, SkipComponent\nfrom insights.components.rhel_version import IsRhel7\nfrom insights.core.spec_factory import DatasourceProvider\n\nsorted_cmp = None\ntry:\n # cmp_to_key is not available in python 2.6, but it has sorted function which accepts cmp function\n def sorted_cmp(it, cmp):\n from functools import cmp_to_key\n return sorted(it, key=cmp_to_key(cmp))\nexcept ImportError:\n sorted_cmp = sorted\n\n\nclass UpdatesManager:\n \"\"\" Performs package resolution on yum based systems \"\"\"\n def __init__(self):\n import yum\n\n self.base = yum.YumBase()\n self.base.doGenericSetup(cache=1)\n self.releasever = self.base.conf.yumvar['releasever']\n self.basearch = self.base.conf.yumvar['basearch']\n self.packages = []\n self.repos = []\n self.updict = {}\n\n def __enter__(self):\n return self\n\n def __exit__(self, *args):\n pass\n\n @staticmethod\n def pkg_cmp(a, b):\n vercmp = a.verCMP(b)\n if vercmp != 0:\n return vercmp\n if a.repoid != b.repoid:\n return -1 if a.repoid < b.repoid else 1\n return 0\n\n def sorted_pkgs(self, pkgs):\n return sorted_cmp(pkgs, self.pkg_cmp)\n\n def load(self):\n self.base.doRepoSetup()\n self.base.doSackSetup()\n self.packages = self.base.pkgSack.returnPackages()\n self.repos = self.base.repos.repos\n self._build_updict()\n\n def _build_updict(self):\n self.updict = {}\n for pkg in self.packages:\n self.updict.setdefault(pkg.na, []).append(pkg)\n\n def enabled_repos(self):\n 
return [repo.id for repo in self.base.repos.listEnabled()]\n\n def installed_packages(self):\n return self.base.rpmdb.returnPackages()\n\n def updates(self, pkg):\n nevra = pkg.nevra\n updates_list = []\n for upg in self.updict.get(pkg.na, []):\n if upg.verGT(pkg):\n updates_list.append(upg)\n return nevra, updates_list\n\n @staticmethod\n def pkg_nevra(pkg):\n return \"{}-{}:{}-{}.{}\".format(pkg.name, pkg.epoch, pkg.version, pkg.release, pkg.arch)\n\n @staticmethod\n def pkg_repo(pkg):\n return pkg.repoid\n\n def advisory(self, pkg):\n adv = self.base.upinfo.get_notice(pkg.nvr)\n if adv:\n return adv.get_metadata()['update_id']\n return None\n\n @staticmethod\n def last_update():\n return 0\n\n\n@datasource(HostContext, [IsRhel7])\ndef yum_updates(_broker):\n \"\"\"\n This datasource provides a list of available updates on the system.\n It uses the yum python library installed locally, and collects list of\n available package updates, along with advisory info where applicable.\n\n Sample data returned::\n\n {\n \"releasever\": \"8\",\n \"basearch\": \"x86_64\",\n \"update_list\": {\n \"NetworkManager-1:1.22.8-4.el8.x86_64\": {\n \"available_updates\": [\n {\n \"package\": \"NetworkManager-1:1.22.8-5.el8_2.x86_64\",\n \"repository\": \"rhel-8-for-x86_64-baseos-rpms\",\n \"basearch\": \"x86_64\",\n \"releasever\": \"8\",\n \"erratum\": \"RHSA-2020:3011\"\n }\n ]\n }\n },\n \"metadata_time\": \"2021-01-01T09:39:45Z\"\n }\n\n Returns:\n list: List of available updates\n Raises:\n SkipComponent: Raised on systems different than RHEL 7\n \"\"\"\n\n if not _broker.get(IsRhel7):\n raise SkipComponent(\"Yum updates currently only works on RHEL 7\")\n\n with UpdatesManager() as umgr:\n umgr.load()\n\n response = {\n \"releasever\": umgr.releasever,\n \"basearch\": umgr.basearch,\n \"update_list\": {},\n }\n\n data = {'package_list': umgr.installed_packages()}\n updates = {}\n for pkg in data[\"package_list\"]:\n (nevra, updates_list) = umgr.updates(pkg)\n updates[nevra] = updates_list\n for (nevra, update_list) in updates.items():\n if update_list:\n out_list = []\n for pkg in umgr.sorted_pkgs(update_list):\n pkg_dict = {\n \"package\": umgr.pkg_nevra(pkg),\n \"repository\": umgr.pkg_repo(pkg),\n \"basearch\": response[\"basearch\"],\n \"releasever\": response[\"releasever\"],\n }\n erratum = umgr.advisory(pkg)\n if erratum:\n pkg_dict[\"erratum\"] = erratum\n out_list.append(pkg_dict)\n response[\"update_list\"][nevra] = {\"available_updates\": out_list}\n\n ts = umgr.last_update()\n if ts:\n response[\"metadata_time\"] = time.strftime(\"%FT%TZ\", time.gmtime(ts))\n return DatasourceProvider(content=json.dumps(response), relative_path='insights_commands/yum_updates_list')\n", "path": "insights/specs/datasources/yum_updates.py"}]}
| 1,726 | 466 |
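Note on the insights-core record above: besides the expanded docstring, the golden diff also swaps `self.updict[pkg.na]` for `self.updict.get(pkg.na, [])` in `updates()`. A tiny standalone illustration (plain dicts, no yum involved) of the KeyError that change avoids when an installed package has no update candidates in any enabled repo:

```python
# updict maps name.arch -> candidate update packages, built only from repo contents.
updict = {"bash.x86_64": ["bash-5.1-2.el7.x86_64"]}
installed = ["bash.x86_64", "localpkg.x86_64"]  # localpkg was installed from a local RPM

for na in installed:
    # Old code: updict[na] would raise KeyError for "localpkg.x86_64".
    candidates = updict.get(na, [])
    print(na, "->", candidates or "no available updates")
```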
gh_patches_debug_10322
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-2148
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
On the newest docs, "Deploy a Streamlit app" page is empty
**Link to doc page in question (if any):**
https://docs.streamlit.io/en/stable/deploy_streamlit_app.html
**What you think the docs should say:**
For now, show placeholder content (link to heroku deploy instructions?)
In the future, describe what to do for S4A.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/elements/file_uploader.py`
Content:
```
1 from streamlit import config
2
3 from streamlit.proto.FileUploader_pb2 import FileUploader as FileUploaderProto
4 from streamlit.report_thread import get_report_ctx
5 from streamlit.file_util import get_encoded_file_data
6 from streamlit.errors import StreamlitDeprecationWarning
7 from .utils import NoValue, _set_widget_id
8
9
10 class FileUploaderMixin:
11 def file_uploader(
12 dg, label, type=None, accept_multiple_files=False, key=None, **kwargs
13 ):
14 """Display a file uploader widget.
15 By default, uploaded files are limited to 200MB. You can configure
16 this using the `server.maxUploadSize` config option.
17
18 Parameters
19 ----------
20 label : str or None
21 A short label explaining to the user what this file uploader is for.
22
23 type : str or list of str or None
24 Array of allowed extensions. ['png', 'jpg']
25 The default is None, which means all extensions are allowed.
26
27 accept_multiple_files : bool
28 If True, allows the user to upload multiple files at the same time,
29 in which case the return value will be a list of files.
30 Default: False
31
32 key : str
33 An optional string to use as the unique key for the widget.
34 If this is omitted, a key will be generated for the widget
35 based on its content. Multiple widgets of the same type may
36 not share the same key.
37
38 Returns
39 -------
40 None or UploadedFile or list of UploadedFile
41 - If allow_multiple_files is False, returns either None or
42 an UploadedFile object.
43 - If allow_multiple_files is True, returns a list with the
44 uploaded files as UploadedFile objects. If no files were
45 uploaded, returns an empty list.
46 The UploadedFile class is a subclass of BytesIO, and therefore
47 it is "file-like". This means you can pass them anywhere where
48 a file is expected.
49
50 Examples
51 --------
52 Insert a file uploader that accepts a single file at a time:
53
54 >>> uploaded_file = st.file_uploader("Choose a file")
55 >>> if uploaded_file is not None:
56 ... # To read file as bytes:
57 ... bytes_data = uploaded_file.read()
58 ... st.write(bytes_data)
59 >>>
60 ... # To convert to a string based IO:
61 ... stringio = StringIO(uploaded_file.decode("utf-8"))
62 ... st.write(stringio)
63 >>>
64 ... # To read file as string:
65 ... string_data = stringio.read()
66 ... st.write(string_data)
67 >>>
68 ... # Can be used wherever a "file-like" object is accepted:
69 ... dataframe = pd.read_csv(uploaded_file)
70 ... st.write(dataframe)
71
72 Insert a file uploader that accepts multiple files at a time:
73
74 >>> uploaded_files = st.file_uploader("Choose a CSV file", accept_multiple_files=True)
75 >>> for uploaded_file in uploaded_files:
76 ... bytes_data = uploaded_file.read()
77 ... st.write("filename:", uploaded_file.name)
78 ... st.write(bytes_data)
79 """
80
81 if type:
82 if isinstance(type, str):
83 type = [type]
84
85 # May need a regex or a library to validate file types are valid
86 # extensions.
87 type = [
88 file_type if file_type[0] == "." else f".{file_type}"
89 for file_type in type
90 ]
91
92 has_encoding = "encoding" in kwargs
93 show_deprecation_warning = config.get_option(
94 "deprecation.showfileUploaderEncoding"
95 )
96
97 if show_deprecation_warning and has_encoding:
98 dg.exception(FileUploaderEncodingWarning()) # type: ignore
99
100 file_uploader_proto = FileUploaderProto()
101 file_uploader_proto.label = label
102 file_uploader_proto.type[:] = type if type is not None else []
103 file_uploader_proto.max_upload_size_mb = config.get_option(
104 "server.maxUploadSize"
105 )
106 file_uploader_proto.multiple_files = accept_multiple_files
107 _set_widget_id("file_uploader", file_uploader_proto, user_key=key)
108
109 files = None
110 ctx = get_report_ctx()
111 if ctx is not None:
112 files = ctx.uploaded_file_mgr.get_files(
113 session_id=ctx.session_id, widget_id=file_uploader_proto.id
114 )
115
116 if files is None or len(files) == 0:
117 return_value = [] if accept_multiple_files else NoValue
118 else:
119 return_value = files if accept_multiple_files else files[0]
120
121 return dg._enqueue("file_uploader", file_uploader_proto, return_value) # type: ignore
122
123
124 class FileUploaderEncodingWarning(StreamlitDeprecationWarning):
125 def __init__(self):
126 msg = self._get_message()
127 config_option = "deprecation.showfileUploaderEncoding"
128 super(FileUploaderEncodingWarning, self).__init__(
129 msg=msg, config_option=config_option
130 )
131
132 def _get_message(self):
133 return """
134 The behavior of `st.file_uploader` no longer autodetects the file's encoding.
135 This means that _all files_ will be returned as binary buffers. If you need to
136 work with a string buffer, you can convert to a StringIO by decoding the binary
137 buffer as shown below:
138
139 ```
140 file_buffer = st.file_uploader(...)
141 string_io = file_buffer.decode()
142 ```
143 """
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/streamlit/elements/file_uploader.py b/lib/streamlit/elements/file_uploader.py
--- a/lib/streamlit/elements/file_uploader.py
+++ b/lib/streamlit/elements/file_uploader.py
@@ -43,6 +43,7 @@
- If allow_multiple_files is True, returns a list with the
uploaded files as UploadedFile objects. If no files were
uploaded, returns an empty list.
+
The UploadedFile class is a subclass of BytesIO, and therefore
it is "file-like". This means you can pass them anywhere where
a file is expected.
|
{"golden_diff": "diff --git a/lib/streamlit/elements/file_uploader.py b/lib/streamlit/elements/file_uploader.py\n--- a/lib/streamlit/elements/file_uploader.py\n+++ b/lib/streamlit/elements/file_uploader.py\n@@ -43,6 +43,7 @@\n - If allow_multiple_files is True, returns a list with the\n uploaded files as UploadedFile objects. If no files were\n uploaded, returns an empty list.\n+\n The UploadedFile class is a subclass of BytesIO, and therefore\n it is \"file-like\". This means you can pass them anywhere where\n a file is expected.\n", "issue": "On the newest docs, \"Deploy a Streamlit app\" page is empty\n**Link to doc page in question (if any):**\r\n\r\nhttps://docs.streamlit.io/en/stable/deploy_streamlit_app.html\r\n\r\n**What you think the docs should say:**\r\n\r\nFor now, show placeholder content (link to heroku deploy instructions?)\r\nIn the future, describe what to do for S4A.\r\n\n", "before_files": [{"content": "from streamlit import config\n\nfrom streamlit.proto.FileUploader_pb2 import FileUploader as FileUploaderProto\nfrom streamlit.report_thread import get_report_ctx\nfrom streamlit.file_util import get_encoded_file_data\nfrom streamlit.errors import StreamlitDeprecationWarning\nfrom .utils import NoValue, _set_widget_id\n\n\nclass FileUploaderMixin:\n def file_uploader(\n dg, label, type=None, accept_multiple_files=False, key=None, **kwargs\n ):\n \"\"\"Display a file uploader widget.\n By default, uploaded files are limited to 200MB. You can configure\n this using the `server.maxUploadSize` config option.\n\n Parameters\n ----------\n label : str or None\n A short label explaining to the user what this file uploader is for.\n\n type : str or list of str or None\n Array of allowed extensions. ['png', 'jpg']\n The default is None, which means all extensions are allowed.\n\n accept_multiple_files : bool\n If True, allows the user to upload multiple files at the same time,\n in which case the return value will be a list of files.\n Default: False\n\n key : str\n An optional string to use as the unique key for the widget.\n If this is omitted, a key will be generated for the widget\n based on its content. Multiple widgets of the same type may\n not share the same key.\n\n Returns\n -------\n None or UploadedFile or list of UploadedFile\n - If allow_multiple_files is False, returns either None or\n an UploadedFile object.\n - If allow_multiple_files is True, returns a list with the\n uploaded files as UploadedFile objects. If no files were\n uploaded, returns an empty list.\n The UploadedFile class is a subclass of BytesIO, and therefore\n it is \"file-like\". This means you can pass them anywhere where\n a file is expected.\n\n Examples\n --------\n Insert a file uploader that accepts a single file at a time:\n\n >>> uploaded_file = st.file_uploader(\"Choose a file\")\n >>> if uploaded_file is not None:\n ... # To read file as bytes:\n ... bytes_data = uploaded_file.read()\n ... st.write(bytes_data)\n >>>\n ... # To convert to a string based IO:\n ... stringio = StringIO(uploaded_file.decode(\"utf-8\"))\n ... st.write(stringio)\n >>>\n ... # To read file as string:\n ... string_data = stringio.read()\n ... st.write(string_data)\n >>>\n ... # Can be used wherever a \"file-like\" object is accepted:\n ... dataframe = pd.read_csv(uploaded_file)\n ... st.write(dataframe)\n\n Insert a file uploader that accepts multiple files at a time:\n\n >>> uploaded_files = st.file_uploader(\"Choose a CSV file\", accept_multiple_files=True)\n >>> for uploaded_file in uploaded_files:\n ... 
bytes_data = uploaded_file.read()\n ... st.write(\"filename:\", uploaded_file.name)\n ... st.write(bytes_data)\n \"\"\"\n\n if type:\n if isinstance(type, str):\n type = [type]\n\n # May need a regex or a library to validate file types are valid\n # extensions.\n type = [\n file_type if file_type[0] == \".\" else f\".{file_type}\"\n for file_type in type\n ]\n\n has_encoding = \"encoding\" in kwargs\n show_deprecation_warning = config.get_option(\n \"deprecation.showfileUploaderEncoding\"\n )\n\n if show_deprecation_warning and has_encoding:\n dg.exception(FileUploaderEncodingWarning()) # type: ignore\n\n file_uploader_proto = FileUploaderProto()\n file_uploader_proto.label = label\n file_uploader_proto.type[:] = type if type is not None else []\n file_uploader_proto.max_upload_size_mb = config.get_option(\n \"server.maxUploadSize\"\n )\n file_uploader_proto.multiple_files = accept_multiple_files\n _set_widget_id(\"file_uploader\", file_uploader_proto, user_key=key)\n\n files = None\n ctx = get_report_ctx()\n if ctx is not None:\n files = ctx.uploaded_file_mgr.get_files(\n session_id=ctx.session_id, widget_id=file_uploader_proto.id\n )\n\n if files is None or len(files) == 0:\n return_value = [] if accept_multiple_files else NoValue\n else:\n return_value = files if accept_multiple_files else files[0]\n\n return dg._enqueue(\"file_uploader\", file_uploader_proto, return_value) # type: ignore\n\n\nclass FileUploaderEncodingWarning(StreamlitDeprecationWarning):\n def __init__(self):\n msg = self._get_message()\n config_option = \"deprecation.showfileUploaderEncoding\"\n super(FileUploaderEncodingWarning, self).__init__(\n msg=msg, config_option=config_option\n )\n\n def _get_message(self):\n return \"\"\"\nThe behavior of `st.file_uploader` no longer autodetects the file's encoding.\nThis means that _all files_ will be returned as binary buffers. If you need to\nwork with a string buffer, you can convert to a StringIO by decoding the binary\nbuffer as shown below:\n\n```\nfile_buffer = st.file_uploader(...)\nstring_io = file_buffer.decode()\n```\n \"\"\"\n", "path": "lib/streamlit/elements/file_uploader.py"}], "after_files": [{"content": "from streamlit import config\n\nfrom streamlit.proto.FileUploader_pb2 import FileUploader as FileUploaderProto\nfrom streamlit.report_thread import get_report_ctx\nfrom streamlit.file_util import get_encoded_file_data\nfrom streamlit.errors import StreamlitDeprecationWarning\nfrom .utils import NoValue, _set_widget_id\n\n\nclass FileUploaderMixin:\n def file_uploader(\n dg, label, type=None, accept_multiple_files=False, key=None, **kwargs\n ):\n \"\"\"Display a file uploader widget.\n By default, uploaded files are limited to 200MB. You can configure\n this using the `server.maxUploadSize` config option.\n\n Parameters\n ----------\n label : str or None\n A short label explaining to the user what this file uploader is for.\n\n type : str or list of str or None\n Array of allowed extensions. ['png', 'jpg']\n The default is None, which means all extensions are allowed.\n\n accept_multiple_files : bool\n If True, allows the user to upload multiple files at the same time,\n in which case the return value will be a list of files.\n Default: False\n\n key : str\n An optional string to use as the unique key for the widget.\n If this is omitted, a key will be generated for the widget\n based on its content. 
Multiple widgets of the same type may\n not share the same key.\n\n Returns\n -------\n None or UploadedFile or list of UploadedFile\n - If allow_multiple_files is False, returns either None or\n an UploadedFile object.\n - If allow_multiple_files is True, returns a list with the\n uploaded files as UploadedFile objects. If no files were\n uploaded, returns an empty list.\n\n The UploadedFile class is a subclass of BytesIO, and therefore\n it is \"file-like\". This means you can pass them anywhere where\n a file is expected.\n\n Examples\n --------\n Insert a file uploader that accepts a single file at a time:\n\n >>> uploaded_file = st.file_uploader(\"Choose a file\")\n >>> if uploaded_file is not None:\n ... # To read file as bytes:\n ... bytes_data = uploaded_file.read()\n ... st.write(bytes_data)\n >>>\n ... # To convert to a string based IO:\n ... stringio = StringIO(uploaded_file.decode(\"utf-8\"))\n ... st.write(stringio)\n >>>\n ... # To read file as string:\n ... string_data = stringio.read()\n ... st.write(string_data)\n >>>\n ... # Can be used wherever a \"file-like\" object is accepted:\n ... dataframe = pd.read_csv(uploaded_file)\n ... st.write(dataframe)\n\n Insert a file uploader that accepts multiple files at a time:\n\n >>> uploaded_files = st.file_uploader(\"Choose a CSV file\", accept_multiple_files=True)\n >>> for uploaded_file in uploaded_files:\n ... bytes_data = uploaded_file.read()\n ... st.write(\"filename:\", uploaded_file.name)\n ... st.write(bytes_data)\n \"\"\"\n\n if type:\n if isinstance(type, str):\n type = [type]\n\n # May need a regex or a library to validate file types are valid\n # extensions.\n type = [\n file_type if file_type[0] == \".\" else f\".{file_type}\"\n for file_type in type\n ]\n\n has_encoding = \"encoding\" in kwargs\n show_deprecation_warning = config.get_option(\n \"deprecation.showfileUploaderEncoding\"\n )\n\n if show_deprecation_warning and has_encoding:\n dg.exception(FileUploaderEncodingWarning()) # type: ignore\n\n file_uploader_proto = FileUploaderProto()\n file_uploader_proto.label = label\n file_uploader_proto.type[:] = type if type is not None else []\n file_uploader_proto.max_upload_size_mb = config.get_option(\n \"server.maxUploadSize\"\n )\n file_uploader_proto.multiple_files = accept_multiple_files\n _set_widget_id(\"file_uploader\", file_uploader_proto, user_key=key)\n\n files = None\n ctx = get_report_ctx()\n if ctx is not None:\n files = ctx.uploaded_file_mgr.get_files(\n session_id=ctx.session_id, widget_id=file_uploader_proto.id\n )\n\n if files is None or len(files) == 0:\n return_value = [] if accept_multiple_files else NoValue\n else:\n return_value = files if accept_multiple_files else files[0]\n\n return dg._enqueue(\"file_uploader\", file_uploader_proto, return_value) # type: ignore\n\n\nclass FileUploaderEncodingWarning(StreamlitDeprecationWarning):\n def __init__(self):\n msg = self._get_message()\n config_option = \"deprecation.showfileUploaderEncoding\"\n super(FileUploaderEncodingWarning, self).__init__(\n msg=msg, config_option=config_option\n )\n\n def _get_message(self):\n return \"\"\"\nThe behavior of `st.file_uploader` no longer autodetects the file's encoding.\nThis means that _all files_ will be returned as binary buffers. If you need to\nwork with a string buffer, you can convert to a StringIO by decoding the binary\nbuffer as shown below:\n\n```\nfile_buffer = st.file_uploader(...)\nstring_io = file_buffer.decode()\n```\n \"\"\"\n", "path": "lib/streamlit/elements/file_uploader.py"}]}
| 1,837 | 134 |
gh_patches_debug_4347
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-5395
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Converting notebooks with spaces in their names to RST gives broken images
I am using `ipython nbconvert --to rst example1.ipynb` to convert my example notebooks into reStructuredText, for incorporation into my package's Sphinx documentation. This works quite well, unless the filename has a space in it. In this case, any image files from my notebooks are lost when I run Sphinx's `make html`.
My guess is that the problem seems is in the generated rst file, where the `.. image` command may need to be escaped or quoted somehow to work with spaces in the filename.
I note that a similar issue was reported and resolved for latex output in issue #3774, however the solution was specific to latex.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/nbconvert/filters/markdown.py`
Content:
```
1 """Markdown filters
2 This file contains a collection of utility filters for dealing with
3 markdown within Jinja templates.
4 """
5 #-----------------------------------------------------------------------------
6 # Copyright (c) 2013, the IPython Development Team.
7 #
8 # Distributed under the terms of the Modified BSD License.
9 #
10 # The full license is in the file COPYING.txt, distributed with this software.
11 #-----------------------------------------------------------------------------
12
13 #-----------------------------------------------------------------------------
14 # Imports
15 #-----------------------------------------------------------------------------
16 from __future__ import print_function
17
18 # Stdlib imports
19 import os
20 import subprocess
21 import warnings
22 from io import TextIOWrapper, BytesIO
23
24 # IPython imports
25 from IPython.nbconvert.utils.pandoc import pandoc
26 from IPython.nbconvert.utils.exceptions import ConversionException
27 from IPython.utils.process import get_output_error_code
28 from IPython.utils.py3compat import cast_bytes
29 from IPython.utils.version import check_version
30
31 #-----------------------------------------------------------------------------
32 # Functions
33 #-----------------------------------------------------------------------------
34 marked = os.path.join(os.path.dirname(__file__), "marked.js")
35 _node = None
36
37 __all__ = [
38 'markdown2html',
39 'markdown2html_pandoc',
40 'markdown2html_marked',
41 'markdown2latex',
42 'markdown2rst',
43 ]
44
45 class NodeJSMissing(ConversionException):
46 """Exception raised when node.js is missing."""
47 pass
48
49 def markdown2latex(source):
50 """Convert a markdown string to LaTeX via pandoc.
51
52 This function will raise an error if pandoc is not installed.
53 Any error messages generated by pandoc are printed to stderr.
54
55 Parameters
56 ----------
57 source : string
58 Input string, assumed to be valid markdown.
59
60 Returns
61 -------
62 out : string
63 Output as returned by pandoc.
64 """
65 return pandoc(source, 'markdown', 'latex')
66
67 def markdown2html(source):
68 """Convert a markdown string to HTML"""
69 global _node
70 if _node is None:
71 # prefer md2html via marked if node.js >= 0.9.12 is available
72 # node is called nodejs on debian, so try that first
73 _node = 'nodejs'
74 if not _verify_node(_node):
75 _node = 'node'
76 if not _verify_node(_node):
77 warnings.warn( "Node.js 0.9.12 or later wasn't found.\n" +
78 "Nbconvert will try to use Pandoc instead.")
79 _node = False
80 if _node:
81 return markdown2html_marked(source)
82 else:
83 return markdown2html_pandoc(source)
84
85 def markdown2html_pandoc(source):
86 """Convert a markdown string to HTML via pandoc"""
87 return pandoc(source, 'markdown', 'html', extra_args=['--mathjax'])
88
89 def markdown2html_marked(source, encoding='utf-8'):
90 """Convert a markdown string to HTML via marked"""
91 command = [_node, marked]
92 try:
93 p = subprocess.Popen(command,
94 stdin=subprocess.PIPE, stdout=subprocess.PIPE
95 )
96 except OSError as e:
97 raise NodeJSMissing(
98 "The command '%s' returned an error: %s.\n" % (" ".join(command), e) +
99 "Please check that Node.js is installed."
100 )
101 out, _ = p.communicate(cast_bytes(source, encoding))
102 out = TextIOWrapper(BytesIO(out), encoding, 'replace').read()
103 return out.rstrip('\n')
104
105 def markdown2rst(source):
106 """Convert a markdown string to LaTeX via pandoc.
107
108 This function will raise an error if pandoc is not installed.
109 Any error messages generated by pandoc are printed to stderr.
110
111 Parameters
112 ----------
113 source : string
114 Input string, assumed to be valid markdown.
115
116 Returns
117 -------
118 out : string
119 Output as returned by pandoc.
120 """
121 return pandoc(source, 'markdown', 'rst')
122
123 def _verify_node(cmd):
124 """Verify that the node command exists and is at least the minimum supported
125 version of node.
126
127 Parameters
128 ----------
129 cmd : string
130 Node command to verify (i.e 'node')."""
131 try:
132 out, err, return_code = get_output_error_code([cmd, '--version'])
133 except OSError:
134 # Command not found
135 return False
136 if return_code:
137 # Command error
138 return False
139 return check_version(out.lstrip('v'), '0.9.12')
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/nbconvert/filters/markdown.py b/IPython/nbconvert/filters/markdown.py
--- a/IPython/nbconvert/filters/markdown.py
+++ b/IPython/nbconvert/filters/markdown.py
@@ -103,7 +103,7 @@
return out.rstrip('\n')
def markdown2rst(source):
- """Convert a markdown string to LaTeX via pandoc.
+ """Convert a markdown string to ReST via pandoc.
This function will raise an error if pandoc is not installed.
Any error messages generated by pandoc are printed to stderr.
|
{"golden_diff": "diff --git a/IPython/nbconvert/filters/markdown.py b/IPython/nbconvert/filters/markdown.py\n--- a/IPython/nbconvert/filters/markdown.py\n+++ b/IPython/nbconvert/filters/markdown.py\n@@ -103,7 +103,7 @@\n return out.rstrip('\\n')\n \n def markdown2rst(source):\n- \"\"\"Convert a markdown string to LaTeX via pandoc.\n+ \"\"\"Convert a markdown string to ReST via pandoc.\n \n This function will raise an error if pandoc is not installed.\n Any error messages generated by pandoc are printed to stderr.\n", "issue": "Converting notebooks with spaces in their names to RST gives broken images\nI am using `ipython nbconvert --to rst example1.ipynb` to convert my example notebooks into reStructuredText, for incorporation into my package's Sphinx documentation. This works quite well, unless the filename has a space in it. In this case, any image files from my notebooks are lost when I run Sphinx's `make html`. \n\nMy guess is that the problem seems is in the generated rst file, where the `.. image` command may need to be escaped or quoted somehow to work with spaces in the filename.\n\nI note that a similar issue was reported and resolved for latex output in issue #3774, however the solution was specific to latex.\n\n", "before_files": [{"content": "\"\"\"Markdown filters\nThis file contains a collection of utility filters for dealing with \nmarkdown within Jinja templates.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, the IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\nfrom __future__ import print_function\n\n# Stdlib imports\nimport os\nimport subprocess\nimport warnings\nfrom io import TextIOWrapper, BytesIO\n\n# IPython imports\nfrom IPython.nbconvert.utils.pandoc import pandoc\nfrom IPython.nbconvert.utils.exceptions import ConversionException\nfrom IPython.utils.process import get_output_error_code\nfrom IPython.utils.py3compat import cast_bytes\nfrom IPython.utils.version import check_version\n\n#-----------------------------------------------------------------------------\n# Functions\n#-----------------------------------------------------------------------------\nmarked = os.path.join(os.path.dirname(__file__), \"marked.js\")\n_node = None\n\n__all__ = [\n 'markdown2html',\n 'markdown2html_pandoc',\n 'markdown2html_marked',\n 'markdown2latex',\n 'markdown2rst',\n]\n\nclass NodeJSMissing(ConversionException):\n \"\"\"Exception raised when node.js is missing.\"\"\"\n pass\n\ndef markdown2latex(source):\n \"\"\"Convert a markdown string to LaTeX via pandoc.\n\n This function will raise an error if pandoc is not installed.\n Any error messages generated by pandoc are printed to stderr.\n\n Parameters\n ----------\n source : string\n Input string, assumed to be valid markdown.\n\n Returns\n -------\n out : string\n Output as returned by pandoc.\n \"\"\"\n return pandoc(source, 'markdown', 'latex')\n\ndef markdown2html(source):\n \"\"\"Convert a markdown string to HTML\"\"\"\n global _node\n if _node is None:\n # prefer md2html via marked if node.js >= 0.9.12 is available\n # node is called nodejs on debian, so try that first\n _node = 'nodejs'\n 
if not _verify_node(_node):\n _node = 'node'\n if not _verify_node(_node):\n warnings.warn( \"Node.js 0.9.12 or later wasn't found.\\n\" +\n \"Nbconvert will try to use Pandoc instead.\")\n _node = False\n if _node:\n return markdown2html_marked(source)\n else:\n return markdown2html_pandoc(source)\n\ndef markdown2html_pandoc(source):\n \"\"\"Convert a markdown string to HTML via pandoc\"\"\"\n return pandoc(source, 'markdown', 'html', extra_args=['--mathjax'])\n\ndef markdown2html_marked(source, encoding='utf-8'):\n \"\"\"Convert a markdown string to HTML via marked\"\"\"\n command = [_node, marked]\n try:\n p = subprocess.Popen(command,\n stdin=subprocess.PIPE, stdout=subprocess.PIPE\n )\n except OSError as e:\n raise NodeJSMissing(\n \"The command '%s' returned an error: %s.\\n\" % (\" \".join(command), e) +\n \"Please check that Node.js is installed.\"\n )\n out, _ = p.communicate(cast_bytes(source, encoding))\n out = TextIOWrapper(BytesIO(out), encoding, 'replace').read()\n return out.rstrip('\\n')\n\ndef markdown2rst(source):\n \"\"\"Convert a markdown string to LaTeX via pandoc.\n\n This function will raise an error if pandoc is not installed.\n Any error messages generated by pandoc are printed to stderr.\n\n Parameters\n ----------\n source : string\n Input string, assumed to be valid markdown.\n\n Returns\n -------\n out : string\n Output as returned by pandoc.\n \"\"\"\n return pandoc(source, 'markdown', 'rst')\n\ndef _verify_node(cmd):\n \"\"\"Verify that the node command exists and is at least the minimum supported\n version of node.\n\n Parameters\n ----------\n cmd : string\n Node command to verify (i.e 'node').\"\"\"\n try:\n out, err, return_code = get_output_error_code([cmd, '--version'])\n except OSError:\n # Command not found\n return False\n if return_code:\n # Command error\n return False\n return check_version(out.lstrip('v'), '0.9.12')\n", "path": "IPython/nbconvert/filters/markdown.py"}], "after_files": [{"content": "\"\"\"Markdown filters\nThis file contains a collection of utility filters for dealing with \nmarkdown within Jinja templates.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, the IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\nfrom __future__ import print_function\n\n# Stdlib imports\nimport os\nimport subprocess\nimport warnings\nfrom io import TextIOWrapper, BytesIO\n\n# IPython imports\nfrom IPython.nbconvert.utils.pandoc import pandoc\nfrom IPython.nbconvert.utils.exceptions import ConversionException\nfrom IPython.utils.process import get_output_error_code\nfrom IPython.utils.py3compat import cast_bytes\nfrom IPython.utils.version import check_version\n\n#-----------------------------------------------------------------------------\n# Functions\n#-----------------------------------------------------------------------------\nmarked = os.path.join(os.path.dirname(__file__), \"marked.js\")\n_node = None\n\n__all__ = [\n 'markdown2html',\n 'markdown2html_pandoc',\n 'markdown2html_marked',\n 'markdown2latex',\n 'markdown2rst',\n]\n\nclass NodeJSMissing(ConversionException):\n \"\"\"Exception raised when node.js is 
missing.\"\"\"\n pass\n\ndef markdown2latex(source):\n \"\"\"Convert a markdown string to LaTeX via pandoc.\n\n This function will raise an error if pandoc is not installed.\n Any error messages generated by pandoc are printed to stderr.\n\n Parameters\n ----------\n source : string\n Input string, assumed to be valid markdown.\n\n Returns\n -------\n out : string\n Output as returned by pandoc.\n \"\"\"\n return pandoc(source, 'markdown', 'latex')\n\ndef markdown2html(source):\n \"\"\"Convert a markdown string to HTML\"\"\"\n global _node\n if _node is None:\n # prefer md2html via marked if node.js >= 0.9.12 is available\n # node is called nodejs on debian, so try that first\n _node = 'nodejs'\n if not _verify_node(_node):\n _node = 'node'\n if not _verify_node(_node):\n warnings.warn( \"Node.js 0.9.12 or later wasn't found.\\n\" +\n \"Nbconvert will try to use Pandoc instead.\")\n _node = False\n if _node:\n return markdown2html_marked(source)\n else:\n return markdown2html_pandoc(source)\n\ndef markdown2html_pandoc(source):\n \"\"\"Convert a markdown string to HTML via pandoc\"\"\"\n return pandoc(source, 'markdown', 'html', extra_args=['--mathjax'])\n\ndef markdown2html_marked(source, encoding='utf-8'):\n \"\"\"Convert a markdown string to HTML via marked\"\"\"\n command = [_node, marked]\n try:\n p = subprocess.Popen(command,\n stdin=subprocess.PIPE, stdout=subprocess.PIPE\n )\n except OSError as e:\n raise NodeJSMissing(\n \"The command '%s' returned an error: %s.\\n\" % (\" \".join(command), e) +\n \"Please check that Node.js is installed.\"\n )\n out, _ = p.communicate(cast_bytes(source, encoding))\n out = TextIOWrapper(BytesIO(out), encoding, 'replace').read()\n return out.rstrip('\\n')\n\ndef markdown2rst(source):\n \"\"\"Convert a markdown string to ReST via pandoc.\n\n This function will raise an error if pandoc is not installed.\n Any error messages generated by pandoc are printed to stderr.\n\n Parameters\n ----------\n source : string\n Input string, assumed to be valid markdown.\n\n Returns\n -------\n out : string\n Output as returned by pandoc.\n \"\"\"\n return pandoc(source, 'markdown', 'rst')\n\ndef _verify_node(cmd):\n \"\"\"Verify that the node command exists and is at least the minimum supported\n version of node.\n\n Parameters\n ----------\n cmd : string\n Node command to verify (i.e 'node').\"\"\"\n try:\n out, err, return_code = get_output_error_code([cmd, '--version'])\n except OSError:\n # Command not found\n return False\n if return_code:\n # Command error\n return False\n return check_version(out.lstrip('v'), '0.9.12')\n", "path": "IPython/nbconvert/filters/markdown.py"}]}
| 1,677 | 135 |
gh_patches_debug_16961
|
rasdani/github-patches
|
git_diff
|
ietf-tools__datatracker-5977
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add "totals" to "view feedback" page
### Description
It would be useful to add totals for each column in the "view feedback" page.
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/nomcom/templatetags/nomcom_tags.py`
Content:
```
1 # Copyright The IETF Trust 2013-2019, All Rights Reserved
2 import os
3 import tempfile
4 import re
5
6 from django import template
7 from django.conf import settings
8 from django.template.defaultfilters import linebreaksbr, force_escape
9 from django.utils.encoding import force_str, DjangoUnicodeDecodeError
10 from django.utils.safestring import mark_safe
11
12 import debug # pyflakes:ignore
13
14 from ietf.nomcom.utils import get_nomcom_by_year, retrieve_nomcom_private_key
15 from ietf.person.models import Person
16 from ietf.utils.log import log
17 from ietf.utils.pipe import pipe
18
19
20 register = template.Library()
21
22
23 @register.filter
24 def is_chair_or_advisor(user, year):
25 if not user or not year:
26 return False
27 nomcom = get_nomcom_by_year(year=year)
28 return nomcom.group.has_role(user, ["chair","advisor"])
29
30
31 @register.filter
32 def has_publickey(nomcom):
33 return nomcom and nomcom.public_key and True or False
34
35 @register.filter
36 def lookup(container,key):
37 return container and container.get(key,None)
38
39 @register.filter
40 def formatted_email(address):
41 person = None
42 addrmatch = re.search('<([^>]+)>',address)
43 if addrmatch:
44 addr = addrmatch.group(1)
45 else:
46 addr = address
47 if addr:
48 persons = Person.objects.filter(email__address__in=[addr])
49 person = persons and persons[0] or None
50 if person and person.name:
51 return "%s <%s>" % (person.plain_name(), addr)
52 else:
53 return address
54
55
56 @register.simple_tag
57 def decrypt(string, request, year, plain=False):
58 try:
59 key = retrieve_nomcom_private_key(request, year)
60 except UnicodeError:
61 return f"-*- Encrypted text [Error retrieving private key, contact the secretariat ({settings.SECRETARIAT_SUPPORT_EMAIL})]"
62 if not key:
63 return '-*- Encrypted text [No private key provided] -*-'
64
65 encrypted_file = tempfile.NamedTemporaryFile(delete=False)
66 encrypted_file.write(string)
67 encrypted_file.close()
68
69 command = "%s smime -decrypt -in %s -inkey /dev/stdin"
70 code, out, error = pipe(command % (settings.OPENSSL_COMMAND,
71 encrypted_file.name), key)
72 try:
73 out = force_str(out)
74 except DjangoUnicodeDecodeError:
75 pass
76 if code != 0:
77 log("openssl error: %s:\n Error %s: %s" %(command, code, error))
78
79 os.unlink(encrypted_file.name)
80
81 if error:
82 return '-*- Encrypted text [Your private key is invalid] -*-'
83
84 if not plain:
85 return force_escape(linebreaksbr(out))
86 return mark_safe(force_escape(out))
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ietf/nomcom/templatetags/nomcom_tags.py b/ietf/nomcom/templatetags/nomcom_tags.py
--- a/ietf/nomcom/templatetags/nomcom_tags.py
+++ b/ietf/nomcom/templatetags/nomcom_tags.py
@@ -1,8 +1,10 @@
-# Copyright The IETF Trust 2013-2019, All Rights Reserved
+# Copyright The IETF Trust 2013-2023, All Rights Reserved
import os
import tempfile
import re
+from collections import defaultdict
+
from django import template
from django.conf import settings
from django.template.defaultfilters import linebreaksbr, force_escape
@@ -84,3 +86,11 @@
if not plain:
return force_escape(linebreaksbr(out))
return mark_safe(force_escape(out))
+
+@register.filter
+def feedback_totals(staterank_list):
+ totals = defaultdict(lambda: 0)
+ for fb_dict in staterank_list:
+ for fbtype_name, fbtype_count, _ in fb_dict['feedback']:
+ totals[fbtype_name] += fbtype_count
+ return totals.values()
|
{"golden_diff": "diff --git a/ietf/nomcom/templatetags/nomcom_tags.py b/ietf/nomcom/templatetags/nomcom_tags.py\n--- a/ietf/nomcom/templatetags/nomcom_tags.py\n+++ b/ietf/nomcom/templatetags/nomcom_tags.py\n@@ -1,8 +1,10 @@\n-# Copyright The IETF Trust 2013-2019, All Rights Reserved\n+# Copyright The IETF Trust 2013-2023, All Rights Reserved\n import os\n import tempfile\n import re\n \n+from collections import defaultdict\n+\n from django import template\n from django.conf import settings\n from django.template.defaultfilters import linebreaksbr, force_escape\n@@ -84,3 +86,11 @@\n if not plain:\n return force_escape(linebreaksbr(out))\n return mark_safe(force_escape(out))\n+\[email protected]\n+def feedback_totals(staterank_list):\n+ totals = defaultdict(lambda: 0)\n+ for fb_dict in staterank_list:\n+ for fbtype_name, fbtype_count, _ in fb_dict['feedback']:\n+ totals[fbtype_name] += fbtype_count\n+ return totals.values()\n", "issue": "Add \"totals\" to \"view feedback\" page\n### Description\n\nIt would be useful to add totals for each column in the \"view feedback\" page.\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2013-2019, All Rights Reserved\nimport os\nimport tempfile\nimport re\n\nfrom django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import linebreaksbr, force_escape\nfrom django.utils.encoding import force_str, DjangoUnicodeDecodeError\nfrom django.utils.safestring import mark_safe\n\nimport debug # pyflakes:ignore\n\nfrom ietf.nomcom.utils import get_nomcom_by_year, retrieve_nomcom_private_key\nfrom ietf.person.models import Person\nfrom ietf.utils.log import log\nfrom ietf.utils.pipe import pipe\n\n\nregister = template.Library()\n\n\[email protected]\ndef is_chair_or_advisor(user, year):\n if not user or not year:\n return False\n nomcom = get_nomcom_by_year(year=year)\n return nomcom.group.has_role(user, [\"chair\",\"advisor\"])\n\n\[email protected]\ndef has_publickey(nomcom):\n return nomcom and nomcom.public_key and True or False\n\[email protected]\ndef lookup(container,key):\n return container and container.get(key,None)\n\[email protected]\ndef formatted_email(address):\n person = None\n addrmatch = re.search('<([^>]+)>',address)\n if addrmatch:\n addr = addrmatch.group(1)\n else:\n addr = address\n if addr:\n persons = Person.objects.filter(email__address__in=[addr])\n person = persons and persons[0] or None\n if person and person.name:\n return \"%s <%s>\" % (person.plain_name(), addr) \n else:\n return address\n\n\[email protected]_tag\ndef decrypt(string, request, year, plain=False):\n try:\n key = retrieve_nomcom_private_key(request, year)\n except UnicodeError:\n return f\"-*- Encrypted text [Error retrieving private key, contact the secretariat ({settings.SECRETARIAT_SUPPORT_EMAIL})]\"\n if not key:\n return '-*- Encrypted text [No private key provided] -*-'\n\n encrypted_file = tempfile.NamedTemporaryFile(delete=False)\n encrypted_file.write(string)\n encrypted_file.close()\n\n command = \"%s smime -decrypt -in %s -inkey /dev/stdin\"\n code, out, error = pipe(command % (settings.OPENSSL_COMMAND,\n encrypted_file.name), key)\n try:\n out = force_str(out)\n except DjangoUnicodeDecodeError:\n pass\n if code != 0:\n log(\"openssl error: %s:\\n Error %s: %s\" %(command, code, error))\n\n os.unlink(encrypted_file.name)\n\n if error:\n return '-*- Encrypted text 
[Your private key is invalid] -*-'\n\n if not plain:\n return force_escape(linebreaksbr(out))\n return mark_safe(force_escape(out))\n", "path": "ietf/nomcom/templatetags/nomcom_tags.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2013-2023, All Rights Reserved\nimport os\nimport tempfile\nimport re\n\nfrom collections import defaultdict\n\nfrom django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import linebreaksbr, force_escape\nfrom django.utils.encoding import force_str, DjangoUnicodeDecodeError\nfrom django.utils.safestring import mark_safe\n\nimport debug # pyflakes:ignore\n\nfrom ietf.nomcom.utils import get_nomcom_by_year, retrieve_nomcom_private_key\nfrom ietf.person.models import Person\nfrom ietf.utils.log import log\nfrom ietf.utils.pipe import pipe\n\n\nregister = template.Library()\n\n\[email protected]\ndef is_chair_or_advisor(user, year):\n if not user or not year:\n return False\n nomcom = get_nomcom_by_year(year=year)\n return nomcom.group.has_role(user, [\"chair\",\"advisor\"])\n\n\[email protected]\ndef has_publickey(nomcom):\n return nomcom and nomcom.public_key and True or False\n\[email protected]\ndef lookup(container,key):\n return container and container.get(key,None)\n\[email protected]\ndef formatted_email(address):\n person = None\n addrmatch = re.search('<([^>]+)>',address)\n if addrmatch:\n addr = addrmatch.group(1)\n else:\n addr = address\n if addr:\n persons = Person.objects.filter(email__address__in=[addr])\n person = persons and persons[0] or None\n if person and person.name:\n return \"%s <%s>\" % (person.plain_name(), addr) \n else:\n return address\n\n\[email protected]_tag\ndef decrypt(string, request, year, plain=False):\n try:\n key = retrieve_nomcom_private_key(request, year)\n except UnicodeError:\n return f\"-*- Encrypted text [Error retrieving private key, contact the secretariat ({settings.SECRETARIAT_SUPPORT_EMAIL})]\"\n if not key:\n return '-*- Encrypted text [No private key provided] -*-'\n\n encrypted_file = tempfile.NamedTemporaryFile(delete=False)\n encrypted_file.write(string)\n encrypted_file.close()\n\n command = \"%s smime -decrypt -in %s -inkey /dev/stdin\"\n code, out, error = pipe(command % (settings.OPENSSL_COMMAND,\n encrypted_file.name), key)\n try:\n out = force_str(out)\n except DjangoUnicodeDecodeError:\n pass\n if code != 0:\n log(\"openssl error: %s:\\n Error %s: %s\" %(command, code, error))\n\n os.unlink(encrypted_file.name)\n\n if error:\n return '-*- Encrypted text [Your private key is invalid] -*-'\n\n if not plain:\n return force_escape(linebreaksbr(out))\n return mark_safe(force_escape(out))\n\[email protected]\ndef feedback_totals(staterank_list):\n totals = defaultdict(lambda: 0)\n for fb_dict in staterank_list:\n for fbtype_name, fbtype_count, _ in fb_dict['feedback']:\n totals[fbtype_name] += fbtype_count\n return totals.values()\n", "path": "ietf/nomcom/templatetags/nomcom_tags.py"}]}
| 1,126 | 276 |
gh_patches_debug_23210
|
rasdani/github-patches
|
git_diff
|
mirumee__ariadne-68
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create shortcut function for GraphQLMiddleware.make_simple_server
Getting started with Ariadne could be made even simpler by providing shortcut function abstracting the `GraphQLMiddleware` away on first contact, thus saving users possible confusion about what they really are doing.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/utils.py`
Content:
```
1 from graphql import parse
2
3
4 def gql(value: str) -> str:
5 parse(value)
6 return value
7
```
Path: `ariadne/__init__.py`
Content:
```
1 from .executable_schema import make_executable_schema
2 from .resolvers import add_resolve_functions_to_schema, default_resolver, resolve_to
3 from .utils import gql
4 from .wsgi_middleware import GraphQLMiddleware
5
6 __all__ = [
7 "GraphQLMiddleware",
8 "add_resolve_functions_to_schema",
9 "default_resolver",
10 "make_executable_schema",
11 "resolve_to",
12 "gql",
13 ]
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ariadne/__init__.py b/ariadne/__init__.py
--- a/ariadne/__init__.py
+++ b/ariadne/__init__.py
@@ -1,6 +1,6 @@
from .executable_schema import make_executable_schema
from .resolvers import add_resolve_functions_to_schema, default_resolver, resolve_to
-from .utils import gql
+from .utils import gql, start_simple_server
from .wsgi_middleware import GraphQLMiddleware
__all__ = [
@@ -10,4 +10,5 @@
"make_executable_schema",
"resolve_to",
"gql",
+ "start_simple_server",
]
diff --git a/ariadne/utils.py b/ariadne/utils.py
--- a/ariadne/utils.py
+++ b/ariadne/utils.py
@@ -1,6 +1,26 @@
+from typing import List, Union
+
from graphql import parse
+from .wsgi_middleware import GraphQLMiddleware
+
def gql(value: str) -> str:
parse(value)
return value
+
+
+def start_simple_server(
+ type_defs: Union[str, List[str]],
+ resolvers: Union[dict, List[dict]],
+ host: str = "127.0.0.1",
+ port: int = 8888,
+):
+ try:
+ print("Simple GraphQL server is running on the http://%s:%s" % (host, port))
+ graphql_server = GraphQLMiddleware.make_simple_server(
+ type_defs, resolvers, host, port
+ )
+ graphql_server.serve_forever()
+ except KeyboardInterrupt:
+ pass
|
{"golden_diff": "diff --git a/ariadne/__init__.py b/ariadne/__init__.py\n--- a/ariadne/__init__.py\n+++ b/ariadne/__init__.py\n@@ -1,6 +1,6 @@\n from .executable_schema import make_executable_schema\n from .resolvers import add_resolve_functions_to_schema, default_resolver, resolve_to\n-from .utils import gql\n+from .utils import gql, start_simple_server\n from .wsgi_middleware import GraphQLMiddleware\n \n __all__ = [\n@@ -10,4 +10,5 @@\n \"make_executable_schema\",\n \"resolve_to\",\n \"gql\",\n+ \"start_simple_server\",\n ]\ndiff --git a/ariadne/utils.py b/ariadne/utils.py\n--- a/ariadne/utils.py\n+++ b/ariadne/utils.py\n@@ -1,6 +1,26 @@\n+from typing import List, Union\n+\n from graphql import parse\n \n+from .wsgi_middleware import GraphQLMiddleware\n+\n \n def gql(value: str) -> str:\n parse(value)\n return value\n+\n+\n+def start_simple_server(\n+ type_defs: Union[str, List[str]],\n+ resolvers: Union[dict, List[dict]],\n+ host: str = \"127.0.0.1\",\n+ port: int = 8888,\n+):\n+ try:\n+ print(\"Simple GraphQL server is running on the http://%s:%s\" % (host, port))\n+ graphql_server = GraphQLMiddleware.make_simple_server(\n+ type_defs, resolvers, host, port\n+ )\n+ graphql_server.serve_forever()\n+ except KeyboardInterrupt:\n+ pass\n", "issue": "Create shortcut function for GraphQLMiddleware.make_simple_server\nGetting started with Ariadne could be made even simpler by providing shortcut function abstracting the `GraphQLMiddleware` away on first contact, thus saving users possible confusion about what they really are doing.\n", "before_files": [{"content": "from graphql import parse\n\n\ndef gql(value: str) -> str:\n parse(value)\n return value\n", "path": "ariadne/utils.py"}, {"content": "from .executable_schema import make_executable_schema\nfrom .resolvers import add_resolve_functions_to_schema, default_resolver, resolve_to\nfrom .utils import gql\nfrom .wsgi_middleware import GraphQLMiddleware\n\n__all__ = [\n \"GraphQLMiddleware\",\n \"add_resolve_functions_to_schema\",\n \"default_resolver\",\n \"make_executable_schema\",\n \"resolve_to\",\n \"gql\",\n]\n", "path": "ariadne/__init__.py"}], "after_files": [{"content": "from typing import List, Union\n\nfrom graphql import parse\n\nfrom .wsgi_middleware import GraphQLMiddleware\n\n\ndef gql(value: str) -> str:\n parse(value)\n return value\n\n\ndef start_simple_server(\n type_defs: Union[str, List[str]],\n resolvers: Union[dict, List[dict]],\n host: str = \"127.0.0.1\",\n port: int = 8888,\n):\n try:\n print(\"Simple GraphQL server is running on the http://%s:%s\" % (host, port))\n graphql_server = GraphQLMiddleware.make_simple_server(\n type_defs, resolvers, host, port\n )\n graphql_server.serve_forever()\n except KeyboardInterrupt:\n pass\n", "path": "ariadne/utils.py"}, {"content": "from .executable_schema import make_executable_schema\nfrom .resolvers import add_resolve_functions_to_schema, default_resolver, resolve_to\nfrom .utils import gql, start_simple_server\nfrom .wsgi_middleware import GraphQLMiddleware\n\n__all__ = [\n \"GraphQLMiddleware\",\n \"add_resolve_functions_to_schema\",\n \"default_resolver\",\n \"make_executable_schema\",\n \"resolve_to\",\n \"gql\",\n \"start_simple_server\",\n]\n", "path": "ariadne/__init__.py"}]}
| 461 | 375 |
gh_patches_debug_16861
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-3656
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC]: Unnecessary step to reformat questions
### 📚 The doc issue
The current documentation contains unnecessary step to reformat questions from FastChat's format to our internal format.
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/Chat/evaluate/format_questions.py`
Content:
```
1 import argparse
2 import os
3 import json
4 import copy
5
6 from utils import jdump, get_json_list
7
8
9 def format_questions(args):
10 questions = get_json_list(args.questions_path)
11 keys=questions[0].keys()
12
13 formatted_questions=copy.deepcopy(questions)
14 for i in range(len(formatted_questions)):
15 formatted_questions[i]['instruction']=questions[i]['text']
16 formatted_questions[i]['input']=""
17 formatted_questions[i]['output']=""
18 formatted_questions[i]['id']=questions[i]['question_id']
19 for key in keys:
20 if key=="category":
21 continue
22 del formatted_questions[i][key]
23
24 jdump(formatted_questions, args.save_path)
25
26 if __name__ == '__main__':
27 parser = argparse.ArgumentParser()
28 parser.add_argument('--questions_path', type=str, default='table/question.jsonl')
29 parser.add_argument('--save_path', type=str, default="table/questions.json")
30 args = parser.parse_args()
31 format_questions(args)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/applications/Chat/evaluate/format_questions.py b/applications/Chat/evaluate/format_questions.py
deleted file mode 100644
--- a/applications/Chat/evaluate/format_questions.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import argparse
-import os
-import json
-import copy
-
-from utils import jdump, get_json_list
-
-
-def format_questions(args):
- questions = get_json_list(args.questions_path)
- keys=questions[0].keys()
-
- formatted_questions=copy.deepcopy(questions)
- for i in range(len(formatted_questions)):
- formatted_questions[i]['instruction']=questions[i]['text']
- formatted_questions[i]['input']=""
- formatted_questions[i]['output']=""
- formatted_questions[i]['id']=questions[i]['question_id']
- for key in keys:
- if key=="category":
- continue
- del formatted_questions[i][key]
-
- jdump(formatted_questions, args.save_path)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--questions_path', type=str, default='table/question.jsonl')
- parser.add_argument('--save_path', type=str, default="table/questions.json")
- args = parser.parse_args()
- format_questions(args)
\ No newline at end of file
|
{"golden_diff": "diff --git a/applications/Chat/evaluate/format_questions.py b/applications/Chat/evaluate/format_questions.py\ndeleted file mode 100644\n--- a/applications/Chat/evaluate/format_questions.py\n+++ /dev/null\n@@ -1,31 +0,0 @@\n-import argparse\n-import os\n-import json\n-import copy\n-\n-from utils import jdump, get_json_list\n-\n-\n-def format_questions(args):\n- questions = get_json_list(args.questions_path)\n- keys=questions[0].keys()\n- \n- formatted_questions=copy.deepcopy(questions)\n- for i in range(len(formatted_questions)):\n- formatted_questions[i]['instruction']=questions[i]['text']\n- formatted_questions[i]['input']=\"\"\n- formatted_questions[i]['output']=\"\"\n- formatted_questions[i]['id']=questions[i]['question_id']\n- for key in keys:\n- if key==\"category\":\n- continue\n- del formatted_questions[i][key]\n- \n- jdump(formatted_questions, args.save_path)\n-\n-if __name__ == '__main__':\n- parser = argparse.ArgumentParser()\n- parser.add_argument('--questions_path', type=str, default='table/question.jsonl')\n- parser.add_argument('--save_path', type=str, default=\"table/questions.json\")\n- args = parser.parse_args()\n- format_questions(args)\n\\ No newline at end of file\n", "issue": "[DOC]: Unnecessary step to reformat questions\n### \ud83d\udcda The doc issue\n\nThe current documentation contains unnecessary step to reformat questions from FastChat's format to our internal format.\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import argparse\nimport os\nimport json\nimport copy\n\nfrom utils import jdump, get_json_list\n\n\ndef format_questions(args):\n questions = get_json_list(args.questions_path)\n keys=questions[0].keys()\n \n formatted_questions=copy.deepcopy(questions)\n for i in range(len(formatted_questions)):\n formatted_questions[i]['instruction']=questions[i]['text']\n formatted_questions[i]['input']=\"\"\n formatted_questions[i]['output']=\"\"\n formatted_questions[i]['id']=questions[i]['question_id']\n for key in keys:\n if key==\"category\":\n continue\n del formatted_questions[i][key]\n \n jdump(formatted_questions, args.save_path)\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument('--questions_path', type=str, default='table/question.jsonl')\n parser.add_argument('--save_path', type=str, default=\"table/questions.json\")\n args = parser.parse_args()\n format_questions(args)", "path": "applications/Chat/evaluate/format_questions.py"}], "after_files": [{"content": null, "path": "applications/Chat/evaluate/format_questions.py"}]}
| 576 | 298 |
gh_patches_debug_6038
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-16
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Forbid `handler` as a variable name
We need to add `handler` to our variable blacklist.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/constants.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 BAD_FUNCTIONS = frozenset((
4 # Code generation:
5 'eval',
6 'exec',
7 'compile',
8
9 # Magic:
10 'globals',
11 'locals',
12 'vars',
13 'dir',
14
15 # IO:
16 'input',
17 'help',
18
19 # Attribute access:
20 'hasattr',
21 'delattr',
22 ))
23
24 BAD_IMPORT_FUNCTIONS = frozenset((
25 '__import__',
26 ))
27
28 BAD_MODULE_METADATA_VARIABLES = frozenset((
29 '__author__',
30 ))
31
32 BAD_VARIABLE_NAMES = frozenset((
33 'data',
34 'result',
35 'results',
36 'item',
37 'items',
38 'value',
39 'values',
40 'val',
41 'vals',
42 'var',
43 'vars',
44 'content',
45 'contents',
46 'info',
47 ))
48
49 NESTED_CLASSES_WHITELIST = frozenset((
50 'Meta',
51 ))
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wemake_python_styleguide/constants.py b/wemake_python_styleguide/constants.py
--- a/wemake_python_styleguide/constants.py
+++ b/wemake_python_styleguide/constants.py
@@ -19,6 +19,10 @@
# Attribute access:
'hasattr',
'delattr',
+
+ # Too generic:
+ 'handler',
+ 'handle',
))
BAD_IMPORT_FUNCTIONS = frozenset((
@@ -44,6 +48,7 @@
'content',
'contents',
'info',
+ 'handler',
))
NESTED_CLASSES_WHITELIST = frozenset((
|
{"golden_diff": "diff --git a/wemake_python_styleguide/constants.py b/wemake_python_styleguide/constants.py\n--- a/wemake_python_styleguide/constants.py\n+++ b/wemake_python_styleguide/constants.py\n@@ -19,6 +19,10 @@\n # Attribute access:\n 'hasattr',\n 'delattr',\n+\n+ # Too generic:\n+ 'handler',\n+ 'handle',\n ))\n \n BAD_IMPORT_FUNCTIONS = frozenset((\n@@ -44,6 +48,7 @@\n 'content',\n 'contents',\n 'info',\n+ 'handler',\n ))\n \n NESTED_CLASSES_WHITELIST = frozenset((\n", "issue": "Forbid `handler` as a variable name\nWe need to add `handler` to our variable blacklist.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nBAD_FUNCTIONS = frozenset((\n # Code generation:\n 'eval',\n 'exec',\n 'compile',\n\n # Magic:\n 'globals',\n 'locals',\n 'vars',\n 'dir',\n\n # IO:\n 'input',\n 'help',\n\n # Attribute access:\n 'hasattr',\n 'delattr',\n))\n\nBAD_IMPORT_FUNCTIONS = frozenset((\n '__import__',\n))\n\nBAD_MODULE_METADATA_VARIABLES = frozenset((\n '__author__',\n))\n\nBAD_VARIABLE_NAMES = frozenset((\n 'data',\n 'result',\n 'results',\n 'item',\n 'items',\n 'value',\n 'values',\n 'val',\n 'vals',\n 'var',\n 'vars',\n 'content',\n 'contents',\n 'info',\n))\n\nNESTED_CLASSES_WHITELIST = frozenset((\n 'Meta',\n))\n", "path": "wemake_python_styleguide/constants.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nBAD_FUNCTIONS = frozenset((\n # Code generation:\n 'eval',\n 'exec',\n 'compile',\n\n # Magic:\n 'globals',\n 'locals',\n 'vars',\n 'dir',\n\n # IO:\n 'input',\n 'help',\n\n # Attribute access:\n 'hasattr',\n 'delattr',\n\n # Too generic:\n 'handler',\n 'handle',\n))\n\nBAD_IMPORT_FUNCTIONS = frozenset((\n '__import__',\n))\n\nBAD_MODULE_METADATA_VARIABLES = frozenset((\n '__author__',\n))\n\nBAD_VARIABLE_NAMES = frozenset((\n 'data',\n 'result',\n 'results',\n 'item',\n 'items',\n 'value',\n 'values',\n 'val',\n 'vals',\n 'var',\n 'vars',\n 'content',\n 'contents',\n 'info',\n 'handler',\n))\n\nNESTED_CLASSES_WHITELIST = frozenset((\n 'Meta',\n))\n", "path": "wemake_python_styleguide/constants.py"}]}
| 580 | 143 |
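
The record above adds `handler`/`handle` to a frozenset name blacklist. As an illustrative aside (not part of the dataset record), here is a minimal, self-contained sketch of how such a constant is typically consumed by a checker; the `check_names` helper and the sample names are assumptions for illustration, not wemake-python-styleguide code:

```python
# Minimal sketch: a frozenset blacklist consumed by a tiny name checker,
# mirroring the BAD_VARIABLE_NAMES constant patched in the record above.
BAD_VARIABLE_NAMES = frozenset((
    'data', 'result', 'item', 'value', 'info',
    'handler',  # the entry added by the golden diff
))


def check_names(names):
    """Return the subset of `names` that the blacklist rejects."""
    return [name for name in names if name in BAD_VARIABLE_NAMES]


if __name__ == '__main__':
    print(check_names(['payload', 'handler', 'result']))  # ['handler', 'result']
```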
gh_patches_debug_24388
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-4190
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reclaim space for repository fails with Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'."
**Version**
3.16, but probably all versions
**Describe the bug**
- Reclaim space for repository fails with the following error.
~~~
Task paused with error: "("Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.", {<ContentArtifact: pk=452959ad-c045-4e85-bf9f-6651ba37f57d>})"
~~~
**To Reproduce**
See BZ
**Additional context**
https://bugzilla.redhat.com/show_bug.cgi?id=2169322
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/tasks/reclaim_space.py`
Content:
```
1 from pulpcore.app.models import (
2 Artifact,
3 Content,
4 ContentArtifact,
5 ProgressReport,
6 PublishedMetadata,
7 Repository,
8 RepositoryVersion,
9 )
10 from pulpcore.app.util import get_domain
11
12
13 def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
14 """
15 This task frees-up disk space by removing Artifact files from the filesystem for Content
16 exclusive to the list of provided repos.
17
18 Note: content marked as `proctected` will be excluded from the reclaim disk space.
19
20 Kwargs:
21 repo_pks (list): A list of repo pks the disk reclaim space is performed on.
22 keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim
23 disk space.
24 force (bool): If True, uploaded content will be taken into account.
25
26 """
27 reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)
28 for repo in reclaimed_repos:
29 repo.invalidate_cache(everything=True)
30
31 domain = get_domain()
32 rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)
33 c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)
34 c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)
35 c_reclaim_qs = c_reclaim_qs.exclude(
36 pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()
37 )
38
39 if keeplist_rv_pks:
40 rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)
41 rv_content = Content.objects.none()
42 for rv in rv_qs.iterator():
43 rv_content |= rv.content
44 c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)
45
46 content_distinct = c_reclaim_qs.distinct("pulp_type")
47 unprotected = []
48 for content in content_distinct:
49 if not content.cast().PROTECTED_FROM_RECLAIM:
50 unprotected.append(content.pulp_type)
51
52 ca_qs = ContentArtifact.objects.select_related("content", "artifact").filter(
53 content__in=c_reclaim_qs.values("pk"), artifact__isnull=False
54 )
55 if not force:
56 ca_qs = ca_qs.filter(remoteartifact__isnull=False)
57 artifact_pks = set()
58 ca_to_update = []
59 for ca in ca_qs.iterator():
60 if ca.content.pulp_type in unprotected:
61 artifact_pks.add(ca.artifact.pk)
62 ca.artifact = None
63 ca_to_update.append(ca)
64
65 ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=["artifact"], batch_size=1000)
66 artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)
67 progress_bar = ProgressReport(
68 message="Reclaim disk space",
69 total=artifacts_to_delete.count(),
70 code="reclaim-space.artifact",
71 done=0,
72 state="running",
73 )
74 progress_bar.save()
75
76 counter = 0
77 interval = 100
78 for artifact in artifacts_to_delete.iterator():
79 # we need to manually call delete() because it cleans up the file on the filesystem
80 artifact.delete()
81 progress_bar.done += 1
82 counter += 1
83
84 if counter >= interval:
85 progress_bar.save()
86 counter = 0
87
88 progress_bar.state = "completed"
89 progress_bar.save()
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py
--- a/pulpcore/app/tasks/reclaim_space.py
+++ b/pulpcore/app/tasks/reclaim_space.py
@@ -1,3 +1,7 @@
+from logging import getLogger
+
+from django.db.models.deletion import ProtectedError
+
from pulpcore.app.models import (
Artifact,
Content,
@@ -9,6 +13,8 @@
)
from pulpcore.app.util import get_domain
+log = getLogger(__name__)
+
def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
"""
@@ -76,10 +82,16 @@
counter = 0
interval = 100
for artifact in artifacts_to_delete.iterator():
- # we need to manually call delete() because it cleans up the file on the filesystem
- artifact.delete()
- progress_bar.done += 1
- counter += 1
+ try:
+ # we need to manually call delete() because it cleans up the file on the filesystem
+ artifact.delete()
+ except ProtectedError as e:
+ # Rarely artifact could be shared between to different content units.
+ # Just log and skip the artifact deletion in this case
+ log.info(e)
+ else:
+ progress_bar.done += 1
+ counter += 1
if counter >= interval:
progress_bar.save()
|
{"golden_diff": "diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py\n--- a/pulpcore/app/tasks/reclaim_space.py\n+++ b/pulpcore/app/tasks/reclaim_space.py\n@@ -1,3 +1,7 @@\n+from logging import getLogger\n+\n+from django.db.models.deletion import ProtectedError\n+\n from pulpcore.app.models import (\n Artifact,\n Content,\n@@ -9,6 +13,8 @@\n )\n from pulpcore.app.util import get_domain\n \n+log = getLogger(__name__)\n+\n \n def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n@@ -76,10 +82,16 @@\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n- # we need to manually call delete() because it cleans up the file on the filesystem\n- artifact.delete()\n- progress_bar.done += 1\n- counter += 1\n+ try:\n+ # we need to manually call delete() because it cleans up the file on the filesystem\n+ artifact.delete()\n+ except ProtectedError as e:\n+ # Rarely artifact could be shared between to different content units.\n+ # Just log and skip the artifact deletion in this case\n+ log.info(e)\n+ else:\n+ progress_bar.done += 1\n+ counter += 1\n \n if counter >= interval:\n progress_bar.save()\n", "issue": "Reclaim space for repository fails with Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.\"\n**Version**\r\n3.16, but probably all versions\r\n\r\n**Describe the bug**\r\n\r\n- Reclaim space for repository fails with the following error.\r\n\r\n ~~~\r\n Task paused with error: \"(\"Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.\", {<ContentArtifact: pk=452959ad-c045-4e85-bf9f-6651ba37f57d>})\"\r\n ~~~\r\n\r\n**To Reproduce**\r\nSee BZ\r\n\r\n**Additional context**\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2169322\r\n\n", "before_files": [{"content": "from pulpcore.app.models import (\n Artifact,\n Content,\n ContentArtifact,\n ProgressReport,\n PublishedMetadata,\n Repository,\n RepositoryVersion,\n)\nfrom pulpcore.app.util import get_domain\n\n\ndef reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n This task frees-up disk space by removing Artifact files from the filesystem for Content\n exclusive to the list of provided repos.\n\n Note: content marked as `proctected` will be excluded from the reclaim disk space.\n\n Kwargs:\n repo_pks (list): A list of repo pks the disk reclaim space is performed on.\n keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim\n disk space.\n force (bool): If True, uploaded content will be taken into account.\n\n \"\"\"\n reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)\n for repo in reclaimed_repos:\n repo.invalidate_cache(everything=True)\n\n domain = get_domain()\n rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)\n c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)\n c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)\n c_reclaim_qs = c_reclaim_qs.exclude(\n pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()\n )\n\n if keeplist_rv_pks:\n rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)\n rv_content = Content.objects.none()\n for rv in rv_qs.iterator():\n rv_content |= rv.content\n c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)\n\n content_distinct = c_reclaim_qs.distinct(\"pulp_type\")\n unprotected = []\n for content in 
content_distinct:\n if not content.cast().PROTECTED_FROM_RECLAIM:\n unprotected.append(content.pulp_type)\n\n ca_qs = ContentArtifact.objects.select_related(\"content\", \"artifact\").filter(\n content__in=c_reclaim_qs.values(\"pk\"), artifact__isnull=False\n )\n if not force:\n ca_qs = ca_qs.filter(remoteartifact__isnull=False)\n artifact_pks = set()\n ca_to_update = []\n for ca in ca_qs.iterator():\n if ca.content.pulp_type in unprotected:\n artifact_pks.add(ca.artifact.pk)\n ca.artifact = None\n ca_to_update.append(ca)\n\n ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=[\"artifact\"], batch_size=1000)\n artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)\n progress_bar = ProgressReport(\n message=\"Reclaim disk space\",\n total=artifacts_to_delete.count(),\n code=\"reclaim-space.artifact\",\n done=0,\n state=\"running\",\n )\n progress_bar.save()\n\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n # we need to manually call delete() because it cleans up the file on the filesystem\n artifact.delete()\n progress_bar.done += 1\n counter += 1\n\n if counter >= interval:\n progress_bar.save()\n counter = 0\n\n progress_bar.state = \"completed\"\n progress_bar.save()\n", "path": "pulpcore/app/tasks/reclaim_space.py"}], "after_files": [{"content": "from logging import getLogger\n\nfrom django.db.models.deletion import ProtectedError\n\nfrom pulpcore.app.models import (\n Artifact,\n Content,\n ContentArtifact,\n ProgressReport,\n PublishedMetadata,\n Repository,\n RepositoryVersion,\n)\nfrom pulpcore.app.util import get_domain\n\nlog = getLogger(__name__)\n\n\ndef reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n This task frees-up disk space by removing Artifact files from the filesystem for Content\n exclusive to the list of provided repos.\n\n Note: content marked as `proctected` will be excluded from the reclaim disk space.\n\n Kwargs:\n repo_pks (list): A list of repo pks the disk reclaim space is performed on.\n keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim\n disk space.\n force (bool): If True, uploaded content will be taken into account.\n\n \"\"\"\n reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)\n for repo in reclaimed_repos:\n repo.invalidate_cache(everything=True)\n\n domain = get_domain()\n rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)\n c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)\n c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)\n c_reclaim_qs = c_reclaim_qs.exclude(\n pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()\n )\n\n if keeplist_rv_pks:\n rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)\n rv_content = Content.objects.none()\n for rv in rv_qs.iterator():\n rv_content |= rv.content\n c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)\n\n content_distinct = c_reclaim_qs.distinct(\"pulp_type\")\n unprotected = []\n for content in content_distinct:\n if not content.cast().PROTECTED_FROM_RECLAIM:\n unprotected.append(content.pulp_type)\n\n ca_qs = ContentArtifact.objects.select_related(\"content\", \"artifact\").filter(\n content__in=c_reclaim_qs.values(\"pk\"), artifact__isnull=False\n )\n if not force:\n ca_qs = ca_qs.filter(remoteartifact__isnull=False)\n artifact_pks = set()\n ca_to_update = []\n for ca in ca_qs.iterator():\n if ca.content.pulp_type in unprotected:\n artifact_pks.add(ca.artifact.pk)\n ca.artifact = None\n 
ca_to_update.append(ca)\n\n ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=[\"artifact\"], batch_size=1000)\n artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)\n progress_bar = ProgressReport(\n message=\"Reclaim disk space\",\n total=artifacts_to_delete.count(),\n code=\"reclaim-space.artifact\",\n done=0,\n state=\"running\",\n )\n progress_bar.save()\n\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n try:\n # we need to manually call delete() because it cleans up the file on the filesystem\n artifact.delete()\n except ProtectedError as e:\n # Rarely artifact could be shared between to different content units.\n # Just log and skip the artifact deletion in this case\n log.info(e)\n else:\n progress_bar.done += 1\n counter += 1\n\n if counter >= interval:\n progress_bar.save()\n counter = 0\n\n progress_bar.state = \"completed\"\n progress_bar.save()\n", "path": "pulpcore/app/tasks/reclaim_space.py"}]}
| 1,349 | 322 |
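
The pulpcore fix above wraps each `artifact.delete()` in a try/except for Django's `ProtectedError` so that a single protected row no longer aborts the whole reclaim task. Below is a dependency-free sketch of that "log and skip the failing item" pattern; the exception class and the items are stand-ins, not Pulp or Django objects:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)


class ProtectedItemError(Exception):
    """Stand-in for django.db.models.deletion.ProtectedError."""


def delete_item(item):
    # Pretend any item whose name starts with 'protected' is still referenced.
    if item.startswith('protected'):
        raise ProtectedItemError(f'{item} is referenced elsewhere')


def reclaim(items):
    deleted = 0
    for item in items:
        try:
            delete_item(item)
        except ProtectedItemError as exc:
            # Rarely an item is shared; log and skip instead of failing the task.
            log.info('skipping %s: %s', item, exc)
        else:
            deleted += 1
    return deleted


if __name__ == '__main__':
    print(reclaim(['a', 'protected-b', 'c']))  # 2
```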
gh_patches_debug_35421
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-20134
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[doxygen] Model iconv dependency as a conan package
Specify library name and version: **doxygen/1.9.2**
This resolves an issue where `iconv` wasn't being appropriately modelled as a conan dependency in the doxygen recipe. This lead to unresolved symbol errors on Macos builds with conan 2. This was previously part of https://github.com/conan-io/conan-center-index/pull/18415, but is being split to a separate PR to separate concerns and make review easier.
Closes #19903
---
- [x] I've read the [contributing guidelines](https://github.com/conan-io/conan-center-index/blob/master/CONTRIBUTING.md).
- [x] I've used a [recent](https://github.com/conan-io/conan/releases/latest) Conan client version close to the [currently deployed](https://github.com/conan-io/conan-center-index/blob/master/.c3i/config_v1.yml#L6).
- [x] I've tried at least one configuration locally with the [conan-center hook](https://github.com/conan-io/hooks.git) activated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/doxygen/all/conanfile.py`
Content:
```
1 from conan import ConanFile
2 from conan.errors import ConanInvalidConfiguration
3 from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
4 from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get
5 from conan.tools.microsoft import check_min_vs, is_msvc_static_runtime
6 from conan.tools.scm import Version
7 import os
8
9 required_conan_version = ">=1.52.0"
10
11
12 class DoxygenConan(ConanFile):
13 name = "doxygen"
14 description = "A documentation system for C++, C, Java, IDL and PHP --- Note: Dot is disabled in this package"
15 topics = ("installer", "devtool", "documentation")
16 homepage = "https://github.com/doxygen/doxygen"
17 license = "GPL-2.0-or-later"
18 url = "https://github.com/conan-io/conan-center-index"
19 package_type = "application"
20 settings = "os", "arch", "compiler", "build_type"
21 options = {
22 "enable_parse": [True, False],
23 "enable_search": [True, False],
24 }
25 default_options = {
26 "enable_parse": True,
27 "enable_search": True,
28 }
29
30 @property
31 def _settings_build(self):
32 return getattr(self, "settings_build", self.settings)
33
34 @property
35 def _minimum_compiler_version(self):
36 if Version(self.version) <= "1.9.1":
37 return {
38 "gcc": "5",
39 }
40 return {
41 "gcc": "7", # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66297
42 "Visual Studio": "15",
43 "msvc": "191",
44 }
45
46 def export_sources(self):
47 export_conandata_patches(self)
48
49 def layout(self):
50 cmake_layout(self, src_folder="src")
51
52 def requirements(self):
53 if self.options.enable_search:
54 self.requires("xapian-core/1.4.19")
55 self.requires("zlib/1.2.13")
56
57 def package_id(self):
58 del self.info.settings.compiler
59
60 def compatibility(self):
61 return [{"settings": [("build_type", "Release")]}]
62
63 def validate(self):
64 minimum_compiler_version = self._minimum_compiler_version.get(str(self.settings.compiler))
65 if minimum_compiler_version and Version(self.settings.compiler.version) < minimum_compiler_version:
66 raise ConanInvalidConfiguration(f"Compiler version too old. At least {minimum_compiler_version} is required.")
67 if Version(self.version) == "1.8.18":
68 check_min_vs(self, "191")
69
70 def build_requirements(self):
71 if self._settings_build.os == "Windows":
72 self.tool_requires("winflexbison/2.5.24")
73 else:
74 self.tool_requires("flex/2.6.4")
75 self.tool_requires("bison/3.8.2")
76
77 def source(self):
78 get(self, **self.conan_data["sources"][self.version], strip_root=True)
79
80 def generate(self):
81 tc = CMakeToolchain(self)
82 tc.variables["build_parse"] = self.options.enable_parse
83 tc.variables["build_search"] = self.options.enable_search
84 tc.variables["use_libc++"] = self.settings.compiler.get_safe("libcxx") == "libc++"
85 tc.variables["win_static"] = is_msvc_static_runtime(self)
86 tc.generate()
87
88 deps = CMakeDeps(self)
89 deps.generate()
90
91 def build(self):
92 apply_conandata_patches(self)
93 cmake = CMake(self)
94 cmake.configure()
95 cmake.build()
96
97 def package(self):
98 copy(self, "LICENSE", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
99 cmake = CMake(self)
100 cmake.install()
101
102 def package_info(self):
103 self.cpp_info.set_property("cmake_find_mode", "none")
104 self.cpp_info.libdirs = []
105 self.cpp_info.includedirs = []
106
107 # TODO: to remove in conan v2
108 self.env_info.PATH.append(os.path.join(self.package_folder, "bin"))
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/doxygen/all/conanfile.py b/recipes/doxygen/all/conanfile.py
--- a/recipes/doxygen/all/conanfile.py
+++ b/recipes/doxygen/all/conanfile.py
@@ -21,10 +21,12 @@
options = {
"enable_parse": [True, False],
"enable_search": [True, False],
+ "enable_app": [True, False],
}
default_options = {
"enable_parse": True,
"enable_search": True,
+ "enable_app": False,
}
@property
@@ -52,10 +54,10 @@
def requirements(self):
if self.options.enable_search:
self.requires("xapian-core/1.4.19")
- self.requires("zlib/1.2.13")
-
- def package_id(self):
- del self.info.settings.compiler
+ self.requires("zlib/[>=1.2.11 <2]")
+ if self.options.enable_app or self.options.enable_parse:
+ # INFO: Doxygen uses upper case CMake variables to link/include IConv, so we are using patches for targets.
+ self.requires("libiconv/1.17")
def compatibility(self):
return [{"settings": [("build_type", "Release")]}]
@@ -81,6 +83,7 @@
tc = CMakeToolchain(self)
tc.variables["build_parse"] = self.options.enable_parse
tc.variables["build_search"] = self.options.enable_search
+ tc.variables["build_app"] = self.options.enable_app
tc.variables["use_libc++"] = self.settings.compiler.get_safe("libcxx") == "libc++"
tc.variables["win_static"] = is_msvc_static_runtime(self)
tc.generate()
@@ -103,6 +106,8 @@
self.cpp_info.set_property("cmake_find_mode", "none")
self.cpp_info.libdirs = []
self.cpp_info.includedirs = []
+ if self.settings.os in ["Linux", "FreeBSD"]:
+ self.cpp_info.system_libs = ["pthread", "m"]
# TODO: to remove in conan v2
self.env_info.PATH.append(os.path.join(self.package_folder, "bin"))
|
{"golden_diff": "diff --git a/recipes/doxygen/all/conanfile.py b/recipes/doxygen/all/conanfile.py\n--- a/recipes/doxygen/all/conanfile.py\n+++ b/recipes/doxygen/all/conanfile.py\n@@ -21,10 +21,12 @@\n options = {\n \"enable_parse\": [True, False],\n \"enable_search\": [True, False],\n+ \"enable_app\": [True, False],\n }\n default_options = {\n \"enable_parse\": True,\n \"enable_search\": True,\n+ \"enable_app\": False,\n }\n \n @property\n@@ -52,10 +54,10 @@\n def requirements(self):\n if self.options.enable_search:\n self.requires(\"xapian-core/1.4.19\")\n- self.requires(\"zlib/1.2.13\")\n-\n- def package_id(self):\n- del self.info.settings.compiler\n+ self.requires(\"zlib/[>=1.2.11 <2]\")\n+ if self.options.enable_app or self.options.enable_parse:\n+ # INFO: Doxygen uses upper case CMake variables to link/include IConv, so we are using patches for targets.\n+ self.requires(\"libiconv/1.17\")\n \n def compatibility(self):\n return [{\"settings\": [(\"build_type\", \"Release\")]}]\n@@ -81,6 +83,7 @@\n tc = CMakeToolchain(self)\n tc.variables[\"build_parse\"] = self.options.enable_parse\n tc.variables[\"build_search\"] = self.options.enable_search\n+ tc.variables[\"build_app\"] = self.options.enable_app\n tc.variables[\"use_libc++\"] = self.settings.compiler.get_safe(\"libcxx\") == \"libc++\"\n tc.variables[\"win_static\"] = is_msvc_static_runtime(self)\n tc.generate()\n@@ -103,6 +106,8 @@\n self.cpp_info.set_property(\"cmake_find_mode\", \"none\")\n self.cpp_info.libdirs = []\n self.cpp_info.includedirs = []\n+ if self.settings.os in [\"Linux\", \"FreeBSD\"]:\n+ self.cpp_info.system_libs = [\"pthread\", \"m\"]\n \n # TODO: to remove in conan v2\n self.env_info.PATH.append(os.path.join(self.package_folder, \"bin\"))\n", "issue": "[doxygen] Model iconv dependency as a conan package\nSpecify library name and version: **doxygen/1.9.2**\r\n\r\nThis resolves an issue where `iconv` wasn't being appropriately modelled as a conan dependency in the doxygen recipe. This lead to unresolved symbol errors on Macos builds with conan 2. 
This was previously part of https://github.com/conan-io/conan-center-index/pull/18415, but is being split to a separate PR to separate concerns and make review easier.\r\n\r\nCloses #19903 \r\n\r\n---\r\n\r\n- [x] I've read the [contributing guidelines](https://github.com/conan-io/conan-center-index/blob/master/CONTRIBUTING.md).\r\n- [x] I've used a [recent](https://github.com/conan-io/conan/releases/latest) Conan client version close to the [currently deployed](https://github.com/conan-io/conan-center-index/blob/master/.c3i/config_v1.yml#L6).\r\n- [x] I've tried at least one configuration locally with the [conan-center hook](https://github.com/conan-io/hooks.git) activated.\r\n\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout\nfrom conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get\nfrom conan.tools.microsoft import check_min_vs, is_msvc_static_runtime\nfrom conan.tools.scm import Version\nimport os\n\nrequired_conan_version = \">=1.52.0\"\n\n\nclass DoxygenConan(ConanFile):\n name = \"doxygen\"\n description = \"A documentation system for C++, C, Java, IDL and PHP --- Note: Dot is disabled in this package\"\n topics = (\"installer\", \"devtool\", \"documentation\")\n homepage = \"https://github.com/doxygen/doxygen\"\n license = \"GPL-2.0-or-later\"\n url = \"https://github.com/conan-io/conan-center-index\"\n package_type = \"application\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"enable_parse\": [True, False],\n \"enable_search\": [True, False],\n }\n default_options = {\n \"enable_parse\": True,\n \"enable_search\": True,\n }\n\n @property\n def _settings_build(self):\n return getattr(self, \"settings_build\", self.settings)\n\n @property\n def _minimum_compiler_version(self):\n if Version(self.version) <= \"1.9.1\":\n return {\n \"gcc\": \"5\",\n }\n return {\n \"gcc\": \"7\", # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66297\n \"Visual Studio\": \"15\",\n \"msvc\": \"191\",\n }\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n\n def requirements(self):\n if self.options.enable_search:\n self.requires(\"xapian-core/1.4.19\")\n self.requires(\"zlib/1.2.13\")\n\n def package_id(self):\n del self.info.settings.compiler\n\n def compatibility(self):\n return [{\"settings\": [(\"build_type\", \"Release\")]}]\n\n def validate(self):\n minimum_compiler_version = self._minimum_compiler_version.get(str(self.settings.compiler))\n if minimum_compiler_version and Version(self.settings.compiler.version) < minimum_compiler_version:\n raise ConanInvalidConfiguration(f\"Compiler version too old. 
At least {minimum_compiler_version} is required.\")\n if Version(self.version) == \"1.8.18\":\n check_min_vs(self, \"191\")\n\n def build_requirements(self):\n if self._settings_build.os == \"Windows\":\n self.tool_requires(\"winflexbison/2.5.24\")\n else:\n self.tool_requires(\"flex/2.6.4\")\n self.tool_requires(\"bison/3.8.2\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version], strip_root=True)\n\n def generate(self):\n tc = CMakeToolchain(self)\n tc.variables[\"build_parse\"] = self.options.enable_parse\n tc.variables[\"build_search\"] = self.options.enable_search\n tc.variables[\"use_libc++\"] = self.settings.compiler.get_safe(\"libcxx\") == \"libc++\"\n tc.variables[\"win_static\"] = is_msvc_static_runtime(self)\n tc.generate()\n\n deps = CMakeDeps(self)\n deps.generate()\n\n def build(self):\n apply_conandata_patches(self)\n cmake = CMake(self)\n cmake.configure()\n cmake.build()\n\n def package(self):\n copy(self, \"LICENSE\", src=self.source_folder, dst=os.path.join(self.package_folder, \"licenses\"))\n cmake = CMake(self)\n cmake.install()\n\n def package_info(self):\n self.cpp_info.set_property(\"cmake_find_mode\", \"none\")\n self.cpp_info.libdirs = []\n self.cpp_info.includedirs = []\n\n # TODO: to remove in conan v2\n self.env_info.PATH.append(os.path.join(self.package_folder, \"bin\"))\n", "path": "recipes/doxygen/all/conanfile.py"}], "after_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout\nfrom conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get\nfrom conan.tools.microsoft import check_min_vs, is_msvc_static_runtime\nfrom conan.tools.scm import Version\nimport os\n\nrequired_conan_version = \">=1.52.0\"\n\n\nclass DoxygenConan(ConanFile):\n name = \"doxygen\"\n description = \"A documentation system for C++, C, Java, IDL and PHP --- Note: Dot is disabled in this package\"\n topics = (\"installer\", \"devtool\", \"documentation\")\n homepage = \"https://github.com/doxygen/doxygen\"\n license = \"GPL-2.0-or-later\"\n url = \"https://github.com/conan-io/conan-center-index\"\n package_type = \"application\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"enable_parse\": [True, False],\n \"enable_search\": [True, False],\n \"enable_app\": [True, False],\n }\n default_options = {\n \"enable_parse\": True,\n \"enable_search\": True,\n \"enable_app\": False,\n }\n\n @property\n def _settings_build(self):\n return getattr(self, \"settings_build\", self.settings)\n\n @property\n def _minimum_compiler_version(self):\n if Version(self.version) <= \"1.9.1\":\n return {\n \"gcc\": \"5\",\n }\n return {\n \"gcc\": \"7\", # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66297\n \"Visual Studio\": \"15\",\n \"msvc\": \"191\",\n }\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n\n def requirements(self):\n if self.options.enable_search:\n self.requires(\"xapian-core/1.4.19\")\n self.requires(\"zlib/[>=1.2.11 <2]\")\n if self.options.enable_app or self.options.enable_parse:\n # INFO: Doxygen uses upper case CMake variables to link/include IConv, so we are using patches for targets.\n self.requires(\"libiconv/1.17\")\n\n def compatibility(self):\n return [{\"settings\": [(\"build_type\", \"Release\")]}]\n\n def validate(self):\n minimum_compiler_version = 
self._minimum_compiler_version.get(str(self.settings.compiler))\n if minimum_compiler_version and Version(self.settings.compiler.version) < minimum_compiler_version:\n raise ConanInvalidConfiguration(f\"Compiler version too old. At least {minimum_compiler_version} is required.\")\n if Version(self.version) == \"1.8.18\":\n check_min_vs(self, \"191\")\n\n def build_requirements(self):\n if self._settings_build.os == \"Windows\":\n self.tool_requires(\"winflexbison/2.5.24\")\n else:\n self.tool_requires(\"flex/2.6.4\")\n self.tool_requires(\"bison/3.8.2\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version], strip_root=True)\n\n def generate(self):\n tc = CMakeToolchain(self)\n tc.variables[\"build_parse\"] = self.options.enable_parse\n tc.variables[\"build_search\"] = self.options.enable_search\n tc.variables[\"build_app\"] = self.options.enable_app\n tc.variables[\"use_libc++\"] = self.settings.compiler.get_safe(\"libcxx\") == \"libc++\"\n tc.variables[\"win_static\"] = is_msvc_static_runtime(self)\n tc.generate()\n\n deps = CMakeDeps(self)\n deps.generate()\n\n def build(self):\n apply_conandata_patches(self)\n cmake = CMake(self)\n cmake.configure()\n cmake.build()\n\n def package(self):\n copy(self, \"LICENSE\", src=self.source_folder, dst=os.path.join(self.package_folder, \"licenses\"))\n cmake = CMake(self)\n cmake.install()\n\n def package_info(self):\n self.cpp_info.set_property(\"cmake_find_mode\", \"none\")\n self.cpp_info.libdirs = []\n self.cpp_info.includedirs = []\n if self.settings.os in [\"Linux\", \"FreeBSD\"]:\n self.cpp_info.system_libs = [\"pthread\", \"m\"]\n\n # TODO: to remove in conan v2\n self.env_info.PATH.append(os.path.join(self.package_folder, \"bin\"))\n", "path": "recipes/doxygen/all/conanfile.py"}]}
| 1,644 | 507 |
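
The doxygen recipe patch above makes `libiconv` a conditional requirement driven by recipe options. As a rough, Conan-free sketch of option-driven requirement selection (the option and package names echo the recipe, but nothing here uses the real Conan API):

```python
def resolve_requirements(options):
    """Return the package references implied by recipe-style options."""
    requirements = []
    if options.get('enable_search'):
        requirements.append('xapian-core/1.4.19')
        requirements.append('zlib/[>=1.2.11 <2]')
    if options.get('enable_app') or options.get('enable_parse'):
        # iconv is needed whenever the app or the parser is built.
        requirements.append('libiconv/1.17')
    return requirements


if __name__ == '__main__':
    print(resolve_requirements({'enable_parse': True, 'enable_search': False}))
    # ['libiconv/1.17']
```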
gh_patches_debug_310
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-7454
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A header with Japanese text has no anchor link.
### Summary
I found that a header with Japanese text has no anchor link.
### Steps to reproduce
Code snippet:
```
import streamlit as st
st.header("セクション")
```
1. Run code snippet above.
2. Check if the header has anchor link or not.
**Expected behavior:**
The header ("セクション") has anchor link.
**Actual behavior:**
The header ("セクション") has no anchor link.
### Is this a regression?
No
### Debug info
- Streamlit version: Streamlit, version 1.10.0
- Python version: Python 3.8.10
- Using Conda
- OS version: Ubuntu 20.04.4 LTS
- Browser version: Chrome / Version 104.0.5112.101 (Official Build) (x86_64)
### Additional information
A header with Korean text or Chinese text also has no anchor link.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_title.py`
Content:
```
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 st.title("This title is awesome!")
18 st.title("This title is awesome too!", anchor="awesome-title")
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/e2e/scripts/st_title.py b/e2e/scripts/st_title.py
--- a/e2e/scripts/st_title.py
+++ b/e2e/scripts/st_title.py
@@ -16,3 +16,6 @@
st.title("This title is awesome!")
st.title("This title is awesome too!", anchor="awesome-title")
+
+st.title("日本語タイトル")
+st.title("その他の邦題", anchor="アンカー")
|
{"golden_diff": "diff --git a/e2e/scripts/st_title.py b/e2e/scripts/st_title.py\n--- a/e2e/scripts/st_title.py\n+++ b/e2e/scripts/st_title.py\n@@ -16,3 +16,6 @@\n \n st.title(\"This title is awesome!\")\n st.title(\"This title is awesome too!\", anchor=\"awesome-title\")\n+\n+st.title(\"\u65e5\u672c\u8a9e\u30bf\u30a4\u30c8\u30eb\")\n+st.title(\"\u305d\u306e\u4ed6\u306e\u90a6\u984c\", anchor=\"\u30a2\u30f3\u30ab\u30fc\")\n", "issue": "A header with Japanese text has no anchor link.\n### Summary\r\n\r\nI found that a header with Japanese text has no anchor link.\r\n\r\n### Steps to reproduce\r\n\r\nCode snippet:\r\n\r\n```\r\nimport streamlit as st\r\nst.header(\"\u30bb\u30af\u30b7\u30e7\u30f3\")\r\n```\r\n\r\n1. Run code snippet above.\r\n2. Check if the header has anchor link or not.\r\n\r\n**Expected behavior:**\r\n\r\nThe header (\"\u30bb\u30af\u30b7\u30e7\u30f3\") has anchor link.\r\n\r\n**Actual behavior:**\r\n\r\nThe header (\"\u30bb\u30af\u30b7\u30e7\u30f3\") has no anchor link.\r\n\r\n### Is this a regression?\r\n\r\nNo\r\n\r\n### Debug info\r\n\r\n- Streamlit version: Streamlit, version 1.10.0\r\n- Python version: Python 3.8.10\r\n- Using Conda\r\n- OS version: Ubuntu 20.04.4 LTS\r\n- Browser version: Chrome / Version 104.0.5112.101 (Official Build) (x86_64)\r\n\r\n### Additional information\r\n\r\nA header with Korean text or Chinese text also has no anchor link.\r\n\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nst.title(\"This title is awesome!\")\nst.title(\"This title is awesome too!\", anchor=\"awesome-title\")\n", "path": "e2e/scripts/st_title.py"}], "after_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nst.title(\"This title is awesome!\")\nst.title(\"This title is awesome too!\", anchor=\"awesome-title\")\n\nst.title(\"\u65e5\u672c\u8a9e\u30bf\u30a4\u30c8\u30eb\")\nst.title(\"\u305d\u306e\u4ed6\u306e\u90a6\u984c\", anchor=\"\u30a2\u30f3\u30ab\u30fc\")\n", "path": "e2e/scripts/st_title.py"}]}
| 690 | 98 |
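
The Streamlit issue above comes down to anchor slugs being dropped for headers with no ASCII word characters. Here is a small, illustrative Unicode-tolerant slugifier that keeps CJK headers anchorable; it is an assumption for illustration, not Streamlit's actual anchor implementation:

```python
import re


def anchor_slug(title):
    """Build an anchor slug, keeping Unicode word characters (e.g. CJK) intact."""
    # In Python 3, \w matches Unicode word characters by default, so a heading
    # like "セクション" is not reduced to an empty slug.
    words = re.findall(r'\w+', title)
    return '-'.join(words).lower()


if __name__ == '__main__':
    print(anchor_slug('This title is awesome!'))  # this-title-is-awesome
    print(anchor_slug('セクション'))                # セクション
```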
gh_patches_debug_41324
|
rasdani/github-patches
|
git_diff
|
tough-dev-school__education-backend-885
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Admin: show only enabled promo codes by default
We have accumulated a huge pile of promo codes, and it is now hard to get oriented when first opening the admin. By default, the admin should show only enabled promo codes.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/app/admin/filters.py`
Content:
```
1 from django.contrib import admin
2 from django.utils.translation import gettext_lazy as _
3
4
5 class BooleanFilter(admin.SimpleListFilter):
6 """
7 Abstract base class for simple boolean filter in admin. You should define only
8 `title`, unique `parameter_name` and two methods: `t` and `f`, returning a queryset
9 when filter is set to True and False respectively:
10 class HasClassesFilter(BooleanFilter):
11 title = _('Has classes')
12 parameter_name = 'has_classes'
13 def t(self, request, queryset):
14 return queryset.filter(classes__isnull=False).distinct('pk')
15 def n(self, request, queryset):
16 return queryset.filter(classes__isnull=True)
17 """
18 def lookups(self, request, model_admin):
19 return (
20 ('t', _('Yes')),
21 ('f', _('No')),
22 )
23
24 def queryset(self, request, queryset):
25 if not self.value():
26 return queryset
27 else:
28 if self.value() == 't':
29 return self.t(request, queryset)
30 else:
31 return self.f(request, queryset)
32
```
Path: `src/orders/admin/promocodes/admin.py`
Content:
```
1 from django.urls import reverse
2 from django.utils.safestring import mark_safe
3 from django.utils.translation import gettext_lazy as _
4
5 from app.admin import ModelAdmin, admin
6 from orders.admin.promocodes import actions
7 from orders.models import PromoCode
8
9
10 @admin.register(PromoCode)
11 class PromoCodeAdmin(ModelAdmin):
12 list_display = [
13 'id',
14 'name',
15 'discount_percent',
16 'discount_value',
17 'order_count',
18 'comment',
19 'active',
20 ]
21
22 list_editable = [
23 'active',
24 ]
25
26 list_filter = [
27 'active',
28 ]
29
30 actions = [actions.deactivate]
31
32 def get_queryset(self, request):
33 return super().get_queryset(request) \
34 .with_order_count()
35
36 @mark_safe
37 @admin.display(description=_('Order count'), ordering='order_count')
38 def order_count(self, obj=None):
39 if hasattr(obj, 'order_count') and obj.order_count:
40 orders_url = reverse('admin:orders_order_changelist')
41 return f'<a href="{orders_url}?is_paid=t&promocode_id={obj.id}">{obj.order_count}</a>'
42
43 return '—'
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/app/admin/filters.py b/src/app/admin/filters.py
--- a/src/app/admin/filters.py
+++ b/src/app/admin/filters.py
@@ -12,7 +12,7 @@
parameter_name = 'has_classes'
def t(self, request, queryset):
return queryset.filter(classes__isnull=False).distinct('pk')
- def n(self, request, queryset):
+ def f(self, request, queryset):
return queryset.filter(classes__isnull=True)
"""
def lookups(self, request, model_admin):
@@ -24,8 +24,22 @@
def queryset(self, request, queryset):
if not self.value():
return queryset
- else:
- if self.value() == 't':
- return self.t(request, queryset)
- else:
- return self.f(request, queryset)
+
+ if self.value() == 't':
+ return self.t(request, queryset)
+
+ return self.f(request, queryset)
+
+
+class DefaultTrueBooleanFilter(BooleanFilter):
+ def queryset(self, request, queryset):
+ if not self.value() or self.value() == 't':
+ return self.t(request, queryset)
+
+ return self.f(request, queryset)
+
+
+__all__ = [
+ 'BooleanFilter',
+ 'DefaultTrueBooleanFilter',
+]
diff --git a/src/orders/admin/promocodes/admin.py b/src/orders/admin/promocodes/admin.py
--- a/src/orders/admin/promocodes/admin.py
+++ b/src/orders/admin/promocodes/admin.py
@@ -1,31 +1,45 @@
-from django.urls import reverse
-from django.utils.safestring import mark_safe
from django.utils.translation import gettext_lazy as _
from app.admin import ModelAdmin, admin
+from app.admin.filters import DefaultTrueBooleanFilter
from orders.admin.promocodes import actions
from orders.models import PromoCode
+class PromodeActiveFilter(DefaultTrueBooleanFilter):
+ title = _('Active')
+ parameter_name = 'is_active'
+
+ def t(self, request, queryset):
+ return queryset.filter(active=True)
+
+ def f(self, request, queryset):
+ return queryset.filter(active=False)
+
+
@admin.register(PromoCode)
class PromoCodeAdmin(ModelAdmin):
- list_display = [
+ list_display = (
'id',
'name',
- 'discount_percent',
- 'discount_value',
+ 'discount',
'order_count',
'comment',
'active',
- ]
+ )
list_editable = [
'active',
]
- list_filter = [
- 'active',
- ]
+ list_filter = (
+ PromodeActiveFilter,
+ )
+
+ list_display_links = (
+ 'id',
+ 'name',
+ )
actions = [actions.deactivate]
@@ -33,11 +47,19 @@
return super().get_queryset(request) \
.with_order_count()
- @mark_safe
@admin.display(description=_('Order count'), ordering='order_count')
- def order_count(self, obj=None):
+ def order_count(self, obj: PromoCode | None = None) -> str:
if hasattr(obj, 'order_count') and obj.order_count:
- orders_url = reverse('admin:orders_order_changelist')
- return f'<a href="{orders_url}?is_paid=t&promocode_id={obj.id}">{obj.order_count}</a>'
+ return str(obj.order_count)
return '—'
+
+ @admin.display(description=_('Discount'), ordering='discount_percent')
+ def discount(self, obj: PromoCode | None = None) -> str:
+ if not obj:
+ return '—'
+
+ if obj.discount_value is not None:
+ return f'{obj.discount_value} ₽'
+
+ return f'{obj.discount_percent} %'
|
{"golden_diff": "diff --git a/src/app/admin/filters.py b/src/app/admin/filters.py\n--- a/src/app/admin/filters.py\n+++ b/src/app/admin/filters.py\n@@ -12,7 +12,7 @@\n parameter_name = 'has_classes'\n def t(self, request, queryset):\n return queryset.filter(classes__isnull=False).distinct('pk')\n- def n(self, request, queryset):\n+ def f(self, request, queryset):\n return queryset.filter(classes__isnull=True)\n \"\"\"\n def lookups(self, request, model_admin):\n@@ -24,8 +24,22 @@\n def queryset(self, request, queryset):\n if not self.value():\n return queryset\n- else:\n- if self.value() == 't':\n- return self.t(request, queryset)\n- else:\n- return self.f(request, queryset)\n+\n+ if self.value() == 't':\n+ return self.t(request, queryset)\n+\n+ return self.f(request, queryset)\n+\n+\n+class DefaultTrueBooleanFilter(BooleanFilter):\n+ def queryset(self, request, queryset):\n+ if not self.value() or self.value() == 't':\n+ return self.t(request, queryset)\n+\n+ return self.f(request, queryset)\n+\n+\n+__all__ = [\n+ 'BooleanFilter',\n+ 'DefaultTrueBooleanFilter',\n+]\ndiff --git a/src/orders/admin/promocodes/admin.py b/src/orders/admin/promocodes/admin.py\n--- a/src/orders/admin/promocodes/admin.py\n+++ b/src/orders/admin/promocodes/admin.py\n@@ -1,31 +1,45 @@\n-from django.urls import reverse\n-from django.utils.safestring import mark_safe\n from django.utils.translation import gettext_lazy as _\n \n from app.admin import ModelAdmin, admin\n+from app.admin.filters import DefaultTrueBooleanFilter\n from orders.admin.promocodes import actions\n from orders.models import PromoCode\n \n \n+class PromodeActiveFilter(DefaultTrueBooleanFilter):\n+ title = _('Active')\n+ parameter_name = 'is_active'\n+\n+ def t(self, request, queryset):\n+ return queryset.filter(active=True)\n+\n+ def f(self, request, queryset):\n+ return queryset.filter(active=False)\n+\n+\n @admin.register(PromoCode)\n class PromoCodeAdmin(ModelAdmin):\n- list_display = [\n+ list_display = (\n 'id',\n 'name',\n- 'discount_percent',\n- 'discount_value',\n+ 'discount',\n 'order_count',\n 'comment',\n 'active',\n- ]\n+ )\n \n list_editable = [\n 'active',\n ]\n \n- list_filter = [\n- 'active',\n- ]\n+ list_filter = (\n+ PromodeActiveFilter,\n+ )\n+\n+ list_display_links = (\n+ 'id',\n+ 'name',\n+ )\n \n actions = [actions.deactivate]\n \n@@ -33,11 +47,19 @@\n return super().get_queryset(request) \\\n .with_order_count()\n \n- @mark_safe\n @admin.display(description=_('Order count'), ordering='order_count')\n- def order_count(self, obj=None):\n+ def order_count(self, obj: PromoCode | None = None) -> str:\n if hasattr(obj, 'order_count') and obj.order_count:\n- orders_url = reverse('admin:orders_order_changelist')\n- return f'<a href=\"{orders_url}?is_paid=t&promocode_id={obj.id}\">{obj.order_count}</a>'\n+ return str(obj.order_count)\n \n return '\u2014'\n+\n+ @admin.display(description=_('Discount'), ordering='discount_percent')\n+ def discount(self, obj: PromoCode | None = None) -> str:\n+ if not obj:\n+ return '\u2014'\n+\n+ if obj.discount_value is not None:\n+ return f'{obj.discount_value} \u20bd'\n+\n+ return f'{obj.discount_percent} %'\n", "issue": "\u0410\u0434\u043c\u0438\u043d\u043a\u0430: \u043f\u043e-\u0443\u043c\u043e\u043b\u0447\u0430\u043d\u0438\u044e \u043f\u043e\u043a\u0430\u0437\u044b\u0432\u0430\u0442\u044c \u0442\u043e\u043b\u044c\u043a\u043e \u0432\u043a\u043b\u044e\u0447\u0435\u043d\u043d\u044b\u0435 \u043f\u0440\u043e\u043c\u043e\u043a\u043e\u0434\u044b\n\u0423 \u043d\u0430\u0441 
\u043d\u0430\u043a\u043e\u043f\u0438\u043b\u0430\u0441\u044c \u043e\u0433\u0440\u043e\u043c\u043d\u0430\u044f \u0433\u043e\u0440\u0430 \u043f\u0440\u043e\u043c\u043e\u043a\u043e\u0434\u043e\u0432, \u0438 \u0442\u0435\u043f\u0435\u0440\u044c \u043f\u0440\u0438 \u043f\u0435\u0440\u0432\u043e\u043c \u0432\u0445\u043e\u0434\u0435 \u0432 \u0430\u0434\u043c\u0438\u043d\u043a\u0443 \u0441\u043b\u043e\u0436\u043d\u043e \u043e\u0440\u0438\u0435\u043d\u0442\u0438\u0440\u043e\u0432\u0430\u0442\u044c\u0441\u044f. \u041d\u0443\u0436\u043d\u043e \u0441\u0434\u0435\u043b\u0430\u0442\u044c, \u0447\u0442\u043e\u0431\u044b \u043f\u043e-\u0443\u043c\u043e\u043b\u0447\u0430\u043d\u0438\u044e \u0432 \u0430\u0434\u043c\u0438\u043d\u043a\u0435 \u043f\u043e\u043a\u0430\u0437\u044b\u0432\u0430\u043b\u0438\u0441\u044c \u0442\u043e\u043b\u044c\u043a\u043e \u0432\u043a\u043b\u044e\u0447\u0435\u043d\u043d\u044b\u0435 \u043f\u0440\u043e\u043c\u043e\u043a\u043e\u0434\u044b.\n", "before_files": [{"content": "from django.contrib import admin\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass BooleanFilter(admin.SimpleListFilter):\n \"\"\"\n Abstract base class for simple boolean filter in admin. You should define only\n `title`, unique `parameter_name` and two methods: `t` and `f`, returning a queryset\n when filter is set to True and False respectively:\n class HasClassesFilter(BooleanFilter):\n title = _('Has classes')\n parameter_name = 'has_classes'\n def t(self, request, queryset):\n return queryset.filter(classes__isnull=False).distinct('pk')\n def n(self, request, queryset):\n return queryset.filter(classes__isnull=True)\n \"\"\"\n def lookups(self, request, model_admin):\n return (\n ('t', _('Yes')),\n ('f', _('No')),\n )\n\n def queryset(self, request, queryset):\n if not self.value():\n return queryset\n else:\n if self.value() == 't':\n return self.t(request, queryset)\n else:\n return self.f(request, queryset)\n", "path": "src/app/admin/filters.py"}, {"content": "from django.urls import reverse\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\nfrom app.admin import ModelAdmin, admin\nfrom orders.admin.promocodes import actions\nfrom orders.models import PromoCode\n\n\[email protected](PromoCode)\nclass PromoCodeAdmin(ModelAdmin):\n list_display = [\n 'id',\n 'name',\n 'discount_percent',\n 'discount_value',\n 'order_count',\n 'comment',\n 'active',\n ]\n\n list_editable = [\n 'active',\n ]\n\n list_filter = [\n 'active',\n ]\n\n actions = [actions.deactivate]\n\n def get_queryset(self, request):\n return super().get_queryset(request) \\\n .with_order_count()\n\n @mark_safe\n @admin.display(description=_('Order count'), ordering='order_count')\n def order_count(self, obj=None):\n if hasattr(obj, 'order_count') and obj.order_count:\n orders_url = reverse('admin:orders_order_changelist')\n return f'<a href=\"{orders_url}?is_paid=t&promocode_id={obj.id}\">{obj.order_count}</a>'\n\n return '\u2014'\n", "path": "src/orders/admin/promocodes/admin.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass BooleanFilter(admin.SimpleListFilter):\n \"\"\"\n Abstract base class for simple boolean filter in admin. 
You should define only\n `title`, unique `parameter_name` and two methods: `t` and `f`, returning a queryset\n when filter is set to True and False respectively:\n class HasClassesFilter(BooleanFilter):\n title = _('Has classes')\n parameter_name = 'has_classes'\n def t(self, request, queryset):\n return queryset.filter(classes__isnull=False).distinct('pk')\n def f(self, request, queryset):\n return queryset.filter(classes__isnull=True)\n \"\"\"\n def lookups(self, request, model_admin):\n return (\n ('t', _('Yes')),\n ('f', _('No')),\n )\n\n def queryset(self, request, queryset):\n if not self.value():\n return queryset\n\n if self.value() == 't':\n return self.t(request, queryset)\n\n return self.f(request, queryset)\n\n\nclass DefaultTrueBooleanFilter(BooleanFilter):\n def queryset(self, request, queryset):\n if not self.value() or self.value() == 't':\n return self.t(request, queryset)\n\n return self.f(request, queryset)\n\n\n__all__ = [\n 'BooleanFilter',\n 'DefaultTrueBooleanFilter',\n]\n", "path": "src/app/admin/filters.py"}, {"content": "from django.utils.translation import gettext_lazy as _\n\nfrom app.admin import ModelAdmin, admin\nfrom app.admin.filters import DefaultTrueBooleanFilter\nfrom orders.admin.promocodes import actions\nfrom orders.models import PromoCode\n\n\nclass PromodeActiveFilter(DefaultTrueBooleanFilter):\n title = _('Active')\n parameter_name = 'is_active'\n\n def t(self, request, queryset):\n return queryset.filter(active=True)\n\n def f(self, request, queryset):\n return queryset.filter(active=False)\n\n\[email protected](PromoCode)\nclass PromoCodeAdmin(ModelAdmin):\n list_display = (\n 'id',\n 'name',\n 'discount',\n 'order_count',\n 'comment',\n 'active',\n )\n\n list_editable = [\n 'active',\n ]\n\n list_filter = (\n PromodeActiveFilter,\n )\n\n list_display_links = (\n 'id',\n 'name',\n )\n\n actions = [actions.deactivate]\n\n def get_queryset(self, request):\n return super().get_queryset(request) \\\n .with_order_count()\n\n @admin.display(description=_('Order count'), ordering='order_count')\n def order_count(self, obj: PromoCode | None = None) -> str:\n if hasattr(obj, 'order_count') and obj.order_count:\n return str(obj.order_count)\n\n return '\u2014'\n\n @admin.display(description=_('Discount'), ordering='discount_percent')\n def discount(self, obj: PromoCode | None = None) -> str:\n if not obj:\n return '\u2014'\n\n if obj.discount_value is not None:\n return f'{obj.discount_value} \u20bd'\n\n return f'{obj.discount_percent} %'\n", "path": "src/orders/admin/promocodes/admin.py"}]}
| 979 | 862 |
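
The promo-code patch above introduces a `DefaultTrueBooleanFilter` whose unset state behaves like "yes", so the admin lists only active promo codes by default. A dependency-free sketch of that "subclass flips the default" pattern follows; the classes merely mimic the filter protocol and are not Django admin code:

```python
class BooleanFilter:
    """Mimics the admin filter protocol: value is '' (unset), 't', or 'f'."""

    def __init__(self, value=''):
        self.value = value

    def apply(self, rows):
        if not self.value:
            return rows               # unset: show everything
        return self.true(rows) if self.value == 't' else self.false(rows)

    def true(self, rows):
        return [r for r in rows if r['active']]

    def false(self, rows):
        return [r for r in rows if not r['active']]


class DefaultTrueBooleanFilter(BooleanFilter):
    def apply(self, rows):
        # Unset behaves like 't', so only active rows are shown by default.
        if not self.value or self.value == 't':
            return self.true(rows)
        return self.false(rows)


if __name__ == '__main__':
    promocodes = [{'name': 'OLD10', 'active': False}, {'name': 'NEW20', 'active': True}]
    print(DefaultTrueBooleanFilter().apply(promocodes))  # only NEW20 remains
```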
gh_patches_debug_56268
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-4910
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Validate profile fields on form
Related code
https://github.com/rtfd/readthedocs.org/blob/164800694a25d769234c6e7019c483f347fe9226/readthedocs/core/forms.py#L20-L46
This will raise an exception if the length is greater than the model
Sentry issue https://sentry.io/read-the-docs/readthedocs-org/issues/666774301/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/core/forms.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Forms for core app."""
3
4 from __future__ import (
5 absolute_import, division, print_function, unicode_literals)
6
7 import logging
8 from builtins import object
9
10 from django import forms
11 from django.contrib.auth.models import User
12 from django.forms.fields import CharField
13 from django.utils.translation import ugettext_lazy as _
14
15 from .models import UserProfile
16
17 log = logging.getLogger(__name__)
18
19
20 class UserProfileForm(forms.ModelForm):
21 first_name = CharField(label=_('First name'), required=False)
22 last_name = CharField(label=_('Last name'), required=False)
23
24 class Meta(object):
25 model = UserProfile
26 # Don't allow users edit someone else's user page
27 fields = ['first_name', 'last_name', 'homepage']
28
29 def __init__(self, *args, **kwargs):
30 super(UserProfileForm, self).__init__(*args, **kwargs)
31 try:
32 self.fields['first_name'].initial = self.instance.user.first_name
33 self.fields['last_name'].initial = self.instance.user.last_name
34 except AttributeError:
35 pass
36
37 def save(self, commit=True):
38 first_name = self.cleaned_data.pop('first_name', None)
39 last_name = self.cleaned_data.pop('last_name', None)
40 profile = super(UserProfileForm, self).save(commit=commit)
41 if commit:
42 user = profile.user
43 user.first_name = first_name
44 user.last_name = last_name
45 user.save()
46 return profile
47
48
49 class UserDeleteForm(forms.ModelForm):
50 username = CharField(
51 label=_('Username'),
52 help_text=_('Please type your username to confirm.'),
53 )
54
55 class Meta(object):
56 model = User
57 fields = ['username']
58
59 def clean_username(self):
60 data = self.cleaned_data['username']
61
62 if self.instance.username != data:
63 raise forms.ValidationError(_('Username does not match!'))
64
65 return data
66
67
68 class UserAdvertisingForm(forms.ModelForm):
69 class Meta(object):
70 model = UserProfile
71 fields = ['allow_ads']
72
73
74 class FacetField(forms.MultipleChoiceField):
75
76 """
77 For filtering searches on a facet.
78
79 Has validation for the format of facet values.
80 """
81
82 def valid_value(self, value):
83 """
84 Although this is a choice field, no choices need to be supplied.
85
86 Instead, we just validate that the value is in the correct format for
87 facet filtering (facet_name:value)
88 """
89 if ':' not in value:
90 return False
91 return True
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/readthedocs/core/forms.py b/readthedocs/core/forms.py
--- a/readthedocs/core/forms.py
+++ b/readthedocs/core/forms.py
@@ -18,8 +18,8 @@
class UserProfileForm(forms.ModelForm):
- first_name = CharField(label=_('First name'), required=False)
- last_name = CharField(label=_('Last name'), required=False)
+ first_name = CharField(label=_('First name'), required=False, max_length=30)
+ last_name = CharField(label=_('Last name'), required=False, max_length=30)
class Meta(object):
model = UserProfile
|
{"golden_diff": "diff --git a/readthedocs/core/forms.py b/readthedocs/core/forms.py\n--- a/readthedocs/core/forms.py\n+++ b/readthedocs/core/forms.py\n@@ -18,8 +18,8 @@\n \n \n class UserProfileForm(forms.ModelForm):\n- first_name = CharField(label=_('First name'), required=False)\n- last_name = CharField(label=_('Last name'), required=False)\n+ first_name = CharField(label=_('First name'), required=False, max_length=30)\n+ last_name = CharField(label=_('Last name'), required=False, max_length=30)\n \n class Meta(object):\n model = UserProfile\n", "issue": "Validate profile fields on form\nRelated code\r\n\r\nhttps://github.com/rtfd/readthedocs.org/blob/164800694a25d769234c6e7019c483f347fe9226/readthedocs/core/forms.py#L20-L46\r\n\r\nThis will raise an exception if the length is greater than the model\r\n\r\nSentry issue https://sentry.io/read-the-docs/readthedocs-org/issues/666774301/\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Forms for core app.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport logging\nfrom builtins import object\n\nfrom django import forms\nfrom django.contrib.auth.models import User\nfrom django.forms.fields import CharField\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import UserProfile\n\nlog = logging.getLogger(__name__)\n\n\nclass UserProfileForm(forms.ModelForm):\n first_name = CharField(label=_('First name'), required=False)\n last_name = CharField(label=_('Last name'), required=False)\n\n class Meta(object):\n model = UserProfile\n # Don't allow users edit someone else's user page\n fields = ['first_name', 'last_name', 'homepage']\n\n def __init__(self, *args, **kwargs):\n super(UserProfileForm, self).__init__(*args, **kwargs)\n try:\n self.fields['first_name'].initial = self.instance.user.first_name\n self.fields['last_name'].initial = self.instance.user.last_name\n except AttributeError:\n pass\n\n def save(self, commit=True):\n first_name = self.cleaned_data.pop('first_name', None)\n last_name = self.cleaned_data.pop('last_name', None)\n profile = super(UserProfileForm, self).save(commit=commit)\n if commit:\n user = profile.user\n user.first_name = first_name\n user.last_name = last_name\n user.save()\n return profile\n\n\nclass UserDeleteForm(forms.ModelForm):\n username = CharField(\n label=_('Username'),\n help_text=_('Please type your username to confirm.'),\n )\n\n class Meta(object):\n model = User\n fields = ['username']\n\n def clean_username(self):\n data = self.cleaned_data['username']\n\n if self.instance.username != data:\n raise forms.ValidationError(_('Username does not match!'))\n\n return data\n\n\nclass UserAdvertisingForm(forms.ModelForm):\n class Meta(object):\n model = UserProfile\n fields = ['allow_ads']\n\n\nclass FacetField(forms.MultipleChoiceField):\n\n \"\"\"\n For filtering searches on a facet.\n\n Has validation for the format of facet values.\n \"\"\"\n\n def valid_value(self, value):\n \"\"\"\n Although this is a choice field, no choices need to be supplied.\n\n Instead, we just validate that the value is in the correct format for\n facet filtering (facet_name:value)\n \"\"\"\n if ':' not in value:\n return False\n return True\n", "path": "readthedocs/core/forms.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Forms for core app.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport logging\nfrom builtins import object\n\nfrom django import forms\nfrom 
django.contrib.auth.models import User\nfrom django.forms.fields import CharField\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import UserProfile\n\nlog = logging.getLogger(__name__)\n\n\nclass UserProfileForm(forms.ModelForm):\n first_name = CharField(label=_('First name'), required=False, max_length=30)\n last_name = CharField(label=_('Last name'), required=False, max_length=30)\n\n class Meta(object):\n model = UserProfile\n # Don't allow users edit someone else's user page\n fields = ['first_name', 'last_name', 'homepage']\n\n def __init__(self, *args, **kwargs):\n super(UserProfileForm, self).__init__(*args, **kwargs)\n try:\n self.fields['first_name'].initial = self.instance.user.first_name\n self.fields['last_name'].initial = self.instance.user.last_name\n except AttributeError:\n pass\n\n def save(self, commit=True):\n first_name = self.cleaned_data.pop('first_name', None)\n last_name = self.cleaned_data.pop('last_name', None)\n profile = super(UserProfileForm, self).save(commit=commit)\n if commit:\n user = profile.user\n user.first_name = first_name\n user.last_name = last_name\n user.save()\n return profile\n\n\nclass UserDeleteForm(forms.ModelForm):\n username = CharField(\n label=_('Username'),\n help_text=_('Please type your username to confirm.'),\n )\n\n class Meta(object):\n model = User\n fields = ['username']\n\n def clean_username(self):\n data = self.cleaned_data['username']\n\n if self.instance.username != data:\n raise forms.ValidationError(_('Username does not match!'))\n\n return data\n\n\nclass UserAdvertisingForm(forms.ModelForm):\n class Meta(object):\n model = UserProfile\n fields = ['allow_ads']\n\n\nclass FacetField(forms.MultipleChoiceField):\n\n \"\"\"\n For filtering searches on a facet.\n\n Has validation for the format of facet values.\n \"\"\"\n\n def valid_value(self, value):\n \"\"\"\n Although this is a choice field, no choices need to be supplied.\n\n Instead, we just validate that the value is in the correct format for\n facet filtering (facet_name:value)\n \"\"\"\n if ':' not in value:\n return False\n return True\n", "path": "readthedocs/core/forms.py"}]}
| 1,092 | 138 |
gh_patches_debug_34769
|
rasdani/github-patches
|
git_diff
|
napari__napari-3016
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Opacity slider label should be between 0 and 1
## 🐛 Bug
Opacity slider label should be between 0 and 1, not 0 and 100. This will remove the need for normalization on the slider: https://github.com/napari/napari/blob/aade148d8e5cb339bb2981ab4d1081ae5d2747e0/napari/_qt/layer_controls/qt_layer_controls_base.py#L79

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/_qt/layer_controls/qt_layer_controls_base.py`
Content:
```
1 from qtpy.QtCore import Qt
2 from qtpy.QtWidgets import QComboBox, QFrame, QGridLayout
3 from superqt import QLabeledSlider as QSlider
4
5 from ...layers.base._base_constants import BLENDING_TRANSLATIONS
6 from ...utils.events import disconnect_events
7
8
9 class QtLayerControls(QFrame):
10 """Superclass for all the other LayerControl classes.
11
12 This class is never directly instantiated anywhere.
13
14 Parameters
15 ----------
16 layer : napari.layers.Layer
17 An instance of a napari layer.
18
19 Attributes
20 ----------
21 blendComboBox : qtpy.QtWidgets.QComboBox
22 Drowpdown widget to select blending mode of layer.
23 grid_layout : qtpy.QtWidgets.QGridLayout
24 Layout of Qt widget controls for the layer.
25 layer : napari.layers.Layer
26 An instance of a napari layer.
27 opacitySlider : qtpy.QtWidgets.QSlider
28 Slider controlling opacity of the layer.
29 """
30
31 def __init__(self, layer):
32 super().__init__()
33
34 self.layer = layer
35 self.layer.events.blending.connect(self._on_blending_change)
36 self.layer.events.opacity.connect(self._on_opacity_change)
37
38 self.setAttribute(Qt.WA_DeleteOnClose)
39
40 self.setObjectName('layer')
41 self.setMouseTracking(True)
42
43 self.grid_layout = QGridLayout(self)
44 self.grid_layout.setContentsMargins(0, 0, 0, 0)
45 self.grid_layout.setSpacing(2)
46 self.grid_layout.setColumnMinimumWidth(0, 86)
47 self.grid_layout.setColumnStretch(1, 1)
48 self.setLayout(self.grid_layout)
49
50 sld = QSlider(Qt.Horizontal, parent=self)
51 sld.setFocusPolicy(Qt.NoFocus)
52 sld.setMinimum(0)
53 sld.setMaximum(100)
54 sld.setSingleStep(1)
55 sld.valueChanged.connect(self.changeOpacity)
56 self.opacitySlider = sld
57 self._on_opacity_change()
58
59 blend_comboBox = QComboBox(self)
60 for index, (data, text) in enumerate(BLENDING_TRANSLATIONS.items()):
61 data = data.value
62 blend_comboBox.addItem(text, data)
63 if data == self.layer.blending:
64 blend_comboBox.setCurrentIndex(index)
65
66 blend_comboBox.activated[str].connect(self.changeBlending)
67 self.blendComboBox = blend_comboBox
68
69 def changeOpacity(self, value):
70 """Change opacity value on the layer model.
71
72 Parameters
73 ----------
74 value : float
75 Opacity value for shapes.
76 Input range 0 - 100 (transparent to fully opaque).
77 """
78 with self.layer.events.blocker(self._on_opacity_change):
79 self.layer.opacity = value / 100
80
81 def changeBlending(self, text):
82 """Change blending mode on the layer model.
83
84 Parameters
85 ----------
86 text : str
87 Name of blending mode, eg: 'translucent', 'additive', 'opaque'.
88 """
89 self.layer.blending = self.blendComboBox.currentData()
90
91 def _on_opacity_change(self, event=None):
92 """Receive layer model opacity change event and update opacity slider.
93
94 Parameters
95 ----------
96 event : napari.utils.event.Event, optional
97 The napari event that triggered this method, by default None.
98 """
99 with self.layer.events.opacity.blocker():
100 self.opacitySlider.setValue(int(self.layer.opacity * 100))
101
102 def _on_blending_change(self, event=None):
103 """Receive layer model blending mode change event and update slider.
104
105 Parameters
106 ----------
107 event : napari.utils.event.Event, optional
108 The napari event that triggered this method, by default None.
109 """
110 with self.layer.events.blending.blocker():
111 self.blendComboBox.setCurrentIndex(
112 self.blendComboBox.findData(self.layer.blending)
113 )
114
115 def close(self):
116 """Disconnect events when widget is closing."""
117 disconnect_events(self.layer.events, self)
118 for child in self.children():
119 close_method = getattr(child, 'close', None)
120 if close_method is not None:
121 close_method()
122 super().close()
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/napari/_qt/layer_controls/qt_layer_controls_base.py b/napari/_qt/layer_controls/qt_layer_controls_base.py
--- a/napari/_qt/layer_controls/qt_layer_controls_base.py
+++ b/napari/_qt/layer_controls/qt_layer_controls_base.py
@@ -1,6 +1,6 @@
from qtpy.QtCore import Qt
from qtpy.QtWidgets import QComboBox, QFrame, QGridLayout
-from superqt import QLabeledSlider as QSlider
+from superqt import QLabeledDoubleSlider
from ...layers.base._base_constants import BLENDING_TRANSLATIONS
from ...utils.events import disconnect_events
@@ -47,11 +47,11 @@
self.grid_layout.setColumnStretch(1, 1)
self.setLayout(self.grid_layout)
- sld = QSlider(Qt.Horizontal, parent=self)
+ sld = QLabeledDoubleSlider(Qt.Horizontal, parent=self)
sld.setFocusPolicy(Qt.NoFocus)
sld.setMinimum(0)
- sld.setMaximum(100)
- sld.setSingleStep(1)
+ sld.setMaximum(1)
+ sld.setSingleStep(0.01)
sld.valueChanged.connect(self.changeOpacity)
self.opacitySlider = sld
self._on_opacity_change()
@@ -76,7 +76,7 @@
Input range 0 - 100 (transparent to fully opaque).
"""
with self.layer.events.blocker(self._on_opacity_change):
- self.layer.opacity = value / 100
+ self.layer.opacity = value
def changeBlending(self, text):
"""Change blending mode on the layer model.
@@ -97,7 +97,7 @@
The napari event that triggered this method, by default None.
"""
with self.layer.events.opacity.blocker():
- self.opacitySlider.setValue(int(self.layer.opacity * 100))
+ self.opacitySlider.setValue(self.layer.opacity)
def _on_blending_change(self, event=None):
"""Receive layer model blending mode change event and update slider.
|
{"golden_diff": "diff --git a/napari/_qt/layer_controls/qt_layer_controls_base.py b/napari/_qt/layer_controls/qt_layer_controls_base.py\n--- a/napari/_qt/layer_controls/qt_layer_controls_base.py\n+++ b/napari/_qt/layer_controls/qt_layer_controls_base.py\n@@ -1,6 +1,6 @@\n from qtpy.QtCore import Qt\n from qtpy.QtWidgets import QComboBox, QFrame, QGridLayout\n-from superqt import QLabeledSlider as QSlider\n+from superqt import QLabeledDoubleSlider\n \n from ...layers.base._base_constants import BLENDING_TRANSLATIONS\n from ...utils.events import disconnect_events\n@@ -47,11 +47,11 @@\n self.grid_layout.setColumnStretch(1, 1)\n self.setLayout(self.grid_layout)\n \n- sld = QSlider(Qt.Horizontal, parent=self)\n+ sld = QLabeledDoubleSlider(Qt.Horizontal, parent=self)\n sld.setFocusPolicy(Qt.NoFocus)\n sld.setMinimum(0)\n- sld.setMaximum(100)\n- sld.setSingleStep(1)\n+ sld.setMaximum(1)\n+ sld.setSingleStep(0.01)\n sld.valueChanged.connect(self.changeOpacity)\n self.opacitySlider = sld\n self._on_opacity_change()\n@@ -76,7 +76,7 @@\n Input range 0 - 100 (transparent to fully opaque).\n \"\"\"\n with self.layer.events.blocker(self._on_opacity_change):\n- self.layer.opacity = value / 100\n+ self.layer.opacity = value\n \n def changeBlending(self, text):\n \"\"\"Change blending mode on the layer model.\n@@ -97,7 +97,7 @@\n The napari event that triggered this method, by default None.\n \"\"\"\n with self.layer.events.opacity.blocker():\n- self.opacitySlider.setValue(int(self.layer.opacity * 100))\n+ self.opacitySlider.setValue(self.layer.opacity)\n \n def _on_blending_change(self, event=None):\n \"\"\"Receive layer model blending mode change event and update slider.\n", "issue": "Opacity slider label should be between 0 and 1\n## \ud83d\udc1b Bug\r\n\r\nOpacity slider label should be between 0 and 1 not 0 and 100. 
This will remove need for normalization on slider https://github.com/napari/napari/blob/aade148d8e5cb339bb2981ab4d1081ae5d2747e0/napari/_qt/layer_controls/qt_layer_controls_base.py#L79\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from qtpy.QtCore import Qt\nfrom qtpy.QtWidgets import QComboBox, QFrame, QGridLayout\nfrom superqt import QLabeledSlider as QSlider\n\nfrom ...layers.base._base_constants import BLENDING_TRANSLATIONS\nfrom ...utils.events import disconnect_events\n\n\nclass QtLayerControls(QFrame):\n \"\"\"Superclass for all the other LayerControl classes.\n\n This class is never directly instantiated anywhere.\n\n Parameters\n ----------\n layer : napari.layers.Layer\n An instance of a napari layer.\n\n Attributes\n ----------\n blendComboBox : qtpy.QtWidgets.QComboBox\n Drowpdown widget to select blending mode of layer.\n grid_layout : qtpy.QtWidgets.QGridLayout\n Layout of Qt widget controls for the layer.\n layer : napari.layers.Layer\n An instance of a napari layer.\n opacitySlider : qtpy.QtWidgets.QSlider\n Slider controlling opacity of the layer.\n \"\"\"\n\n def __init__(self, layer):\n super().__init__()\n\n self.layer = layer\n self.layer.events.blending.connect(self._on_blending_change)\n self.layer.events.opacity.connect(self._on_opacity_change)\n\n self.setAttribute(Qt.WA_DeleteOnClose)\n\n self.setObjectName('layer')\n self.setMouseTracking(True)\n\n self.grid_layout = QGridLayout(self)\n self.grid_layout.setContentsMargins(0, 0, 0, 0)\n self.grid_layout.setSpacing(2)\n self.grid_layout.setColumnMinimumWidth(0, 86)\n self.grid_layout.setColumnStretch(1, 1)\n self.setLayout(self.grid_layout)\n\n sld = QSlider(Qt.Horizontal, parent=self)\n sld.setFocusPolicy(Qt.NoFocus)\n sld.setMinimum(0)\n sld.setMaximum(100)\n sld.setSingleStep(1)\n sld.valueChanged.connect(self.changeOpacity)\n self.opacitySlider = sld\n self._on_opacity_change()\n\n blend_comboBox = QComboBox(self)\n for index, (data, text) in enumerate(BLENDING_TRANSLATIONS.items()):\n data = data.value\n blend_comboBox.addItem(text, data)\n if data == self.layer.blending:\n blend_comboBox.setCurrentIndex(index)\n\n blend_comboBox.activated[str].connect(self.changeBlending)\n self.blendComboBox = blend_comboBox\n\n def changeOpacity(self, value):\n \"\"\"Change opacity value on the layer model.\n\n Parameters\n ----------\n value : float\n Opacity value for shapes.\n Input range 0 - 100 (transparent to fully opaque).\n \"\"\"\n with self.layer.events.blocker(self._on_opacity_change):\n self.layer.opacity = value / 100\n\n def changeBlending(self, text):\n \"\"\"Change blending mode on the layer model.\n\n Parameters\n ----------\n text : str\n Name of blending mode, eg: 'translucent', 'additive', 'opaque'.\n \"\"\"\n self.layer.blending = self.blendComboBox.currentData()\n\n def _on_opacity_change(self, event=None):\n \"\"\"Receive layer model opacity change event and update opacity slider.\n\n Parameters\n ----------\n event : napari.utils.event.Event, optional\n The napari event that triggered this method, by default None.\n \"\"\"\n with self.layer.events.opacity.blocker():\n self.opacitySlider.setValue(int(self.layer.opacity * 100))\n\n def _on_blending_change(self, event=None):\n \"\"\"Receive layer model blending mode change event and update slider.\n\n Parameters\n ----------\n event : napari.utils.event.Event, optional\n The napari event that triggered this method, by default None.\n \"\"\"\n with self.layer.events.blending.blocker():\n self.blendComboBox.setCurrentIndex(\n 
self.blendComboBox.findData(self.layer.blending)\n )\n\n def close(self):\n \"\"\"Disconnect events when widget is closing.\"\"\"\n disconnect_events(self.layer.events, self)\n for child in self.children():\n close_method = getattr(child, 'close', None)\n if close_method is not None:\n close_method()\n super().close()\n", "path": "napari/_qt/layer_controls/qt_layer_controls_base.py"}], "after_files": [{"content": "from qtpy.QtCore import Qt\nfrom qtpy.QtWidgets import QComboBox, QFrame, QGridLayout\nfrom superqt import QLabeledDoubleSlider\n\nfrom ...layers.base._base_constants import BLENDING_TRANSLATIONS\nfrom ...utils.events import disconnect_events\n\n\nclass QtLayerControls(QFrame):\n \"\"\"Superclass for all the other LayerControl classes.\n\n This class is never directly instantiated anywhere.\n\n Parameters\n ----------\n layer : napari.layers.Layer\n An instance of a napari layer.\n\n Attributes\n ----------\n blendComboBox : qtpy.QtWidgets.QComboBox\n Drowpdown widget to select blending mode of layer.\n grid_layout : qtpy.QtWidgets.QGridLayout\n Layout of Qt widget controls for the layer.\n layer : napari.layers.Layer\n An instance of a napari layer.\n opacitySlider : qtpy.QtWidgets.QSlider\n Slider controlling opacity of the layer.\n \"\"\"\n\n def __init__(self, layer):\n super().__init__()\n\n self.layer = layer\n self.layer.events.blending.connect(self._on_blending_change)\n self.layer.events.opacity.connect(self._on_opacity_change)\n\n self.setAttribute(Qt.WA_DeleteOnClose)\n\n self.setObjectName('layer')\n self.setMouseTracking(True)\n\n self.grid_layout = QGridLayout(self)\n self.grid_layout.setContentsMargins(0, 0, 0, 0)\n self.grid_layout.setSpacing(2)\n self.grid_layout.setColumnMinimumWidth(0, 86)\n self.grid_layout.setColumnStretch(1, 1)\n self.setLayout(self.grid_layout)\n\n sld = QLabeledDoubleSlider(Qt.Horizontal, parent=self)\n sld.setFocusPolicy(Qt.NoFocus)\n sld.setMinimum(0)\n sld.setMaximum(1)\n sld.setSingleStep(0.01)\n sld.valueChanged.connect(self.changeOpacity)\n self.opacitySlider = sld\n self._on_opacity_change()\n\n blend_comboBox = QComboBox(self)\n for index, (data, text) in enumerate(BLENDING_TRANSLATIONS.items()):\n data = data.value\n blend_comboBox.addItem(text, data)\n if data == self.layer.blending:\n blend_comboBox.setCurrentIndex(index)\n\n blend_comboBox.activated[str].connect(self.changeBlending)\n self.blendComboBox = blend_comboBox\n\n def changeOpacity(self, value):\n \"\"\"Change opacity value on the layer model.\n\n Parameters\n ----------\n value : float\n Opacity value for shapes.\n Input range 0 - 100 (transparent to fully opaque).\n \"\"\"\n with self.layer.events.blocker(self._on_opacity_change):\n self.layer.opacity = value\n\n def changeBlending(self, text):\n \"\"\"Change blending mode on the layer model.\n\n Parameters\n ----------\n text : str\n Name of blending mode, eg: 'translucent', 'additive', 'opaque'.\n \"\"\"\n self.layer.blending = self.blendComboBox.currentData()\n\n def _on_opacity_change(self, event=None):\n \"\"\"Receive layer model opacity change event and update opacity slider.\n\n Parameters\n ----------\n event : napari.utils.event.Event, optional\n The napari event that triggered this method, by default None.\n \"\"\"\n with self.layer.events.opacity.blocker():\n self.opacitySlider.setValue(self.layer.opacity)\n\n def _on_blending_change(self, event=None):\n \"\"\"Receive layer model blending mode change event and update slider.\n\n Parameters\n ----------\n event : napari.utils.event.Event, optional\n The 
napari event that triggered this method, by default None.\n \"\"\"\n with self.layer.events.blending.blocker():\n self.blendComboBox.setCurrentIndex(\n self.blendComboBox.findData(self.layer.blending)\n )\n\n def close(self):\n \"\"\"Disconnect events when widget is closing.\"\"\"\n disconnect_events(self.layer.events, self)\n for child in self.children():\n close_method = getattr(child, 'close', None)\n if close_method is not None:\n close_method()\n super().close()\n", "path": "napari/_qt/layer_controls/qt_layer_controls_base.py"}]}
| 1,586 | 461 |
gh_patches_debug_31134
|
rasdani/github-patches
|
git_diff
|
pyload__pyload-1535
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Plugin DlProtectCom doesn't work
Trying to download http://www.dl-protect.com/2C964B88 gives the error 'NoneType' object has no attribute 'group' 0.00 B
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `module/plugins/crypter/DlProtectCom.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import re
4 import time
5
6 from base64 import urlsafe_b64encode
7
8 from module.plugins.internal.SimpleCrypter import SimpleCrypter, create_getInfo
9
10
11 class DlProtectCom(SimpleCrypter):
12 __name__ = "DlProtectCom"
13 __type__ = "crypter"
14 __version__ = "0.03"
15
16 __pattern__ = r'https?://(?:www\.)?dl-protect\.com/((en|fr)/)?\w+'
17 __config__ = [("use_premium" , "bool", "Use premium account if available" , True),
18 ("use_subfolder" , "bool", "Save package to subfolder" , True),
19 ("subfolder_per_pack", "bool", "Create a subfolder for each package", True)]
20
21 __description__ = """Dl-protect.com decrypter plugin"""
22 __license__ = "GPLv3"
23 __authors__ = [("Walter Purcaro", "[email protected]")]
24
25
26 COOKIES = [("dl-protect.com", "l", "en")]
27
28 OFFLINE_PATTERN = r'Unfortunately, the link you are looking for is not found'
29
30
31 def getLinks(self):
32 # Direct link with redirect
33 if not re.match(r"https?://(?:www\.)?dl-protect\.com/.+", self.req.http.lastEffectiveURL):
34 return [self.req.http.lastEffectiveURL]
35
36 post_req = {'key' : re.search(r'name="key" value="(.+?)"', self.html).group(1),
37 'submitform': ""}
38
39 if "Please click on continue to see the content" in self.html:
40 post_req['submitform'] = "Continue"
41 self.wait(2)
42
43 else:
44 mstime = int(round(time.time() * 1000))
45 b64time = "_" + urlsafe_b64encode(str(mstime)).replace("=", "%3D")
46
47 post_req.update({'i' : b64time,
48 'submitform': "Decrypt+link"})
49
50 if "Password :" in self.html:
51 post_req['pwd'] = self.getPassword()
52
53 if "Security Code" in self.html:
54 captcha_id = re.search(r'/captcha\.php\?uid=(.+?)"', self.html).group(1)
55 captcha_url = "http://www.dl-protect.com/captcha.php?uid=" + captcha_id
56 captcha_code = self.decryptCaptcha(captcha_url, imgtype="gif")
57
58 post_req['secure'] = captcha_code
59
60 self.html = self.load(self.pyfile.url, post=post_req)
61
62 for errmsg in ("The password is incorrect", "The security code is incorrect"):
63 if errmsg in self.html:
64 self.fail(_(errmsg[1:]))
65
66 return re.findall(r'<a href="([^/].+?)" target="_blank">', self.html)
67
68
69 getInfo = create_getInfo(DlProtectCom)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/module/plugins/crypter/DlProtectCom.py b/module/plugins/crypter/DlProtectCom.py
--- a/module/plugins/crypter/DlProtectCom.py
+++ b/module/plugins/crypter/DlProtectCom.py
@@ -11,7 +11,7 @@
class DlProtectCom(SimpleCrypter):
__name__ = "DlProtectCom"
__type__ = "crypter"
- __version__ = "0.03"
+ __version__ = "0.04"
__pattern__ = r'https?://(?:www\.)?dl-protect\.com/((en|fr)/)?\w+'
__config__ = [("use_premium" , "bool", "Use premium account if available" , True),
@@ -36,7 +36,7 @@
post_req = {'key' : re.search(r'name="key" value="(.+?)"', self.html).group(1),
'submitform': ""}
- if "Please click on continue to see the content" in self.html:
+ if "Please click on continue to see the links" in self.html:
post_req['submitform'] = "Continue"
self.wait(2)
@@ -51,11 +51,10 @@
post_req['pwd'] = self.getPassword()
if "Security Code" in self.html:
- captcha_id = re.search(r'/captcha\.php\?uid=(.+?)"', self.html).group(1)
- captcha_url = "http://www.dl-protect.com/captcha.php?uid=" + captcha_id
- captcha_code = self.decryptCaptcha(captcha_url, imgtype="gif")
-
- post_req['secure'] = captcha_code
+ m = re.search(r'/captcha\.php\?key=(.+?)"', self.html)
+ if m:
+ captcha_code = self.decryptCaptcha("http://www.dl-protect.com/captcha.php?key=" + m.group(1), imgtype="gif")
+ post_req['secure'] = captcha_code
self.html = self.load(self.pyfile.url, post=post_req)
|
{"golden_diff": "diff --git a/module/plugins/crypter/DlProtectCom.py b/module/plugins/crypter/DlProtectCom.py\n--- a/module/plugins/crypter/DlProtectCom.py\n+++ b/module/plugins/crypter/DlProtectCom.py\n@@ -11,7 +11,7 @@\n class DlProtectCom(SimpleCrypter):\n __name__ = \"DlProtectCom\"\n __type__ = \"crypter\"\n- __version__ = \"0.03\"\n+ __version__ = \"0.04\"\n \n __pattern__ = r'https?://(?:www\\.)?dl-protect\\.com/((en|fr)/)?\\w+'\n __config__ = [(\"use_premium\" , \"bool\", \"Use premium account if available\" , True),\n@@ -36,7 +36,7 @@\n post_req = {'key' : re.search(r'name=\"key\" value=\"(.+?)\"', self.html).group(1),\n 'submitform': \"\"}\n \n- if \"Please click on continue to see the content\" in self.html:\n+ if \"Please click on continue to see the links\" in self.html:\n post_req['submitform'] = \"Continue\"\n self.wait(2)\n \n@@ -51,11 +51,10 @@\n post_req['pwd'] = self.getPassword()\n \n if \"Security Code\" in self.html:\n- captcha_id = re.search(r'/captcha\\.php\\?uid=(.+?)\"', self.html).group(1)\n- captcha_url = \"http://www.dl-protect.com/captcha.php?uid=\" + captcha_id\n- captcha_code = self.decryptCaptcha(captcha_url, imgtype=\"gif\")\n-\n- post_req['secure'] = captcha_code\n+ m = re.search(r'/captcha\\.php\\?key=(.+?)\"', self.html)\n+ if m:\n+ captcha_code = self.decryptCaptcha(\"http://www.dl-protect.com/captcha.php?key=\" + m.group(1), imgtype=\"gif\")\n+ post_req['secure'] = captcha_code\n \n self.html = self.load(self.pyfile.url, post=post_req)\n", "issue": "Plugin DlProtectCom doesn't work\nTrying to download http://www.dl-protect.com/2C964B88 gives the rror 'NoneType' object has no attribute 'group' 0.00 B\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\nimport time\n\nfrom base64 import urlsafe_b64encode\n\nfrom module.plugins.internal.SimpleCrypter import SimpleCrypter, create_getInfo\n\n\nclass DlProtectCom(SimpleCrypter):\n __name__ = \"DlProtectCom\"\n __type__ = \"crypter\"\n __version__ = \"0.03\"\n\n __pattern__ = r'https?://(?:www\\.)?dl-protect\\.com/((en|fr)/)?\\w+'\n __config__ = [(\"use_premium\" , \"bool\", \"Use premium account if available\" , True),\n (\"use_subfolder\" , \"bool\", \"Save package to subfolder\" , True),\n (\"subfolder_per_pack\", \"bool\", \"Create a subfolder for each package\", True)]\n\n __description__ = \"\"\"Dl-protect.com decrypter plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n COOKIES = [(\"dl-protect.com\", \"l\", \"en\")]\n\n OFFLINE_PATTERN = r'Unfortunately, the link you are looking for is not found'\n\n\n def getLinks(self):\n # Direct link with redirect\n if not re.match(r\"https?://(?:www\\.)?dl-protect\\.com/.+\", self.req.http.lastEffectiveURL):\n return [self.req.http.lastEffectiveURL]\n\n post_req = {'key' : re.search(r'name=\"key\" value=\"(.+?)\"', self.html).group(1),\n 'submitform': \"\"}\n\n if \"Please click on continue to see the content\" in self.html:\n post_req['submitform'] = \"Continue\"\n self.wait(2)\n\n else:\n mstime = int(round(time.time() * 1000))\n b64time = \"_\" + urlsafe_b64encode(str(mstime)).replace(\"=\", \"%3D\")\n\n post_req.update({'i' : b64time,\n 'submitform': \"Decrypt+link\"})\n\n if \"Password :\" in self.html:\n post_req['pwd'] = self.getPassword()\n\n if \"Security Code\" in self.html:\n captcha_id = re.search(r'/captcha\\.php\\?uid=(.+?)\"', self.html).group(1)\n captcha_url = \"http://www.dl-protect.com/captcha.php?uid=\" + captcha_id\n captcha_code = self.decryptCaptcha(captcha_url, 
imgtype=\"gif\")\n\n post_req['secure'] = captcha_code\n\n self.html = self.load(self.pyfile.url, post=post_req)\n\n for errmsg in (\"The password is incorrect\", \"The security code is incorrect\"):\n if errmsg in self.html:\n self.fail(_(errmsg[1:]))\n\n return re.findall(r'<a href=\"([^/].+?)\" target=\"_blank\">', self.html)\n\n\ngetInfo = create_getInfo(DlProtectCom)\n", "path": "module/plugins/crypter/DlProtectCom.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\nimport time\n\nfrom base64 import urlsafe_b64encode\n\nfrom module.plugins.internal.SimpleCrypter import SimpleCrypter, create_getInfo\n\n\nclass DlProtectCom(SimpleCrypter):\n __name__ = \"DlProtectCom\"\n __type__ = \"crypter\"\n __version__ = \"0.04\"\n\n __pattern__ = r'https?://(?:www\\.)?dl-protect\\.com/((en|fr)/)?\\w+'\n __config__ = [(\"use_premium\" , \"bool\", \"Use premium account if available\" , True),\n (\"use_subfolder\" , \"bool\", \"Save package to subfolder\" , True),\n (\"subfolder_per_pack\", \"bool\", \"Create a subfolder for each package\", True)]\n\n __description__ = \"\"\"Dl-protect.com decrypter plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n COOKIES = [(\"dl-protect.com\", \"l\", \"en\")]\n\n OFFLINE_PATTERN = r'Unfortunately, the link you are looking for is not found'\n\n\n def getLinks(self):\n # Direct link with redirect\n if not re.match(r\"https?://(?:www\\.)?dl-protect\\.com/.+\", self.req.http.lastEffectiveURL):\n return [self.req.http.lastEffectiveURL]\n\n post_req = {'key' : re.search(r'name=\"key\" value=\"(.+?)\"', self.html).group(1),\n 'submitform': \"\"}\n\n if \"Please click on continue to see the links\" in self.html:\n post_req['submitform'] = \"Continue\"\n self.wait(2)\n\n else:\n mstime = int(round(time.time() * 1000))\n b64time = \"_\" + urlsafe_b64encode(str(mstime)).replace(\"=\", \"%3D\")\n\n post_req.update({'i' : b64time,\n 'submitform': \"Decrypt+link\"})\n\n if \"Password :\" in self.html:\n post_req['pwd'] = self.getPassword()\n\n if \"Security Code\" in self.html:\n m = re.search(r'/captcha\\.php\\?key=(.+?)\"', self.html)\n if m:\n captcha_code = self.decryptCaptcha(\"http://www.dl-protect.com/captcha.php?key=\" + m.group(1), imgtype=\"gif\")\n post_req['secure'] = captcha_code\n\n self.html = self.load(self.pyfile.url, post=post_req)\n\n for errmsg in (\"The password is incorrect\", \"The security code is incorrect\"):\n if errmsg in self.html:\n self.fail(_(errmsg[1:]))\n\n return re.findall(r'<a href=\"([^/].+?)\" target=\"_blank\">', self.html)\n\n\ngetInfo = create_getInfo(DlProtectCom)\n", "path": "module/plugins/crypter/DlProtectCom.py"}]}
| 1,114 | 484 |
gh_patches_debug_13065
|
rasdani/github-patches
|
git_diff
|
openai__gym-2646
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug Report] AttributeError: 'Discrete' object has no attribute 'start'
**Describe the bug**
Change in https://github.com/openai/gym/pull/2470 introduced a bug when loading pre-trained agents with previous version of gym.
Fix is probably similar to https://github.com/DLR-RM/stable-baselines3/issues/573 ...
**Code example**
from RL Zoo CI
See https://github.com/DLR-RM/rl-baselines3-zoo/pull/210 (note: the CI now passes because I downgraded to gym 0.21)
and
https://github.com/DLR-RM/rl-baselines3-zoo/runs/5305883843?check_suite_focus=true
```
python enjoy --algo qrdqn --env Acrobot-v1
```
traceback:
```
Loading rl-trained-agents/qrdqn/Acrobot-v1_1/Acrobot-v1.zip
----------------------------- Captured stderr call -----------------------------
/opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/stable_baselines3/common/save_util.py:166: UserWarning: Could not deserialize object exploration_schedule. Consider using `custom_objects` argument to replace this object.
warnings.warn(
Traceback (most recent call last):
File "/home/runner/work/rl-baselines3-zoo/rl-baselines3-zoo/enjoy.py", line 248, in <module>
main()
File "/home/runner/work/rl-baselines3-zoo/rl-baselines3-zoo/enjoy.py", line 178, in main
model = ALGOS[algo].load(model_path, env=env, custom_objects=custom_objects, **kwargs)
File "/opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/stable_baselines3/common/base_class.py", line 709, in load
check_for_correct_spaces(env, data["observation_space"], data["action_space"])
File "/opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/stable_baselines3/common/utils.py", line 224, in check_for_correct_spaces
if action_space != env.action_space:
File "/opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/gym/spaces/discrete.py", line 50, in __eq__
and self.start == other.start
AttributeError: 'Discrete' object has no attribute 'start'
```
**System Info**
Gym 0.22
**Additional context**
Add any other context about the problem here.
### Checklist
- [x] I have checked that there is no similar [issue](https://github.com/openai/gym/issues) in the repo (**required**)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gym/spaces/discrete.py`
Content:
```
1 from typing import Optional
2
3 import numpy as np
4 from .space import Space
5
6
7 class Discrete(Space[int]):
8 r"""A discrete space in :math:`\{ 0, 1, \\dots, n-1 \}`.
9
10 A start value can be optionally specified to shift the range
11 to :math:`\{ a, a+1, \\dots, a+n-1 \}`.
12
13 Example::
14
15 >>> Discrete(2)
16 >>> Discrete(3, start=-1) # {-1, 0, 1}
17
18 """
19
20 def __init__(self, n: int, seed: Optional[int] = None, start: int = 0):
21 assert n > 0, "n (counts) have to be positive"
22 assert isinstance(start, (int, np.integer))
23 self.n = int(n)
24 self.start = int(start)
25 super().__init__((), np.int64, seed)
26
27 def sample(self) -> int:
28 return self.start + self.np_random.randint(self.n)
29
30 def contains(self, x) -> bool:
31 if isinstance(x, int):
32 as_int = x
33 elif isinstance(x, (np.generic, np.ndarray)) and (
34 x.dtype.char in np.typecodes["AllInteger"] and x.shape == ()
35 ):
36 as_int = int(x) # type: ignore
37 else:
38 return False
39 return self.start <= as_int < self.start + self.n
40
41 def __repr__(self) -> str:
42 if self.start != 0:
43 return "Discrete(%d, start=%d)" % (self.n, self.start)
44 return "Discrete(%d)" % self.n
45
46 def __eq__(self, other) -> bool:
47 return (
48 isinstance(other, Discrete)
49 and self.n == other.n
50 and self.start == other.start
51 )
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gym/spaces/discrete.py b/gym/spaces/discrete.py
--- a/gym/spaces/discrete.py
+++ b/gym/spaces/discrete.py
@@ -12,7 +12,7 @@
Example::
- >>> Discrete(2)
+ >>> Discrete(2) # {0, 1}
>>> Discrete(3, start=-1) # {-1, 0, 1}
"""
@@ -49,3 +49,17 @@
and self.n == other.n
and self.start == other.start
)
+
+ def __setstate__(self, state):
+ super().__setstate__(state)
+
+ # Don't mutate the original state
+ state = dict(state)
+
+ # Allow for loading of legacy states.
+ # See https://github.com/openai/gym/pull/2470
+ if "start" not in state:
+ state["start"] = 0
+
+ # Update our state
+ self.__dict__.update(state)
|
{"golden_diff": "diff --git a/gym/spaces/discrete.py b/gym/spaces/discrete.py\n--- a/gym/spaces/discrete.py\n+++ b/gym/spaces/discrete.py\n@@ -12,7 +12,7 @@\n \n Example::\n \n- >>> Discrete(2)\n+ >>> Discrete(2) # {0, 1}\n >>> Discrete(3, start=-1) # {-1, 0, 1}\n \n \"\"\"\n@@ -49,3 +49,17 @@\n and self.n == other.n\n and self.start == other.start\n )\n+\n+ def __setstate__(self, state):\n+ super().__setstate__(state)\n+\n+ # Don't mutate the original state\n+ state = dict(state)\n+\n+ # Allow for loading of legacy states.\n+ # See https://github.com/openai/gym/pull/2470\n+ if \"start\" not in state:\n+ state[\"start\"] = 0\n+\n+ # Update our state\n+ self.__dict__.update(state)\n", "issue": "[Bug Report] AttributeError: 'Discrete' object has no attribute 'start'\n**Describe the bug**\r\nChange in https://github.com/openai/gym/pull/2470 introduced a bug when loading pre-trained agents with previous version of gym.\r\n\r\nFix is probably similar to https://github.com/DLR-RM/stable-baselines3/issues/573 ...\r\n\r\n\r\n**Code example**\r\nfrom RL Zoo CI\r\n\r\nSee https://github.com/DLR-RM/rl-baselines3-zoo/pull/210 (note: the CI now passes because I downgraded to gym 0.21)\r\nand\r\nhttps://github.com/DLR-RM/rl-baselines3-zoo/runs/5305883843?check_suite_focus=true\r\n\r\n```\r\npython enjoy --algo qrdqn --env Acrobot-v1\r\n```\r\n\r\ntraceback:\r\n```\r\n Loading rl-trained-agents/qrdqn/Acrobot-v1_1/Acrobot-v1.zip\r\n----------------------------- Captured stderr call -----------------------------\r\n/opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/stable_baselines3/common/save_util.py:166: UserWarning: Could not deserialize object exploration_schedule. Consider using `custom_objects` argument to replace this object.\r\n warnings.warn(\r\nTraceback (most recent call last):\r\n File \"/home/runner/work/rl-baselines3-zoo/rl-baselines3-zoo/enjoy.py\", line 248, in <module>\r\n main()\r\n File \"/home/runner/work/rl-baselines3-zoo/rl-baselines3-zoo/enjoy.py\", line 178, in main\r\n model = ALGOS[algo].load(model_path, env=env, custom_objects=custom_objects, **kwargs)\r\n File \"/opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/stable_baselines3/common/base_class.py\", line 709, in load\r\n check_for_correct_spaces(env, data[\"observation_space\"], data[\"action_space\"])\r\n File \"/opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/stable_baselines3/common/utils.py\", line 224, in check_for_correct_spaces\r\n if action_space != env.action_space:\r\n File \"/opt/hostedtoolcache/Python/3.9.10/x64/lib/python3.9/site-packages/gym/spaces/discrete.py\", line 50, in __eq__\r\n and self.start == other.start\r\nAttributeError: 'Discrete' object has no attribute 'start'\r\n```\r\n\r\n**System Info**\r\nGym 0.22\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\r\n### Checklist\r\n\r\n- [x] I have checked that there is no similar [issue](https://github.com/openai/gym/issues) in the repo (**required**)\r\n\n", "before_files": [{"content": "from typing import Optional\n\nimport numpy as np\nfrom .space import Space\n\n\nclass Discrete(Space[int]):\n r\"\"\"A discrete space in :math:`\\{ 0, 1, \\\\dots, n-1 \\}`.\n\n A start value can be optionally specified to shift the range\n to :math:`\\{ a, a+1, \\\\dots, a+n-1 \\}`.\n\n Example::\n\n >>> Discrete(2)\n >>> Discrete(3, start=-1) # {-1, 0, 1}\n\n \"\"\"\n\n def __init__(self, n: int, seed: Optional[int] = None, start: int = 0):\n assert n > 0, \"n (counts) have to be 
positive\"\n assert isinstance(start, (int, np.integer))\n self.n = int(n)\n self.start = int(start)\n super().__init__((), np.int64, seed)\n\n def sample(self) -> int:\n return self.start + self.np_random.randint(self.n)\n\n def contains(self, x) -> bool:\n if isinstance(x, int):\n as_int = x\n elif isinstance(x, (np.generic, np.ndarray)) and (\n x.dtype.char in np.typecodes[\"AllInteger\"] and x.shape == ()\n ):\n as_int = int(x) # type: ignore\n else:\n return False\n return self.start <= as_int < self.start + self.n\n\n def __repr__(self) -> str:\n if self.start != 0:\n return \"Discrete(%d, start=%d)\" % (self.n, self.start)\n return \"Discrete(%d)\" % self.n\n\n def __eq__(self, other) -> bool:\n return (\n isinstance(other, Discrete)\n and self.n == other.n\n and self.start == other.start\n )\n", "path": "gym/spaces/discrete.py"}], "after_files": [{"content": "from typing import Optional\n\nimport numpy as np\nfrom .space import Space\n\n\nclass Discrete(Space[int]):\n r\"\"\"A discrete space in :math:`\\{ 0, 1, \\\\dots, n-1 \\}`.\n\n A start value can be optionally specified to shift the range\n to :math:`\\{ a, a+1, \\\\dots, a+n-1 \\}`.\n\n Example::\n\n >>> Discrete(2) # {0, 1}\n >>> Discrete(3, start=-1) # {-1, 0, 1}\n\n \"\"\"\n\n def __init__(self, n: int, seed: Optional[int] = None, start: int = 0):\n assert n > 0, \"n (counts) have to be positive\"\n assert isinstance(start, (int, np.integer))\n self.n = int(n)\n self.start = int(start)\n super().__init__((), np.int64, seed)\n\n def sample(self) -> int:\n return self.start + self.np_random.randint(self.n)\n\n def contains(self, x) -> bool:\n if isinstance(x, int):\n as_int = x\n elif isinstance(x, (np.generic, np.ndarray)) and (\n x.dtype.char in np.typecodes[\"AllInteger\"] and x.shape == ()\n ):\n as_int = int(x) # type: ignore\n else:\n return False\n return self.start <= as_int < self.start + self.n\n\n def __repr__(self) -> str:\n if self.start != 0:\n return \"Discrete(%d, start=%d)\" % (self.n, self.start)\n return \"Discrete(%d)\" % self.n\n\n def __eq__(self, other) -> bool:\n return (\n isinstance(other, Discrete)\n and self.n == other.n\n and self.start == other.start\n )\n\n def __setstate__(self, state):\n super().__setstate__(state)\n\n # Don't mutate the original state\n state = dict(state)\n\n # Allow for loading of legacy states.\n # See https://github.com/openai/gym/pull/2470\n if \"start\" not in state:\n state[\"start\"] = 0\n\n # Update our state\n self.__dict__.update(state)\n", "path": "gym/spaces/discrete.py"}]}
| 1,417 | 244 |
gh_patches_debug_34565
|
rasdani/github-patches
|
git_diff
|
pallets__click-1328
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
complex example - misleading name for context
The names `Context` and `pass_context` are misleading in the complex example, since the `Context` defined in the example shares a name with the click `Context`. Maybe a different name such as "Environment" or "Options" would be more appropriate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/complex/complex/commands/cmd_status.py`
Content:
```
1 import click
2 from complex.cli import pass_context
3
4
5 @click.command('status', short_help='Shows file changes.')
6 @pass_context
7 def cli(ctx):
8 """Shows file changes in the current working directory."""
9 ctx.log('Changed files: none')
10 ctx.vlog('bla bla bla, debug info')
11
```
Path: `examples/complex/complex/cli.py`
Content:
```
1 import os
2 import sys
3 import click
4
5
6 CONTEXT_SETTINGS = dict(auto_envvar_prefix='COMPLEX')
7
8
9 class Context(object):
10
11 def __init__(self):
12 self.verbose = False
13 self.home = os.getcwd()
14
15 def log(self, msg, *args):
16 """Logs a message to stderr."""
17 if args:
18 msg %= args
19 click.echo(msg, file=sys.stderr)
20
21 def vlog(self, msg, *args):
22 """Logs a message to stderr only if verbose is enabled."""
23 if self.verbose:
24 self.log(msg, *args)
25
26
27 pass_context = click.make_pass_decorator(Context, ensure=True)
28 cmd_folder = os.path.abspath(os.path.join(os.path.dirname(__file__),
29 'commands'))
30
31
32 class ComplexCLI(click.MultiCommand):
33
34 def list_commands(self, ctx):
35 rv = []
36 for filename in os.listdir(cmd_folder):
37 if filename.endswith('.py') and \
38 filename.startswith('cmd_'):
39 rv.append(filename[4:-3])
40 rv.sort()
41 return rv
42
43 def get_command(self, ctx, name):
44 try:
45 if sys.version_info[0] == 2:
46 name = name.encode('ascii', 'replace')
47 mod = __import__('complex.commands.cmd_' + name,
48 None, None, ['cli'])
49 except ImportError:
50 return
51 return mod.cli
52
53
54 @click.command(cls=ComplexCLI, context_settings=CONTEXT_SETTINGS)
55 @click.option('--home', type=click.Path(exists=True, file_okay=False,
56 resolve_path=True),
57 help='Changes the folder to operate on.')
58 @click.option('-v', '--verbose', is_flag=True,
59 help='Enables verbose mode.')
60 @pass_context
61 def cli(ctx, verbose, home):
62 """A complex command line interface."""
63 ctx.verbose = verbose
64 if home is not None:
65 ctx.home = home
66
```
Path: `examples/complex/complex/commands/cmd_init.py`
Content:
```
1 import click
2 from complex.cli import pass_context
3
4
5 @click.command('init', short_help='Initializes a repo.')
6 @click.argument('path', required=False, type=click.Path(resolve_path=True))
7 @pass_context
8 def cli(ctx, path):
9 """Initializes a repository."""
10 if path is None:
11 path = ctx.home
12 ctx.log('Initialized the repository in %s',
13 click.format_filename(path))
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/complex/complex/cli.py b/examples/complex/complex/cli.py
--- a/examples/complex/complex/cli.py
+++ b/examples/complex/complex/cli.py
@@ -6,7 +6,7 @@
CONTEXT_SETTINGS = dict(auto_envvar_prefix='COMPLEX')
-class Context(object):
+class Environment(object):
def __init__(self):
self.verbose = False
@@ -24,7 +24,7 @@
self.log(msg, *args)
-pass_context = click.make_pass_decorator(Context, ensure=True)
+pass_environment = click.make_pass_decorator(Environment, ensure=True)
cmd_folder = os.path.abspath(os.path.join(os.path.dirname(__file__),
'commands'))
@@ -57,7 +57,7 @@
help='Changes the folder to operate on.')
@click.option('-v', '--verbose', is_flag=True,
help='Enables verbose mode.')
-@pass_context
+@pass_environment
def cli(ctx, verbose, home):
"""A complex command line interface."""
ctx.verbose = verbose
diff --git a/examples/complex/complex/commands/cmd_init.py b/examples/complex/complex/commands/cmd_init.py
--- a/examples/complex/complex/commands/cmd_init.py
+++ b/examples/complex/complex/commands/cmd_init.py
@@ -1,10 +1,10 @@
import click
-from complex.cli import pass_context
+from complex.cli import pass_environment
@click.command('init', short_help='Initializes a repo.')
@click.argument('path', required=False, type=click.Path(resolve_path=True))
-@pass_context
+@pass_environment
def cli(ctx, path):
"""Initializes a repository."""
if path is None:
diff --git a/examples/complex/complex/commands/cmd_status.py b/examples/complex/complex/commands/cmd_status.py
--- a/examples/complex/complex/commands/cmd_status.py
+++ b/examples/complex/complex/commands/cmd_status.py
@@ -1,9 +1,9 @@
import click
-from complex.cli import pass_context
+from complex.cli import pass_environment
@click.command('status', short_help='Shows file changes.')
-@pass_context
+@pass_environment
def cli(ctx):
"""Shows file changes in the current working directory."""
ctx.log('Changed files: none')
|
{"golden_diff": "diff --git a/examples/complex/complex/cli.py b/examples/complex/complex/cli.py\n--- a/examples/complex/complex/cli.py\n+++ b/examples/complex/complex/cli.py\n@@ -6,7 +6,7 @@\n CONTEXT_SETTINGS = dict(auto_envvar_prefix='COMPLEX')\n \n \n-class Context(object):\n+class Environment(object):\n \n def __init__(self):\n self.verbose = False\n@@ -24,7 +24,7 @@\n self.log(msg, *args)\n \n \n-pass_context = click.make_pass_decorator(Context, ensure=True)\n+pass_environment = click.make_pass_decorator(Environment, ensure=True)\n cmd_folder = os.path.abspath(os.path.join(os.path.dirname(__file__),\n 'commands'))\n \n@@ -57,7 +57,7 @@\n help='Changes the folder to operate on.')\n @click.option('-v', '--verbose', is_flag=True,\n help='Enables verbose mode.')\n-@pass_context\n+@pass_environment\n def cli(ctx, verbose, home):\n \"\"\"A complex command line interface.\"\"\"\n ctx.verbose = verbose\ndiff --git a/examples/complex/complex/commands/cmd_init.py b/examples/complex/complex/commands/cmd_init.py\n--- a/examples/complex/complex/commands/cmd_init.py\n+++ b/examples/complex/complex/commands/cmd_init.py\n@@ -1,10 +1,10 @@\n import click\n-from complex.cli import pass_context\n+from complex.cli import pass_environment\n \n \n @click.command('init', short_help='Initializes a repo.')\n @click.argument('path', required=False, type=click.Path(resolve_path=True))\n-@pass_context\n+@pass_environment\n def cli(ctx, path):\n \"\"\"Initializes a repository.\"\"\"\n if path is None:\ndiff --git a/examples/complex/complex/commands/cmd_status.py b/examples/complex/complex/commands/cmd_status.py\n--- a/examples/complex/complex/commands/cmd_status.py\n+++ b/examples/complex/complex/commands/cmd_status.py\n@@ -1,9 +1,9 @@\n import click\n-from complex.cli import pass_context\n+from complex.cli import pass_environment\n \n \n @click.command('status', short_help='Shows file changes.')\n-@pass_context\n+@pass_environment\n def cli(ctx):\n \"\"\"Shows file changes in the current working directory.\"\"\"\n ctx.log('Changed files: none')\n", "issue": "complex example - misleading name for context\nThe name `Context` and `pass_context` are misleading in the complex example, since the `Context` defined in the example shares a name with the click `Context`. 
Maybe a different name such as \"Environment\" or \"Options\" would be more appropriate.\n", "before_files": [{"content": "import click\nfrom complex.cli import pass_context\n\n\[email protected]('status', short_help='Shows file changes.')\n@pass_context\ndef cli(ctx):\n \"\"\"Shows file changes in the current working directory.\"\"\"\n ctx.log('Changed files: none')\n ctx.vlog('bla bla bla, debug info')\n", "path": "examples/complex/complex/commands/cmd_status.py"}, {"content": "import os\nimport sys\nimport click\n\n\nCONTEXT_SETTINGS = dict(auto_envvar_prefix='COMPLEX')\n\n\nclass Context(object):\n\n def __init__(self):\n self.verbose = False\n self.home = os.getcwd()\n\n def log(self, msg, *args):\n \"\"\"Logs a message to stderr.\"\"\"\n if args:\n msg %= args\n click.echo(msg, file=sys.stderr)\n\n def vlog(self, msg, *args):\n \"\"\"Logs a message to stderr only if verbose is enabled.\"\"\"\n if self.verbose:\n self.log(msg, *args)\n\n\npass_context = click.make_pass_decorator(Context, ensure=True)\ncmd_folder = os.path.abspath(os.path.join(os.path.dirname(__file__),\n 'commands'))\n\n\nclass ComplexCLI(click.MultiCommand):\n\n def list_commands(self, ctx):\n rv = []\n for filename in os.listdir(cmd_folder):\n if filename.endswith('.py') and \\\n filename.startswith('cmd_'):\n rv.append(filename[4:-3])\n rv.sort()\n return rv\n\n def get_command(self, ctx, name):\n try:\n if sys.version_info[0] == 2:\n name = name.encode('ascii', 'replace')\n mod = __import__('complex.commands.cmd_' + name,\n None, None, ['cli'])\n except ImportError:\n return\n return mod.cli\n\n\[email protected](cls=ComplexCLI, context_settings=CONTEXT_SETTINGS)\[email protected]('--home', type=click.Path(exists=True, file_okay=False,\n resolve_path=True),\n help='Changes the folder to operate on.')\[email protected]('-v', '--verbose', is_flag=True,\n help='Enables verbose mode.')\n@pass_context\ndef cli(ctx, verbose, home):\n \"\"\"A complex command line interface.\"\"\"\n ctx.verbose = verbose\n if home is not None:\n ctx.home = home\n", "path": "examples/complex/complex/cli.py"}, {"content": "import click\nfrom complex.cli import pass_context\n\n\[email protected]('init', short_help='Initializes a repo.')\[email protected]('path', required=False, type=click.Path(resolve_path=True))\n@pass_context\ndef cli(ctx, path):\n \"\"\"Initializes a repository.\"\"\"\n if path is None:\n path = ctx.home\n ctx.log('Initialized the repository in %s',\n click.format_filename(path))\n", "path": "examples/complex/complex/commands/cmd_init.py"}], "after_files": [{"content": "import click\nfrom complex.cli import pass_environment\n\n\[email protected]('status', short_help='Shows file changes.')\n@pass_environment\ndef cli(ctx):\n \"\"\"Shows file changes in the current working directory.\"\"\"\n ctx.log('Changed files: none')\n ctx.vlog('bla bla bla, debug info')\n", "path": "examples/complex/complex/commands/cmd_status.py"}, {"content": "import os\nimport sys\nimport click\n\n\nCONTEXT_SETTINGS = dict(auto_envvar_prefix='COMPLEX')\n\n\nclass Environment(object):\n\n def __init__(self):\n self.verbose = False\n self.home = os.getcwd()\n\n def log(self, msg, *args):\n \"\"\"Logs a message to stderr.\"\"\"\n if args:\n msg %= args\n click.echo(msg, file=sys.stderr)\n\n def vlog(self, msg, *args):\n \"\"\"Logs a message to stderr only if verbose is enabled.\"\"\"\n if self.verbose:\n self.log(msg, *args)\n\n\npass_environment = click.make_pass_decorator(Environment, ensure=True)\ncmd_folder = 
os.path.abspath(os.path.join(os.path.dirname(__file__),\n 'commands'))\n\n\nclass ComplexCLI(click.MultiCommand):\n\n def list_commands(self, ctx):\n rv = []\n for filename in os.listdir(cmd_folder):\n if filename.endswith('.py') and \\\n filename.startswith('cmd_'):\n rv.append(filename[4:-3])\n rv.sort()\n return rv\n\n def get_command(self, ctx, name):\n try:\n if sys.version_info[0] == 2:\n name = name.encode('ascii', 'replace')\n mod = __import__('complex.commands.cmd_' + name,\n None, None, ['cli'])\n except ImportError:\n return\n return mod.cli\n\n\[email protected](cls=ComplexCLI, context_settings=CONTEXT_SETTINGS)\[email protected]('--home', type=click.Path(exists=True, file_okay=False,\n resolve_path=True),\n help='Changes the folder to operate on.')\[email protected]('-v', '--verbose', is_flag=True,\n help='Enables verbose mode.')\n@pass_environment\ndef cli(ctx, verbose, home):\n \"\"\"A complex command line interface.\"\"\"\n ctx.verbose = verbose\n if home is not None:\n ctx.home = home\n", "path": "examples/complex/complex/cli.py"}, {"content": "import click\nfrom complex.cli import pass_environment\n\n\[email protected]('init', short_help='Initializes a repo.')\[email protected]('path', required=False, type=click.Path(resolve_path=True))\n@pass_environment\ndef cli(ctx, path):\n \"\"\"Initializes a repository.\"\"\"\n if path is None:\n path = ctx.home\n ctx.log('Initialized the repository in %s',\n click.format_filename(path))\n", "path": "examples/complex/complex/commands/cmd_init.py"}]}
| 1,076 | 499 |
gh_patches_debug_249
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-3790
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The aws-cli bundle package uses an insecure version of PyYAML
### awscli version:<br>
`aws-cli/1.16.52 Python/2.7.15 Linux/4.14.77-69.57.amzn1.x86_64 exec-env/AWS_ECS_EC2 botocore/1.12.42`
[NVD entry](https://nvd.nist.gov/vuln/detail/CVE-2017-18342)
This issue was found when vulnerability alerts started appearing in Twistlock in response to scans of Docker images that we are using in several applications. The generic error found in these outlines is as such:<br>
```
Impacted versions: <=3.13
In PyYAML before 4.1, the yaml.load() API could execute arbitrary code. In other words, yaml.safe_load is not used.
```
These images are not natively using PyYAML, so this led us to a Docker `RUN` line in a Dockerfile that executed a script that contains a line of code that executes the installation of the `aws-cli` bundle using the following URL:<br>
`https://s3.amazonaws.com/aws-cli/awscli-bundle.zip`
Unpacking this archive shows a list of package dependencies that includes the vulnerable version of PyYAML:<br>
`awscli-bundle/packages/PyYAML-3.13.tar.gz`
The latest (and actually secure) version of PyYAML appears to be 4.1 according to the developer via the [GitHub repo](https://github.com/yaml/pyyaml).
### Request
Is it possible to have the patched version of PyYAML added to this bundle to avoid this vulnerability?
Thank you!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/ecs/filehelpers.py`
Content:
```
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13
14 import json
15 import yaml
16
17 from awscli.customizations.ecs import exceptions
18
19 MAX_CHAR_LENGTH = 46
20 APP_PREFIX = 'AppECS-'
21 DGP_PREFIX = 'DgpECS-'
22
23
24 def find_required_key(resource_name, obj, key):
25
26 if obj is None:
27 raise exceptions.MissingPropertyError(
28 resource=resource_name, prop_name=key)
29
30 result = _get_case_insensitive_key(obj, key)
31
32 if result is None:
33 raise exceptions.MissingPropertyError(
34 resource=resource_name, prop_name=key)
35 else:
36 return result
37
38
39 def _get_case_insensitive_key(target_obj, target_key):
40 key_to_match = target_key.lower()
41 key_list = target_obj.keys()
42
43 for key in key_list:
44 if key.lower() == key_to_match:
45 return key
46
47
48 def get_app_name(service, cluster, app_value):
49 if app_value is not None:
50 return app_value
51 else:
52 suffix = _get_ecs_suffix(service, cluster)
53 return APP_PREFIX + suffix
54
55
56 def get_cluster_name_from_arn(arn):
57 return arn.split('/')[1]
58
59
60 def get_deploy_group_name(service, cluster, dg_value):
61 if dg_value is not None:
62 return dg_value
63 else:
64 suffix = _get_ecs_suffix(service, cluster)
65 return DGP_PREFIX + suffix
66
67
68 def _get_ecs_suffix(service, cluster):
69 if cluster is None:
70 cluster_name = 'default'
71 else:
72 cluster_name = cluster[:MAX_CHAR_LENGTH]
73
74 return cluster_name + '-' + service[:MAX_CHAR_LENGTH]
75
76
77 def parse_appspec(appspec_str):
78 try:
79 return json.loads(appspec_str)
80 except ValueError:
81 return yaml.load(appspec_str)
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awscli/customizations/ecs/filehelpers.py b/awscli/customizations/ecs/filehelpers.py
--- a/awscli/customizations/ecs/filehelpers.py
+++ b/awscli/customizations/ecs/filehelpers.py
@@ -78,4 +78,4 @@
try:
return json.loads(appspec_str)
except ValueError:
- return yaml.load(appspec_str)
+ return yaml.safe_load(appspec_str)
|
{"golden_diff": "diff --git a/awscli/customizations/ecs/filehelpers.py b/awscli/customizations/ecs/filehelpers.py\n--- a/awscli/customizations/ecs/filehelpers.py\n+++ b/awscli/customizations/ecs/filehelpers.py\n@@ -78,4 +78,4 @@\n try:\n return json.loads(appspec_str)\n except ValueError:\n- return yaml.load(appspec_str)\n+ return yaml.safe_load(appspec_str)\n", "issue": "The aws-cli bundle package uses an insecure version of PyYAML\n### awscli version:<br>\r\n`aws-cli/1.16.52 Python/2.7.15 Linux/4.14.77-69.57.amzn1.x86_64 exec-env/AWS_ECS_EC2 botocore/1.12.42`\r\n\r\n[NVD entry](https://nvd.nist.gov/vuln/detail/CVE-2017-18342)\r\n\r\nThis issue was found when vulnerability alerts started appearing in Twistlock in response to scans of Docker images that we are using in several applications. The generic error found in these outlines is as such:<br>\r\n\r\n```\r\nImpacted versions: <=3.13\r\nIn PyYAML before 4.1, the yaml.load() API could execute arbitrary code. In other words, yaml.safe_load is not used.\r\n```\r\n\r\nThese images are not natively using PyYAML, so this led us to a Docker `RUN` line in a Dockerfile that executed a script that contains a line of code that executes the installation of the `aws-cli` bundle using the following URL:<br>\r\n\r\n`https://s3.amazonaws.com/aws-cli/awscli-bundle.zip`\r\n\r\nUnpacking this archive shows a list of package dependencies that includes the vulnerable version of PyYAML:<br>\r\n\r\n`awscli-bundle/packages/PyYAML-3.13.tar.gz`\r\n\r\nThe latest (and actually secure) version of PyYAML appears to be 4.1 according to the developer via the [GitHub repo](https://github.com/yaml/pyyaml).\r\n\r\n### Request\r\n\r\nIs it possible to have the patched version of PyYAML added to this bundle to avoid this vulnerability?\r\n\r\nThank you!\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\n\nimport json\nimport yaml\n\nfrom awscli.customizations.ecs import exceptions\n\nMAX_CHAR_LENGTH = 46\nAPP_PREFIX = 'AppECS-'\nDGP_PREFIX = 'DgpECS-'\n\n\ndef find_required_key(resource_name, obj, key):\n\n if obj is None:\n raise exceptions.MissingPropertyError(\n resource=resource_name, prop_name=key)\n\n result = _get_case_insensitive_key(obj, key)\n\n if result is None:\n raise exceptions.MissingPropertyError(\n resource=resource_name, prop_name=key)\n else:\n return result\n\n\ndef _get_case_insensitive_key(target_obj, target_key):\n key_to_match = target_key.lower()\n key_list = target_obj.keys()\n\n for key in key_list:\n if key.lower() == key_to_match:\n return key\n\n\ndef get_app_name(service, cluster, app_value):\n if app_value is not None:\n return app_value\n else:\n suffix = _get_ecs_suffix(service, cluster)\n return APP_PREFIX + suffix\n\n\ndef get_cluster_name_from_arn(arn):\n return arn.split('/')[1]\n\n\ndef get_deploy_group_name(service, cluster, dg_value):\n if dg_value is not None:\n return dg_value\n else:\n suffix = _get_ecs_suffix(service, cluster)\n return DGP_PREFIX + suffix\n\n\ndef _get_ecs_suffix(service, cluster):\n if cluster is None:\n cluster_name = 'default'\n else:\n cluster_name = cluster[:MAX_CHAR_LENGTH]\n\n return cluster_name + '-' + service[:MAX_CHAR_LENGTH]\n\n\ndef parse_appspec(appspec_str):\n try:\n return json.loads(appspec_str)\n except ValueError:\n return yaml.load(appspec_str)\n", "path": "awscli/customizations/ecs/filehelpers.py"}], "after_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\n\nimport json\nimport yaml\n\nfrom awscli.customizations.ecs import exceptions\n\nMAX_CHAR_LENGTH = 46\nAPP_PREFIX = 'AppECS-'\nDGP_PREFIX = 'DgpECS-'\n\n\ndef find_required_key(resource_name, obj, key):\n\n if obj is None:\n raise exceptions.MissingPropertyError(\n resource=resource_name, prop_name=key)\n\n result = _get_case_insensitive_key(obj, key)\n\n if result is None:\n raise exceptions.MissingPropertyError(\n resource=resource_name, prop_name=key)\n else:\n return result\n\n\ndef _get_case_insensitive_key(target_obj, target_key):\n key_to_match = target_key.lower()\n key_list = target_obj.keys()\n\n for key in key_list:\n if key.lower() == key_to_match:\n return key\n\n\ndef get_app_name(service, cluster, app_value):\n if app_value is not None:\n return app_value\n else:\n suffix = _get_ecs_suffix(service, cluster)\n return APP_PREFIX + suffix\n\n\ndef get_cluster_name_from_arn(arn):\n return arn.split('/')[1]\n\n\ndef get_deploy_group_name(service, cluster, dg_value):\n if dg_value is not None:\n return dg_value\n else:\n suffix = _get_ecs_suffix(service, cluster)\n return DGP_PREFIX + suffix\n\n\ndef _get_ecs_suffix(service, cluster):\n if cluster is None:\n cluster_name = 'default'\n else:\n cluster_name = cluster[:MAX_CHAR_LENGTH]\n\n return cluster_name + '-' + service[:MAX_CHAR_LENGTH]\n\n\ndef parse_appspec(appspec_str):\n try:\n return json.loads(appspec_str)\n except ValueError:\n return yaml.safe_load(appspec_str)\n", "path": "awscli/customizations/ecs/filehelpers.py"}]}
| 1,318 | 94 |
gh_patches_debug_42153
|
rasdani/github-patches
|
git_diff
|
lhotse-speech__lhotse-5
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
comment
would be nice to have a comment here mentioning that DummySet contains everything.
https://github.com/pzelasko/lhotse/blob/7555df605def57836c9454ae44aac95c504d86b0/lhotse/audio.py#L77
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lhotse/utils.py`
Content:
```
1 from pathlib import Path
2 from typing import Union
3
4 Pathlike = Union[Path, str]
5
6 INT16MAX = 32768
7
8
9 class DummySet:
10 def __contains__(self, item):
11 return True
12
13 def intersection(self, iterable):
14 return True
15
```
Path: `lhotse/audio.py`
Content:
```
1 from dataclasses import dataclass, asdict
2 from subprocess import run, PIPE
3 from typing import List, Optional, Dict, Union
4
5 import librosa
6 import numpy as np
7 import yaml
8
9 from lhotse.utils import Pathlike, INT16MAX, DummySet
10
11 Channels = Union[int, List[int]]
12
13
14 @dataclass
15 class AudioSource:
16 """
17 AudioSource represents audio data that can be retrieved from somewhere.
18 Supported sources of audio are currently:
19 - a file (possibly multi-channel)
20 - a command/unix pipe (single-channel only)
21 - a collection of any of the above (see AudioSourceCollection)
22 """
23 type: str
24 channel_ids: List[int]
25 source: str
26
27 def load_audio(
28 self,
29 offset_seconds: float = 0.0,
30 duration_seconds: Optional[float] = None
31 ) -> np.ndarray:
32 assert self.type in ('file', 'command')
33
34 if self.type == 'file':
35 # TODO(pzelasko): make sure that librosa loads multi-channel audio
36 # in the expected format (n_channels, n_samples)
37 return librosa.load(
38 self.source,
39 sr=None, # 'None' uses the native sampling rate
40 offset=offset_seconds,
41 duration=duration_seconds
42 )[0] # discard returned sampling rate
43
44 # TODO(pzelasko): the following naively assumes we're dealing with raw PCM...
45 # not sure if that's how we should do it
46 # also, how should we support chunking for commands?
47 raw_audio = run(self.source, shell=True, stdout=PIPE).stdout
48 int16_audio = np.frombuffer(raw_audio, dtype=np.int16)
49 return int16_audio / INT16MAX
50
51
52 @dataclass
53 class Recording:
54 """
55 Recording represents an AudioSource along with some metadata.
56 """
57 id: str
58 sources: List[AudioSource]
59 sampling_rate: int
60 num_samples: int
61 duration_seconds: float
62
63 def __post_init__(self):
64 self.sources = [AudioSource(**s) if isinstance(s, dict) else s for s in self.sources]
65
66 @property
67 def num_channels(self):
68 return sum(len(source.channel_ids) for source in self.sources)
69
70 def load_audio(
71 self,
72 channels: Optional[Channels] = None,
73 offset_seconds: float = 0.0,
74 duration_seconds: Optional[float] = None
75 ) -> np.ndarray:
76 if channels is None:
77 channels = DummySet()
78 elif isinstance(channels, int):
79 channels = frozenset([channels])
80 else:
81 channels = frozenset(channels)
82
83 samples_per_source = []
84 for source in self.sources:
85 # Case: source not requested
86 if not channels.intersection(source.channel_ids):
87 continue
88 samples = source.load_audio(offset_seconds=offset_seconds, duration_seconds=duration_seconds)
89
90 # Case: two-channel audio file but only one channel requested
91 # it might not be optimal to load all channels, but IDK if there's anything we can do about it
92 channels_to_remove = [
93 idx for idx, cid in enumerate(source.channel_ids)
94 if cid not in channels
95 ]
96 if channels_to_remove:
97 samples = np.delete(samples, channels_to_remove, axis=0)
98 samples_per_source.append(samples)
99
100 # shapes: single-channel (n_samples); multi-channel (n_channels, n_samples)
101 return np.vstack(samples_per_source) if len(samples_per_source) > 1 else samples_per_source[0]
102
103
104 @dataclass
105 class AudioSet:
106 """
107 AudioSet represents a dataset of recordings. It does not contain any annotation -
108 just the information needed to retrieve a recording (possibly multi-channel, from files
109 or from shell commands and pipes) and some metadata for each of them.
110
111 It also supports (de)serialization to/from YAML and takes care of mapping between
112 rich Python classes and YAML primitives during conversion.
113 """
114 recordings: Dict[str, Recording]
115
116 @staticmethod
117 def from_yaml(path: Pathlike) -> 'AudioSet':
118 with open(path) as f:
119 recordings = (Recording(**raw_rec) for raw_rec in yaml.safe_load(f))
120 return AudioSet(recordings={r.id: r for r in recordings})
121
122 def to_yaml(self, path: Pathlike):
123 with open(path, 'w') as f:
124 yaml.safe_dump([asdict(r) for r in self.recordings.values()], stream=f)
125
126 def load_audio(
127 self,
128 recording_id: str,
129 channels: Optional[Channels] = None,
130 offset_seconds: float = 0.0,
131 duration_seconds: Optional[float] = None
132 ) -> np.ndarray:
133 return self.recordings[recording_id].load_audio(
134 channels=channels,
135 offset_seconds=offset_seconds,
136 duration_seconds=duration_seconds
137 )
138
139 def num_channels(self, recording_id: str) -> int:
140 return self.recordings[recording_id].num_channels
141
142 def sampling_rate(self, recording_id: str) -> int:
143 return self.recordings[recording_id].sampling_rate
144
145 def num_samples(self, recording_id: str) -> int:
146 return self.recordings[recording_id].num_samples
147
148 def duration_seconds(self, recording_id: str) -> float:
149 return self.recordings[recording_id].duration_seconds
150
151 def __iter__(self):
152 return iter(self.recordings.values())
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lhotse/audio.py b/lhotse/audio.py
--- a/lhotse/audio.py
+++ b/lhotse/audio.py
@@ -1,4 +1,6 @@
from dataclasses import dataclass, asdict
+from dataclasses import dataclass, asdict
+from io import BytesIO
from subprocess import run, PIPE
from typing import List, Optional, Dict, Union
@@ -6,7 +8,7 @@
import numpy as np
import yaml
-from lhotse.utils import Pathlike, INT16MAX, DummySet
+from lhotse.utils import Pathlike, SetContainingAnything
Channels = Union[int, List[int]]
@@ -16,9 +18,8 @@
"""
AudioSource represents audio data that can be retrieved from somewhere.
Supported sources of audio are currently:
- - a file (possibly multi-channel)
- - a command/unix pipe (single-channel only)
- - a collection of any of the above (see AudioSourceCollection)
+ - a file (formats supported by librosa, possibly multi-channel)
+ - a command/unix pipe (must be WAVE, possibly multi-channel)
"""
type: str
channel_ids: List[int]
@@ -29,24 +30,31 @@
offset_seconds: float = 0.0,
duration_seconds: Optional[float] = None
) -> np.ndarray:
+ """
+ Load the AudioSource (both files and commands) with librosa,
+ accounting for many audio formats and multi-channel inputs.
+ Returns numpy array with shapes: (n_samples) for single-channel,
+ (n_channels, n_samples) for multi-channel.
+ """
assert self.type in ('file', 'command')
- if self.type == 'file':
- # TODO(pzelasko): make sure that librosa loads multi-channel audio
- # in the expected format (n_channels, n_samples)
- return librosa.load(
- self.source,
- sr=None, # 'None' uses the native sampling rate
- offset=offset_seconds,
- duration=duration_seconds
- )[0] # discard returned sampling rate
+ if self.type == 'command':
+ if offset_seconds != 0.0 or duration_seconds is not None:
+ # TODO(pzelasko): How should we support chunking for commands?
+ # We risk being very inefficient when reading many chunks from the same file
+ # without some caching scheme, because we'll be re-running commands.
+ raise ValueError("Reading audio chunks from command AudioSource type is currently not supported.")
+ source = BytesIO(run(self.source, shell=True, stdout=PIPE).stdout)
+ else:
+ source = self.source
- # TODO(pzelasko): the following naively assumes we're dealing with raw PCM...
- # not sure if that's how we should do it
- # also, how should we support chunking for commands?
- raw_audio = run(self.source, shell=True, stdout=PIPE).stdout
- int16_audio = np.frombuffer(raw_audio, dtype=np.int16)
- return int16_audio / INT16MAX
+ return librosa.load(
+ source,
+ sr=None, # 'None' uses the native sampling rate
+ mono=False, # Retain multi-channel if it's there
+ offset=offset_seconds,
+ duration=duration_seconds
+ )[0] # discard returned sampling rate
@dataclass
@@ -74,7 +82,7 @@
duration_seconds: Optional[float] = None
) -> np.ndarray:
if channels is None:
- channels = DummySet()
+ channels = SetContainingAnything()
elif isinstance(channels, int):
channels = frozenset([channels])
else:
diff --git a/lhotse/utils.py b/lhotse/utils.py
--- a/lhotse/utils.py
+++ b/lhotse/utils.py
@@ -6,7 +6,7 @@
INT16MAX = 32768
-class DummySet:
+class SetContainingAnything:
def __contains__(self, item):
return True
|
{"golden_diff": "diff --git a/lhotse/audio.py b/lhotse/audio.py\n--- a/lhotse/audio.py\n+++ b/lhotse/audio.py\n@@ -1,4 +1,6 @@\n from dataclasses import dataclass, asdict\n+from dataclasses import dataclass, asdict\n+from io import BytesIO\n from subprocess import run, PIPE\n from typing import List, Optional, Dict, Union\n \n@@ -6,7 +8,7 @@\n import numpy as np\n import yaml\n \n-from lhotse.utils import Pathlike, INT16MAX, DummySet\n+from lhotse.utils import Pathlike, SetContainingAnything\n \n Channels = Union[int, List[int]]\n \n@@ -16,9 +18,8 @@\n \"\"\"\n AudioSource represents audio data that can be retrieved from somewhere.\n Supported sources of audio are currently:\n- - a file (possibly multi-channel)\n- - a command/unix pipe (single-channel only)\n- - a collection of any of the above (see AudioSourceCollection)\n+ - a file (formats supported by librosa, possibly multi-channel)\n+ - a command/unix pipe (must be WAVE, possibly multi-channel)\n \"\"\"\n type: str\n channel_ids: List[int]\n@@ -29,24 +30,31 @@\n offset_seconds: float = 0.0,\n duration_seconds: Optional[float] = None\n ) -> np.ndarray:\n+ \"\"\"\n+ Load the AudioSource (both files and commands) with librosa,\n+ accounting for many audio formats and multi-channel inputs.\n+ Returns numpy array with shapes: (n_samples) for single-channel,\n+ (n_channels, n_samples) for multi-channel.\n+ \"\"\"\n assert self.type in ('file', 'command')\n \n- if self.type == 'file':\n- # TODO(pzelasko): make sure that librosa loads multi-channel audio\n- # in the expected format (n_channels, n_samples)\n- return librosa.load(\n- self.source,\n- sr=None, # 'None' uses the native sampling rate\n- offset=offset_seconds,\n- duration=duration_seconds\n- )[0] # discard returned sampling rate\n+ if self.type == 'command':\n+ if offset_seconds != 0.0 or duration_seconds is not None:\n+ # TODO(pzelasko): How should we support chunking for commands?\n+ # We risk being very inefficient when reading many chunks from the same file\n+ # without some caching scheme, because we'll be re-running commands.\n+ raise ValueError(\"Reading audio chunks from command AudioSource type is currently not supported.\")\n+ source = BytesIO(run(self.source, shell=True, stdout=PIPE).stdout)\n+ else:\n+ source = self.source\n \n- # TODO(pzelasko): the following naively assumes we're dealing with raw PCM...\n- # not sure if that's how we should do it\n- # also, how should we support chunking for commands?\n- raw_audio = run(self.source, shell=True, stdout=PIPE).stdout\n- int16_audio = np.frombuffer(raw_audio, dtype=np.int16)\n- return int16_audio / INT16MAX\n+ return librosa.load(\n+ source,\n+ sr=None, # 'None' uses the native sampling rate\n+ mono=False, # Retain multi-channel if it's there\n+ offset=offset_seconds,\n+ duration=duration_seconds\n+ )[0] # discard returned sampling rate\n \n \n @dataclass\n@@ -74,7 +82,7 @@\n duration_seconds: Optional[float] = None\n ) -> np.ndarray:\n if channels is None:\n- channels = DummySet()\n+ channels = SetContainingAnything()\n elif isinstance(channels, int):\n channels = frozenset([channels])\n else:\ndiff --git a/lhotse/utils.py b/lhotse/utils.py\n--- a/lhotse/utils.py\n+++ b/lhotse/utils.py\n@@ -6,7 +6,7 @@\n INT16MAX = 32768\n \n \n-class DummySet:\n+class SetContainingAnything:\n def __contains__(self, item):\n return True\n", "issue": "comment\nwould be nice to have a comment here mentioning that DummySet contains 
everything.\r\n\r\nhttps://github.com/pzelasko/lhotse/blob/7555df605def57836c9454ae44aac95c504d86b0/lhotse/audio.py#L77\n", "before_files": [{"content": "from pathlib import Path\nfrom typing import Union\n\nPathlike = Union[Path, str]\n\nINT16MAX = 32768\n\n\nclass DummySet:\n def __contains__(self, item):\n return True\n\n def intersection(self, iterable):\n return True\n", "path": "lhotse/utils.py"}, {"content": "from dataclasses import dataclass, asdict\nfrom subprocess import run, PIPE\nfrom typing import List, Optional, Dict, Union\n\nimport librosa\nimport numpy as np\nimport yaml\n\nfrom lhotse.utils import Pathlike, INT16MAX, DummySet\n\nChannels = Union[int, List[int]]\n\n\n@dataclass\nclass AudioSource:\n \"\"\"\n AudioSource represents audio data that can be retrieved from somewhere.\n Supported sources of audio are currently:\n - a file (possibly multi-channel)\n - a command/unix pipe (single-channel only)\n - a collection of any of the above (see AudioSourceCollection)\n \"\"\"\n type: str\n channel_ids: List[int]\n source: str\n\n def load_audio(\n self,\n offset_seconds: float = 0.0,\n duration_seconds: Optional[float] = None\n ) -> np.ndarray:\n assert self.type in ('file', 'command')\n\n if self.type == 'file':\n # TODO(pzelasko): make sure that librosa loads multi-channel audio\n # in the expected format (n_channels, n_samples)\n return librosa.load(\n self.source,\n sr=None, # 'None' uses the native sampling rate\n offset=offset_seconds,\n duration=duration_seconds\n )[0] # discard returned sampling rate\n\n # TODO(pzelasko): the following naively assumes we're dealing with raw PCM...\n # not sure if that's how we should do it\n # also, how should we support chunking for commands?\n raw_audio = run(self.source, shell=True, stdout=PIPE).stdout\n int16_audio = np.frombuffer(raw_audio, dtype=np.int16)\n return int16_audio / INT16MAX\n\n\n@dataclass\nclass Recording:\n \"\"\"\n Recording represents an AudioSource along with some metadata.\n \"\"\"\n id: str\n sources: List[AudioSource]\n sampling_rate: int\n num_samples: int\n duration_seconds: float\n\n def __post_init__(self):\n self.sources = [AudioSource(**s) if isinstance(s, dict) else s for s in self.sources]\n\n @property\n def num_channels(self):\n return sum(len(source.channel_ids) for source in self.sources)\n\n def load_audio(\n self,\n channels: Optional[Channels] = None,\n offset_seconds: float = 0.0,\n duration_seconds: Optional[float] = None\n ) -> np.ndarray:\n if channels is None:\n channels = DummySet()\n elif isinstance(channels, int):\n channels = frozenset([channels])\n else:\n channels = frozenset(channels)\n\n samples_per_source = []\n for source in self.sources:\n # Case: source not requested\n if not channels.intersection(source.channel_ids):\n continue\n samples = source.load_audio(offset_seconds=offset_seconds, duration_seconds=duration_seconds)\n\n # Case: two-channel audio file but only one channel requested\n # it might not be optimal to load all channels, but IDK if there's anything we can do about it\n channels_to_remove = [\n idx for idx, cid in enumerate(source.channel_ids)\n if cid not in channels\n ]\n if channels_to_remove:\n samples = np.delete(samples, channels_to_remove, axis=0)\n samples_per_source.append(samples)\n\n # shapes: single-channel (n_samples); multi-channel (n_channels, n_samples)\n return np.vstack(samples_per_source) if len(samples_per_source) > 1 else samples_per_source[0]\n\n\n@dataclass\nclass AudioSet:\n \"\"\"\n AudioSet represents a dataset of recordings. 
It does not contain any annotation -\n just the information needed to retrieve a recording (possibly multi-channel, from files\n or from shell commands and pipes) and some metadata for each of them.\n\n It also supports (de)serialization to/from YAML and takes care of mapping between\n rich Python classes and YAML primitives during conversion.\n \"\"\"\n recordings: Dict[str, Recording]\n\n @staticmethod\n def from_yaml(path: Pathlike) -> 'AudioSet':\n with open(path) as f:\n recordings = (Recording(**raw_rec) for raw_rec in yaml.safe_load(f))\n return AudioSet(recordings={r.id: r for r in recordings})\n\n def to_yaml(self, path: Pathlike):\n with open(path, 'w') as f:\n yaml.safe_dump([asdict(r) for r in self.recordings.values()], stream=f)\n\n def load_audio(\n self,\n recording_id: str,\n channels: Optional[Channels] = None,\n offset_seconds: float = 0.0,\n duration_seconds: Optional[float] = None\n ) -> np.ndarray:\n return self.recordings[recording_id].load_audio(\n channels=channels,\n offset_seconds=offset_seconds,\n duration_seconds=duration_seconds\n )\n\n def num_channels(self, recording_id: str) -> int:\n return self.recordings[recording_id].num_channels\n\n def sampling_rate(self, recording_id: str) -> int:\n return self.recordings[recording_id].sampling_rate\n\n def num_samples(self, recording_id: str) -> int:\n return self.recordings[recording_id].num_samples\n\n def duration_seconds(self, recording_id: str) -> float:\n return self.recordings[recording_id].duration_seconds\n\n def __iter__(self):\n return iter(self.recordings.values())\n", "path": "lhotse/audio.py"}], "after_files": [{"content": "from pathlib import Path\nfrom typing import Union\n\nPathlike = Union[Path, str]\n\nINT16MAX = 32768\n\n\nclass SetContainingAnything:\n def __contains__(self, item):\n return True\n\n def intersection(self, iterable):\n return True\n", "path": "lhotse/utils.py"}, {"content": "from dataclasses import dataclass, asdict\nfrom dataclasses import dataclass, asdict\nfrom io import BytesIO\nfrom subprocess import run, PIPE\nfrom typing import List, Optional, Dict, Union\n\nimport librosa\nimport numpy as np\nimport yaml\n\nfrom lhotse.utils import Pathlike, SetContainingAnything\n\nChannels = Union[int, List[int]]\n\n\n@dataclass\nclass AudioSource:\n \"\"\"\n AudioSource represents audio data that can be retrieved from somewhere.\n Supported sources of audio are currently:\n - a file (formats supported by librosa, possibly multi-channel)\n - a command/unix pipe (must be WAVE, possibly multi-channel)\n \"\"\"\n type: str\n channel_ids: List[int]\n source: str\n\n def load_audio(\n self,\n offset_seconds: float = 0.0,\n duration_seconds: Optional[float] = None\n ) -> np.ndarray:\n \"\"\"\n Load the AudioSource (both files and commands) with librosa,\n accounting for many audio formats and multi-channel inputs.\n Returns numpy array with shapes: (n_samples) for single-channel,\n (n_channels, n_samples) for multi-channel.\n \"\"\"\n assert self.type in ('file', 'command')\n\n if self.type == 'command':\n if offset_seconds != 0.0 or duration_seconds is not None:\n # TODO(pzelasko): How should we support chunking for commands?\n # We risk being very inefficient when reading many chunks from the same file\n # without some caching scheme, because we'll be re-running commands.\n raise ValueError(\"Reading audio chunks from command AudioSource type is currently not supported.\")\n source = BytesIO(run(self.source, shell=True, stdout=PIPE).stdout)\n else:\n source = self.source\n\n return 
librosa.load(\n source,\n sr=None, # 'None' uses the native sampling rate\n mono=False, # Retain multi-channel if it's there\n offset=offset_seconds,\n duration=duration_seconds\n )[0] # discard returned sampling rate\n\n\n@dataclass\nclass Recording:\n \"\"\"\n Recording represents an AudioSource along with some metadata.\n \"\"\"\n id: str\n sources: List[AudioSource]\n sampling_rate: int\n num_samples: int\n duration_seconds: float\n\n def __post_init__(self):\n self.sources = [AudioSource(**s) if isinstance(s, dict) else s for s in self.sources]\n\n @property\n def num_channels(self):\n return sum(len(source.channel_ids) for source in self.sources)\n\n def load_audio(\n self,\n channels: Optional[Channels] = None,\n offset_seconds: float = 0.0,\n duration_seconds: Optional[float] = None\n ) -> np.ndarray:\n if channels is None:\n channels = SetContainingAnything()\n elif isinstance(channels, int):\n channels = frozenset([channels])\n else:\n channels = frozenset(channels)\n\n samples_per_source = []\n for source in self.sources:\n # Case: source not requested\n if not channels.intersection(source.channel_ids):\n continue\n samples = source.load_audio(offset_seconds=offset_seconds, duration_seconds=duration_seconds)\n\n # Case: two-channel audio file but only one channel requested\n # it might not be optimal to load all channels, but IDK if there's anything we can do about it\n channels_to_remove = [\n idx for idx, cid in enumerate(source.channel_ids)\n if cid not in channels\n ]\n if channels_to_remove:\n samples = np.delete(samples, channels_to_remove, axis=0)\n samples_per_source.append(samples)\n\n # shapes: single-channel (n_samples); multi-channel (n_channels, n_samples)\n return np.vstack(samples_per_source) if len(samples_per_source) > 1 else samples_per_source[0]\n\n\n@dataclass\nclass AudioSet:\n \"\"\"\n AudioSet represents a dataset of recordings. It does not contain any annotation -\n just the information needed to retrieve a recording (possibly multi-channel, from files\n or from shell commands and pipes) and some metadata for each of them.\n\n It also supports (de)serialization to/from YAML and takes care of mapping between\n rich Python classes and YAML primitives during conversion.\n \"\"\"\n recordings: Dict[str, Recording]\n\n @staticmethod\n def from_yaml(path: Pathlike) -> 'AudioSet':\n with open(path) as f:\n recordings = (Recording(**raw_rec) for raw_rec in yaml.safe_load(f))\n return AudioSet(recordings={r.id: r for r in recordings})\n\n def to_yaml(self, path: Pathlike):\n with open(path, 'w') as f:\n yaml.safe_dump([asdict(r) for r in self.recordings.values()], stream=f)\n\n def load_audio(\n self,\n recording_id: str,\n channels: Optional[Channels] = None,\n offset_seconds: float = 0.0,\n duration_seconds: Optional[float] = None\n ) -> np.ndarray:\n return self.recordings[recording_id].load_audio(\n channels=channels,\n offset_seconds=offset_seconds,\n duration_seconds=duration_seconds\n )\n\n def num_channels(self, recording_id: str) -> int:\n return self.recordings[recording_id].num_channels\n\n def sampling_rate(self, recording_id: str) -> int:\n return self.recordings[recording_id].sampling_rate\n\n def num_samples(self, recording_id: str) -> int:\n return self.recordings[recording_id].num_samples\n\n def duration_seconds(self, recording_id: str) -> float:\n return self.recordings[recording_id].duration_seconds\n\n def __iter__(self):\n return iter(self.recordings.values())\n", "path": "lhotse/audio.py"}]}
| 1,979 | 926 |
gh_patches_debug_5930
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-4564
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BrowserExtensionsFilter: _gCrWeb autofill error in iOS Chrome not ignored
This seems to be a relatively well-known issue on Chrome for iOS. It seems this could be ignored safely by the filter as well.
Source: https://groups.google.com/a/chromium.org/forum/#!topic/chromium-discuss/7VU0_VvC7mE
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/filters/browser_extensions.py`
Content:
```
1 from __future__ import absolute_import
2
3 from .base import Filter
4
5 import re
6
7 EXTENSION_EXC_VALUES = re.compile('|'.join((re.escape(x) for x in (
8 # Random plugins/extensions
9 'top.GLOBALS',
10 # See: http://blog.errorception.com/2012/03/tale-of-unfindable-js-error. html
11 'originalCreateNotification',
12 'canvas.contentDocument',
13 'MyApp_RemoveAllHighlights',
14 'http://tt.epicplay.com',
15 'Can\'t find variable: ZiteReader',
16 'jigsaw is not defined',
17 'ComboSearch is not defined',
18 'http://loading.retry.widdit.com/',
19 'atomicFindClose',
20 # Facebook borked
21 'fb_xd_fragment',
22 # ISP "optimizing" proxy - `Cache-Control: no-transform` seems to
23 # reduce this. (thanks @acdha)
24 # See http://stackoverflow.com/questions/4113268
25 'bmi_SafeAddOnload',
26 'EBCallBackMessageReceived',
27 # See http://toolbar.conduit.com/Developer/HtmlAndGadget/Methods/JSInjection.aspx
28 'conduitPage'
29 ))), re.I)
30
31 EXTENSION_EXC_SOURCES = re.compile('|'.join((
32 # Facebook flakiness
33 r'graph\.facebook\.com'
34 # Facebook blocked
35 r'connect\.facebook\.net\/en_US\/all\.js',
36 # Woopra flakiness
37 r'eatdifferent\.com\.woopra-ns\.com',
38 r'static\.woopra\.com\/js\/woopra\.js',
39 # Chrome extensions
40 r'^chrome(?:-extension)?:\/\/',
41 # Cacaoweb
42 r'127\.0\.0\.1:4001\/isrunning',
43 # Other
44 r'webappstoolbarba\.texthelp\.com\/',
45 r'metrics\.itunes\.apple\.com\.edgesuite\.net\/',
46 )), re.I)
47
48
49 class BrowserExtensionsFilter(Filter):
50 id = 'browser-extensions'
51 name = 'Filter out errors known to be caused by browser extensions'
52 description = 'Certain browser extensions will inject inline scripts and are known to cause errors.'
53
54 def get_exception_value(self, data):
55 try:
56 return data['sentry.interfaces.Exception']['values'][0]['value']
57 except (LookupError, TypeError):
58 return ''
59
60 def get_exception_source(self, data):
61 try:
62 return data['sentry.interfaces.Exception']['values'][0]['stacktrace']['frames'][-1]['abs_path']
63 except (LookupError, TypeError):
64 return ''
65
66 def test(self, data):
67 """
68 Test the exception value to determine if it looks like the error is
69 caused by a common browser extension.
70 """
71 if data.get('platform') != 'javascript':
72 return False
73
74 exc_value = self.get_exception_value(data)
75 if exc_value:
76 if EXTENSION_EXC_VALUES.search(exc_value):
77 return True
78
79 exc_source = self.get_exception_source(data)
80 if exc_source:
81 if EXTENSION_EXC_SOURCES.match(exc_source):
82 return True
83
84 return False
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/filters/browser_extensions.py b/src/sentry/filters/browser_extensions.py
--- a/src/sentry/filters/browser_extensions.py
+++ b/src/sentry/filters/browser_extensions.py
@@ -24,6 +24,8 @@
# See http://stackoverflow.com/questions/4113268
'bmi_SafeAddOnload',
'EBCallBackMessageReceived',
+ # See https://groups.google.com/a/chromium.org/forum/#!topic/chromium-discuss/7VU0_VvC7mE
+ '_gCrWeb',
# See http://toolbar.conduit.com/Developer/HtmlAndGadget/Methods/JSInjection.aspx
'conduitPage'
))), re.I)
|
{"golden_diff": "diff --git a/src/sentry/filters/browser_extensions.py b/src/sentry/filters/browser_extensions.py\n--- a/src/sentry/filters/browser_extensions.py\n+++ b/src/sentry/filters/browser_extensions.py\n@@ -24,6 +24,8 @@\n # See http://stackoverflow.com/questions/4113268\n 'bmi_SafeAddOnload',\n 'EBCallBackMessageReceived',\n+ # See https://groups.google.com/a/chromium.org/forum/#!topic/chromium-discuss/7VU0_VvC7mE\n+ '_gCrWeb',\n # See http://toolbar.conduit.com/Developer/HtmlAndGadget/Methods/JSInjection.aspx\n 'conduitPage'\n ))), re.I)\n", "issue": "BrowserExtensionsFilter: _gCrWeb autofill error in iOS Chrome not ignored\nThis seems to be a relatively well-known issue on Chrome for iOS. It seems this could be ignored safely by the filter as well.\r\n\r\nSource: https://groups.google.com/a/chromium.org/forum/#!topic/chromium-discuss/7VU0_VvC7mE\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom .base import Filter\n\nimport re\n\nEXTENSION_EXC_VALUES = re.compile('|'.join((re.escape(x) for x in (\n # Random plugins/extensions\n 'top.GLOBALS',\n # See: http://blog.errorception.com/2012/03/tale-of-unfindable-js-error. html\n 'originalCreateNotification',\n 'canvas.contentDocument',\n 'MyApp_RemoveAllHighlights',\n 'http://tt.epicplay.com',\n 'Can\\'t find variable: ZiteReader',\n 'jigsaw is not defined',\n 'ComboSearch is not defined',\n 'http://loading.retry.widdit.com/',\n 'atomicFindClose',\n # Facebook borked\n 'fb_xd_fragment',\n # ISP \"optimizing\" proxy - `Cache-Control: no-transform` seems to\n # reduce this. (thanks @acdha)\n # See http://stackoverflow.com/questions/4113268\n 'bmi_SafeAddOnload',\n 'EBCallBackMessageReceived',\n # See http://toolbar.conduit.com/Developer/HtmlAndGadget/Methods/JSInjection.aspx\n 'conduitPage'\n))), re.I)\n\nEXTENSION_EXC_SOURCES = re.compile('|'.join((\n # Facebook flakiness\n r'graph\\.facebook\\.com'\n # Facebook blocked\n r'connect\\.facebook\\.net\\/en_US\\/all\\.js',\n # Woopra flakiness\n r'eatdifferent\\.com\\.woopra-ns\\.com',\n r'static\\.woopra\\.com\\/js\\/woopra\\.js',\n # Chrome extensions\n r'^chrome(?:-extension)?:\\/\\/',\n # Cacaoweb\n r'127\\.0\\.0\\.1:4001\\/isrunning',\n # Other\n r'webappstoolbarba\\.texthelp\\.com\\/',\n r'metrics\\.itunes\\.apple\\.com\\.edgesuite\\.net\\/',\n)), re.I)\n\n\nclass BrowserExtensionsFilter(Filter):\n id = 'browser-extensions'\n name = 'Filter out errors known to be caused by browser extensions'\n description = 'Certain browser extensions will inject inline scripts and are known to cause errors.'\n\n def get_exception_value(self, data):\n try:\n return data['sentry.interfaces.Exception']['values'][0]['value']\n except (LookupError, TypeError):\n return ''\n\n def get_exception_source(self, data):\n try:\n return data['sentry.interfaces.Exception']['values'][0]['stacktrace']['frames'][-1]['abs_path']\n except (LookupError, TypeError):\n return ''\n\n def test(self, data):\n \"\"\"\n Test the exception value to determine if it looks like the error is\n caused by a common browser extension.\n \"\"\"\n if data.get('platform') != 'javascript':\n return False\n\n exc_value = self.get_exception_value(data)\n if exc_value:\n if EXTENSION_EXC_VALUES.search(exc_value):\n return True\n\n exc_source = self.get_exception_source(data)\n if exc_source:\n if EXTENSION_EXC_SOURCES.match(exc_source):\n return True\n\n return False\n", "path": "src/sentry/filters/browser_extensions.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom .base 
import Filter\n\nimport re\n\nEXTENSION_EXC_VALUES = re.compile('|'.join((re.escape(x) for x in (\n # Random plugins/extensions\n 'top.GLOBALS',\n # See: http://blog.errorception.com/2012/03/tale-of-unfindable-js-error. html\n 'originalCreateNotification',\n 'canvas.contentDocument',\n 'MyApp_RemoveAllHighlights',\n 'http://tt.epicplay.com',\n 'Can\\'t find variable: ZiteReader',\n 'jigsaw is not defined',\n 'ComboSearch is not defined',\n 'http://loading.retry.widdit.com/',\n 'atomicFindClose',\n # Facebook borked\n 'fb_xd_fragment',\n # ISP \"optimizing\" proxy - `Cache-Control: no-transform` seems to\n # reduce this. (thanks @acdha)\n # See http://stackoverflow.com/questions/4113268\n 'bmi_SafeAddOnload',\n 'EBCallBackMessageReceived',\n # See https://groups.google.com/a/chromium.org/forum/#!topic/chromium-discuss/7VU0_VvC7mE\n '_gCrWeb',\n # See http://toolbar.conduit.com/Developer/HtmlAndGadget/Methods/JSInjection.aspx\n 'conduitPage'\n))), re.I)\n\nEXTENSION_EXC_SOURCES = re.compile('|'.join((\n # Facebook flakiness\n r'graph\\.facebook\\.com'\n # Facebook blocked\n r'connect\\.facebook\\.net\\/en_US\\/all\\.js',\n # Woopra flakiness\n r'eatdifferent\\.com\\.woopra-ns\\.com',\n r'static\\.woopra\\.com\\/js\\/woopra\\.js',\n # Chrome extensions\n r'^chrome(?:-extension)?:\\/\\/',\n # Cacaoweb\n r'127\\.0\\.0\\.1:4001\\/isrunning',\n # Other\n r'webappstoolbarba\\.texthelp\\.com\\/',\n r'metrics\\.itunes\\.apple\\.com\\.edgesuite\\.net\\/',\n)), re.I)\n\n\nclass BrowserExtensionsFilter(Filter):\n id = 'browser-extensions'\n name = 'Filter out errors known to be caused by browser extensions'\n description = 'Certain browser extensions will inject inline scripts and are known to cause errors.'\n\n def get_exception_value(self, data):\n try:\n return data['sentry.interfaces.Exception']['values'][0]['value']\n except (LookupError, TypeError):\n return ''\n\n def get_exception_source(self, data):\n try:\n return data['sentry.interfaces.Exception']['values'][0]['stacktrace']['frames'][-1]['abs_path']\n except (LookupError, TypeError):\n return ''\n\n def test(self, data):\n \"\"\"\n Test the exception value to determine if it looks like the error is\n caused by a common browser extension.\n \"\"\"\n if data.get('platform') != 'javascript':\n return False\n\n exc_value = self.get_exception_value(data)\n if exc_value:\n if EXTENSION_EXC_VALUES.search(exc_value):\n return True\n\n exc_source = self.get_exception_source(data)\n if exc_source:\n if EXTENSION_EXC_SOURCES.match(exc_source):\n return True\n\n return False\n", "path": "src/sentry/filters/browser_extensions.py"}]}
| 1,206 | 166 |
gh_patches_debug_24377
|
rasdani/github-patches
|
git_diff
|
facebookresearch__hydra-1375
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] [Submitit-Plugin] (Potentially a bug) Impossible to set certain flags in submitit launcher
# 🚀 Feature Request
<!-- A clear and concise description of the feature you are requesting -->
## Motivation
This could be read as a feature request or a bug report, I'm not sure how you want to consider it, I'm going with feature request. Currently theres some flags that you cannot set in the submitit launcher. I practice "gpu centric" scheduling, so I like to specify `mem_per_gpu` and `cpus_per_gpu` and then I can just use `gpus_per_task` to always get the optimal settings.
For example on "mystery cluster" (you know the one), we can use 10 cpus and 64GB RAM per GPU. These settings allow me to only have to change `gpus_per_task` for example if I need to have 2 GPUs for 2 different models, then I'll automatically get 20 cpus and 128GB RAM for each of the tasks without having to change all of the settings. I've already PRed stuff related to this into submitit.
The problem occurs when you try to set, via additional parameters, something like `mem_per_gpu`. You can't set `mem` and `mem_per_gpu`, slurm just crashes when you do that. Similarly, if you try to set `cpus_per_gpu` via `additional_parameters` you'll wind up setting that in addition to `cpus_per_task`.
## Pitch
I tried making a simple patch that fixes this, but it hits issues with the `AutoExecutor` which I never updated in submitit to be aware of the options which conflict with each other. I think in general auto is missing some of the recent work in submitit, and it feels like a semi-hacky workaround to me.
Submitit already has pretty good validation logic so my pitch is to (1) let submitit handle everything by calling the correct executor for the job instead of using Auto. This is going to require a revamp of how the parameters are named/passed however and will likely be a breaking change to the API. (2) It would be nice if we could allow people to pass whatever parameters are supported by submitit without needing to update the hydra schema each time. These are my two major goals, so they sound reasonable/feasible? Point (2) may not be possible I guess.
Other options:
* Use auto but try to include our own validation logic (needs exploration), I don't like this because it requires us to duplicate logic that submitit has already and also it may not even be possible depending on what Auto decides to do (looked iffy on my quick pass)
* Punt this to submitit to revamp the auto executor to do the right stuff and be more flexible. I don't like this because I think the auto executor is more of a convenience hack and more complex/flexible code would be calling the executors directly (this is what I was doing previously with my own submitit hydra wrapper thing). I also think its going to be a more more complex PR.
**Are you willing to open a pull request?** (See [CONTRIBUTING](../../CONTRIBUTING.md))
Yes, but I want feedback first on the best way to go about it
## Additional context
Add any other context or screenshots about the feature request here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, List, Optional
4
5 from hydra.core.config_store import ConfigStore
6
7
8 @dataclass
9 class BaseQueueConf:
10 """Configuration shared by all executors"""
11
12 submitit_folder: str = "${hydra.sweep.dir}/.submitit/%j"
13
14 # maximum time for the job in minutes
15 timeout_min: int = 60
16 # number of cpus to use for each task
17 cpus_per_task: int = 1
18 # number of gpus to use on each node
19 gpus_per_node: int = 0
20 # number of tasks to spawn on each node
21 tasks_per_node: int = 1
22 # memory to reserve for the job on each node (in GB)
23 mem_gb: int = 4
24 # number of nodes to use for the job
25 nodes: int = 1
26 # name of the job
27 name: str = "${hydra.job.name}"
28
29
30 @dataclass
31 class SlurmQueueConf(BaseQueueConf):
32 """Slurm configuration overrides and specific parameters"""
33
34 _target_: str = (
35 "hydra_plugins.hydra_submitit_launcher.submitit_launcher.SlurmLauncher"
36 )
37
38 # Params are used to configure sbatch, for more info check:
39 # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py
40
41 # Following parameters are slurm specific
42 # More information: https://slurm.schedmd.com/sbatch.html
43 #
44 # slurm partition to use on the cluster
45 partition: Optional[str] = None
46 comment: Optional[str] = None
47 constraint: Optional[str] = None
48 exclude: Optional[str] = None
49
50 # Following parameters are submitit specifics
51 #
52 # USR1 signal delay before timeout
53 signal_delay_s: int = 120
54 # Maximum number of retries on job timeout.
55 # Change this only after you confirmed your code can handle re-submission
56 # by properly resuming from the latest stored checkpoint.
57 # check the following for more info on slurm_max_num_timeout
58 # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md
59 max_num_timeout: int = 0
60 # Useful to add parameters which are not currently available in the plugin.
61 # Eg: {"mail-user": "[email protected]", "mail-type": "BEGIN"}
62 additional_parameters: Dict[str, Any] = field(default_factory=dict)
63 # Maximum number of jobs running in parallel
64 array_parallelism: int = 256
65 # A list of commands to run in sbatch befure running srun
66 setup: Optional[List[str]] = None
67
68
69 @dataclass
70 class LocalQueueConf(BaseQueueConf):
71 _target_: str = (
72 "hydra_plugins.hydra_submitit_launcher.submitit_launcher.LocalLauncher"
73 )
74
75
76 # finally, register two different choices:
77 ConfigStore.instance().store(
78 group="hydra/launcher",
79 name="submitit_local",
80 node=LocalQueueConf(),
81 provider="submitit_launcher",
82 )
83
84
85 ConfigStore.instance().store(
86 group="hydra/launcher",
87 name="submitit_slurm",
88 node=SlurmQueueConf(),
89 provider="submitit_launcher",
90 )
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
--- a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
+++ b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
@@ -14,13 +14,13 @@
# maximum time for the job in minutes
timeout_min: int = 60
# number of cpus to use for each task
- cpus_per_task: int = 1
+ cpus_per_task: Optional[int] = None
# number of gpus to use on each node
- gpus_per_node: int = 0
+ gpus_per_node: Optional[int] = None
# number of tasks to spawn on each node
tasks_per_node: int = 1
# memory to reserve for the job on each node (in GB)
- mem_gb: int = 4
+ mem_gb: Optional[int] = None
# number of nodes to use for the job
nodes: int = 1
# name of the job
@@ -46,6 +46,10 @@
comment: Optional[str] = None
constraint: Optional[str] = None
exclude: Optional[str] = None
+ cpus_per_gpu: Optional[int] = None
+ gpus_per_task: Optional[int] = None
+ mem_per_gpu: Optional[str] = None
+ mem_per_cpu: Optional[str] = None
# Following parameters are submitit specifics
#
|
{"golden_diff": "diff --git a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n--- a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n+++ b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n@@ -14,13 +14,13 @@\n # maximum time for the job in minutes\n timeout_min: int = 60\n # number of cpus to use for each task\n- cpus_per_task: int = 1\n+ cpus_per_task: Optional[int] = None\n # number of gpus to use on each node\n- gpus_per_node: int = 0\n+ gpus_per_node: Optional[int] = None\n # number of tasks to spawn on each node\n tasks_per_node: int = 1\n # memory to reserve for the job on each node (in GB)\n- mem_gb: int = 4\n+ mem_gb: Optional[int] = None\n # number of nodes to use for the job\n nodes: int = 1\n # name of the job\n@@ -46,6 +46,10 @@\n comment: Optional[str] = None\n constraint: Optional[str] = None\n exclude: Optional[str] = None\n+ cpus_per_gpu: Optional[int] = None\n+ gpus_per_task: Optional[int] = None\n+ mem_per_gpu: Optional[str] = None\n+ mem_per_cpu: Optional[str] = None\n \n # Following parameters are submitit specifics\n #\n", "issue": "[Feature Request] [Submitit-Plugin] (Potentially a bug) Impossible to set certain flags in submitit launcher\n# \ud83d\ude80 Feature Request\r\n\r\n<!-- A clear and concise description of the feature you are requesting -->\r\n\r\n## Motivation\r\n\r\nThis could be read as a feature request or a bug report, I'm not sure how you want to consider it, I'm going with feature request. Currently theres some flags that you cannot set in the submitit launcher. I practice \"gpu centric\" scheduling, so I like to specify `mem_per_gpu` and `cpus_per_gpu` and then I can just use `gpus_per_task` to always get the optimal settings. \r\n\r\nFor example on \"mystery cluster\" (you know the one), we can use 10 cpus and 64GB RAM per GPU. These settings allow me to only have to change `gpus_per_task` for example if I need to have 2 GPUs for 2 different models, then I'll automatically get 20 cpus and 128GB RAM for each of the tasks without having to change all of the settings. I've already PRed stuff related to this into submitit.\r\n\r\nThe problem occurs when you try to set, via additional parameters, something like `mem_per_gpu`. You can't set `mem` and `mem_per_gpu`, slurm just crashes when you do that. Similarly, if you try to set `cpus_per_gpu` via `additional_parameters` you'll wind up setting that in addition to `cpus_per_task`.\r\n\r\n## Pitch\r\n\r\nI tried making a simple patch that fixes this, but it hits issues with the `AutoExecutor` which I never updated in submitit to be aware of the options which conflict with each other. I think in general auto is missing some of the recent work in submitit, and it feels like a semi-hacky workaround to me. \r\n\r\nSubmitit already has pretty good validation logic so my pitch is to (1) let submitit handle everything by calling the correct executor for the job instead of using Auto. This is going to require a revamp of how the parameters are named/passed however and will likely be a breaking change to the API. (2) It would be nice if we could allow people to pass whatever parameters are supported by submitit without needing to update the hydra schema each time. These are my two major goals, so they sound reasonable/feasible? 
Point (2) may not be possible I guess.\r\n\r\nOther options:\r\n\r\n* Use auto but try to include our own validation logic (needs exploration), I don't like this because it requires us to duplicate logic that submitit has already and also it may not even be possible depending on what Auto decides to do (looked iffy on my quick pass)\r\n* Punt this to submitit to revamp the auto executor to do the right stuff and be more flexible. I don't like this because I think the auto executor is more of a convenience hack and more complex/flexible code would be calling the executors directly (this is what I was doing previously with my own submitit hydra wrapper thing). I also think its going to be a more more complex PR.\r\n\r\n**Are you willing to open a pull request?** (See [CONTRIBUTING](../../CONTRIBUTING.md))\r\n\r\nYes, but I want feedback first on the best way to go about it\r\n\r\n## Additional context\r\nAdd any other context or screenshots about the feature request here.\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass BaseQueueConf:\n \"\"\"Configuration shared by all executors\"\"\"\n\n submitit_folder: str = \"${hydra.sweep.dir}/.submitit/%j\"\n\n # maximum time for the job in minutes\n timeout_min: int = 60\n # number of cpus to use for each task\n cpus_per_task: int = 1\n # number of gpus to use on each node\n gpus_per_node: int = 0\n # number of tasks to spawn on each node\n tasks_per_node: int = 1\n # memory to reserve for the job on each node (in GB)\n mem_gb: int = 4\n # number of nodes to use for the job\n nodes: int = 1\n # name of the job\n name: str = \"${hydra.job.name}\"\n\n\n@dataclass\nclass SlurmQueueConf(BaseQueueConf):\n \"\"\"Slurm configuration overrides and specific parameters\"\"\"\n\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.SlurmLauncher\"\n )\n\n # Params are used to configure sbatch, for more info check:\n # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py\n\n # Following parameters are slurm specific\n # More information: https://slurm.schedmd.com/sbatch.html\n #\n # slurm partition to use on the cluster\n partition: Optional[str] = None\n comment: Optional[str] = None\n constraint: Optional[str] = None\n exclude: Optional[str] = None\n\n # Following parameters are submitit specifics\n #\n # USR1 signal delay before timeout\n signal_delay_s: int = 120\n # Maximum number of retries on job timeout.\n # Change this only after you confirmed your code can handle re-submission\n # by properly resuming from the latest stored checkpoint.\n # check the following for more info on slurm_max_num_timeout\n # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md\n max_num_timeout: int = 0\n # Useful to add parameters which are not currently available in the plugin.\n # Eg: {\"mail-user\": \"[email protected]\", \"mail-type\": \"BEGIN\"}\n additional_parameters: Dict[str, Any] = field(default_factory=dict)\n # Maximum number of jobs running in parallel\n array_parallelism: int = 256\n # A list of commands to run in sbatch befure running srun\n setup: Optional[List[str]] = None\n\n\n@dataclass\nclass LocalQueueConf(BaseQueueConf):\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.LocalLauncher\"\n )\n\n\n# finally, register two 
different choices:\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_local\",\n node=LocalQueueConf(),\n provider=\"submitit_launcher\",\n)\n\n\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_slurm\",\n node=SlurmQueueConf(),\n provider=\"submitit_launcher\",\n)\n", "path": "plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass BaseQueueConf:\n \"\"\"Configuration shared by all executors\"\"\"\n\n submitit_folder: str = \"${hydra.sweep.dir}/.submitit/%j\"\n\n # maximum time for the job in minutes\n timeout_min: int = 60\n # number of cpus to use for each task\n cpus_per_task: Optional[int] = None\n # number of gpus to use on each node\n gpus_per_node: Optional[int] = None\n # number of tasks to spawn on each node\n tasks_per_node: int = 1\n # memory to reserve for the job on each node (in GB)\n mem_gb: Optional[int] = None\n # number of nodes to use for the job\n nodes: int = 1\n # name of the job\n name: str = \"${hydra.job.name}\"\n\n\n@dataclass\nclass SlurmQueueConf(BaseQueueConf):\n \"\"\"Slurm configuration overrides and specific parameters\"\"\"\n\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.SlurmLauncher\"\n )\n\n # Params are used to configure sbatch, for more info check:\n # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py\n\n # Following parameters are slurm specific\n # More information: https://slurm.schedmd.com/sbatch.html\n #\n # slurm partition to use on the cluster\n partition: Optional[str] = None\n comment: Optional[str] = None\n constraint: Optional[str] = None\n exclude: Optional[str] = None\n cpus_per_gpu: Optional[int] = None\n gpus_per_task: Optional[int] = None\n mem_per_gpu: Optional[str] = None\n mem_per_cpu: Optional[str] = None\n\n # Following parameters are submitit specifics\n #\n # USR1 signal delay before timeout\n signal_delay_s: int = 120\n # Maximum number of retries on job timeout.\n # Change this only after you confirmed your code can handle re-submission\n # by properly resuming from the latest stored checkpoint.\n # check the following for more info on slurm_max_num_timeout\n # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md\n max_num_timeout: int = 0\n # Useful to add parameters which are not currently available in the plugin.\n # Eg: {\"mail-user\": \"[email protected]\", \"mail-type\": \"BEGIN\"}\n additional_parameters: Dict[str, Any] = field(default_factory=dict)\n # Maximum number of jobs running in parallel\n array_parallelism: int = 256\n # A list of commands to run in sbatch befure running srun\n setup: Optional[List[str]] = None\n\n\n@dataclass\nclass LocalQueueConf(BaseQueueConf):\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.LocalLauncher\"\n )\n\n\n# finally, register two different choices:\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_local\",\n node=LocalQueueConf(),\n provider=\"submitit_launcher\",\n)\n\n\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_slurm\",\n node=SlurmQueueConf(),\n provider=\"submitit_launcher\",\n)\n", "path": 
"plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py"}]}
| 1,916 | 380 |
gh_patches_debug_35574
|
rasdani/github-patches
|
git_diff
|
SciTools__cartopy-1837
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Backport NaturalEarth URL change in releases 0.18 and 0.19?
It would be great, if possible, to backport the change of the NaturalEarth URL (done in this [commit](https://github.com/SciTools/cartopy/commit/af814c31829002acd7c1c49ba52cee530c6ac19a)) to at least releases 0.18 (==> 0.18.1) and 0.19 (==> 0.19.1), and of course to release new versions with this change.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/cartopy_feature_download.py`
Content:
```
1 #!/usr/bin/env python
2 # Copyright Cartopy Contributors
3 #
4 # This file is part of Cartopy and is released under the LGPL license.
5 # See COPYING and COPYING.LESSER in the root of the repository for full
6 # licensing details.
7 """
8 This module provides a command-line tool for triggering the download of
9 the data used by various Feature instances.
10
11 For detail on how to use this tool, execute it with the `-h` option:
12
13 python download.py -h
14
15 """
16
17 import argparse
18
19 from cartopy import config
20 from cartopy.feature import Feature, GSHHSFeature, NaturalEarthFeature
21 from cartopy.io import Downloader
22
23
24 ALL_SCALES = ('110m', '50m', '10m')
25
26
27 FEATURE_DEFN_GROUPS = {
28 # Only need one GSHHS resolution because they *all* get downloaded
29 # from one file.
30 'gshhs': GSHHSFeature(scale='f'),
31 'physical': (
32 ('physical', 'coastline', ALL_SCALES),
33 ('physical', 'land', ALL_SCALES),
34 ('physical', 'ocean', ALL_SCALES),
35 ('physical', 'rivers_lake_centerlines', ALL_SCALES),
36 ('physical', 'lakes', ALL_SCALES),
37 ('physical', 'geography_regions_polys', ALL_SCALES),
38 ('physical', 'geography_regions_points', ALL_SCALES),
39 ('physical', 'geography_marine_polys', ALL_SCALES),
40 ('physical', 'glaciated_areas', ALL_SCALES)
41 ),
42 'cultural': (
43 ('cultural', 'admin_0_countries', ALL_SCALES),
44 ('cultural', 'admin_0_countries_lakes', ALL_SCALES),
45 ('cultural', 'admin_0_sovereignty', ALL_SCALES),
46 ('cultural', 'admin_0_boundary_lines_land', ALL_SCALES),
47
48 ('cultural', 'urban_areas', ('50m', '10m')),
49
50 ('cultural', 'roads', '10m'),
51 ('cultural', 'roads_north_america', '10m'),
52 ('cultural', 'railroads', '10m'),
53 ('cultural', 'railroads_north_america', '10m'),
54 ),
55 'cultural-extra': (
56 ('cultural', 'admin_0_map_units', '110m'),
57 ('cultural', 'admin_0_scale_rank', '110m'),
58 ('cultural', 'admin_0_tiny_countries', '110m'),
59 ('cultural', 'admin_0_pacific_groupings', '110m'),
60 ('cultural', 'admin_1_states_provinces', '110m'),
61 ('cultural', 'admin_1_states_provinces_lines', '110m'),
62 ),
63 }
64
65
66 def download_features(group_names, dry_run=True):
67 for group_name in group_names:
68 feature_defns = FEATURE_DEFN_GROUPS[group_name]
69 if isinstance(feature_defns, Feature):
70 feature = feature_defns
71 level = list(feature._levels)[0]
72 downloader = Downloader.from_config(('shapefiles', 'gshhs',
73 feature._scale, level))
74 format_dict = {'config': config, 'scale': feature._scale,
75 'level': level}
76 if dry_run:
77 print('URL: {}'.format(downloader.url(format_dict)))
78 else:
79 downloader.path(format_dict)
80 geoms = list(feature.geometries())
81 print('Feature {} length: {}'.format(feature, len(geoms)))
82 else:
83 for category, name, scales in feature_defns:
84 if not isinstance(scales, tuple):
85 scales = (scales,)
86 for scale in scales:
87 downloader = Downloader.from_config(('shapefiles',
88 'natural_earth',
89 scale, category,
90 name))
91 feature = NaturalEarthFeature(category, name, scale)
92 format_dict = {'config': config, 'category': category,
93 'name': name, 'resolution': scale}
94 if dry_run:
95 print('URL: {}'.format(downloader.url(format_dict)))
96 else:
97 downloader.path(format_dict)
98 geoms = list(feature.geometries())
99 print('Feature {}, {}, {} length: {}'
100 ''.format(category, name, scale, len(geoms)))
101
102
103 if __name__ == '__main__':
104 parser = argparse.ArgumentParser(description='Download feature datasets.')
105 parser.add_argument('group_names', nargs='+',
106 choices=FEATURE_DEFN_GROUPS,
107 metavar='GROUP_NAME',
108 help='Feature group name: %(choices)s')
109 parser.add_argument('--output', '-o',
110 help='save datasets in the specified directory '
111 '(default: user cache directory)')
112 parser.add_argument('--dry-run',
113 help='just print the URLs to download',
114 action='store_true')
115 parser.add_argument('--ignore-repo-data', action='store_true',
116 help='ignore existing repo data when downloading')
117 args = parser.parse_args()
118
119 if args.output:
120 config['pre_existing_data_dir'] = args.output
121 config['data_dir'] = args.output
122 if args.ignore_repo_data:
123 config['repo_data_dir'] = config['data_dir']
124 download_features(args.group_names, dry_run=args.dry_run)
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/cartopy_feature_download.py b/tools/cartopy_feature_download.py
--- a/tools/cartopy_feature_download.py
+++ b/tools/cartopy_feature_download.py
@@ -10,19 +10,24 @@
For detail on how to use this tool, execute it with the `-h` option:
- python download.py -h
+ python cartopy_feature_download.py -h
"""
import argparse
+import pathlib
from cartopy import config
from cartopy.feature import Feature, GSHHSFeature, NaturalEarthFeature
-from cartopy.io import Downloader
+from cartopy.io import Downloader, DownloadWarning
ALL_SCALES = ('110m', '50m', '10m')
+# See https://github.com/SciTools/cartopy/pull/1833
+URL_TEMPLATE = ('https://naturalearth.s3.amazonaws.com/{resolution}_'
+ '{category}/ne_{resolution}_{name}.zip')
+SHP_NE_SPEC = ('shapefiles', 'natural_earth')
FEATURE_DEFN_GROUPS = {
# Only need one GSHHS resolution because they *all* get downloaded
@@ -114,11 +119,28 @@
action='store_true')
parser.add_argument('--ignore-repo-data', action='store_true',
help='ignore existing repo data when downloading')
+ parser.add_argument('--no-warn',
+ action='store_true',
+ help='ignore cartopy "DownloadWarning" warnings')
args = parser.parse_args()
if args.output:
- config['pre_existing_data_dir'] = args.output
- config['data_dir'] = args.output
+ target_dir = pathlib.Path(args.output).expanduser().resolve()
+ target_dir.mkdir(parents=True, exist_ok=True)
+ config['pre_existing_data_dir'] = target_dir
+ config['data_dir'] = target_dir
if args.ignore_repo_data:
config['repo_data_dir'] = config['data_dir']
+ if args.no_warn:
+ import warnings
+ warnings.filterwarnings('ignore', category=DownloadWarning)
+
+ # Enforce use of stable AWS endpoint, regardless of cartopy version.
+ # In doing so, this allows users to download this script and execute it
+ # with any version of cartopy, thus taking advantage of the stable AWS
+ # endpoint.
+ # This removes the need to backport the associated fix
+ # https://github.com/SciTools/cartopy/pull/1833.
+ config['downloaders'][SHP_NE_SPEC].url_template = URL_TEMPLATE
+
download_features(args.group_names, dry_run=args.dry_run)
|
{"golden_diff": "diff --git a/tools/cartopy_feature_download.py b/tools/cartopy_feature_download.py\n--- a/tools/cartopy_feature_download.py\n+++ b/tools/cartopy_feature_download.py\n@@ -10,19 +10,24 @@\n \n For detail on how to use this tool, execute it with the `-h` option:\n \n- python download.py -h\n+ python cartopy_feature_download.py -h\n \n \"\"\"\n \n import argparse\n+import pathlib\n \n from cartopy import config\n from cartopy.feature import Feature, GSHHSFeature, NaturalEarthFeature\n-from cartopy.io import Downloader\n+from cartopy.io import Downloader, DownloadWarning\n \n \n ALL_SCALES = ('110m', '50m', '10m')\n \n+# See https://github.com/SciTools/cartopy/pull/1833\n+URL_TEMPLATE = ('https://naturalearth.s3.amazonaws.com/{resolution}_'\n+ '{category}/ne_{resolution}_{name}.zip')\n+SHP_NE_SPEC = ('shapefiles', 'natural_earth')\n \n FEATURE_DEFN_GROUPS = {\n # Only need one GSHHS resolution because they *all* get downloaded\n@@ -114,11 +119,28 @@\n action='store_true')\n parser.add_argument('--ignore-repo-data', action='store_true',\n help='ignore existing repo data when downloading')\n+ parser.add_argument('--no-warn',\n+ action='store_true',\n+ help='ignore cartopy \"DownloadWarning\" warnings')\n args = parser.parse_args()\n \n if args.output:\n- config['pre_existing_data_dir'] = args.output\n- config['data_dir'] = args.output\n+ target_dir = pathlib.Path(args.output).expanduser().resolve()\n+ target_dir.mkdir(parents=True, exist_ok=True)\n+ config['pre_existing_data_dir'] = target_dir\n+ config['data_dir'] = target_dir\n if args.ignore_repo_data:\n config['repo_data_dir'] = config['data_dir']\n+ if args.no_warn:\n+ import warnings\n+ warnings.filterwarnings('ignore', category=DownloadWarning)\n+\n+ # Enforce use of stable AWS endpoint, regardless of cartopy version.\n+ # In doing so, this allows users to download this script and execute it\n+ # with any version of cartopy, thus taking advantage of the stable AWS\n+ # endpoint.\n+ # This removes the need to backport the associated fix\n+ # https://github.com/SciTools/cartopy/pull/1833.\n+ config['downloaders'][SHP_NE_SPEC].url_template = URL_TEMPLATE\n+\n download_features(args.group_names, dry_run=args.dry_run)\n", "issue": "Backport NaturalEarth url change in release 0.18 and 0.19 ?\nIt should be great, if possible, to backport the change of NaturalEarth url (done in this [commit](https://github.com/SciTools/cartopy/commit/af814c31829002acd7c1c49ba52cee530c6ac19a) ) in at least releases 0.18 (==> 0.18.1) and 0.19 (==> 0.19.1), and of course to release a new version with this change.\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright Cartopy Contributors\n#\n# This file is part of Cartopy and is released under the LGPL license.\n# See COPYING and COPYING.LESSER in the root of the repository for full\n# licensing details.\n\"\"\"\nThis module provides a command-line tool for triggering the download of\nthe data used by various Feature instances.\n\nFor detail on how to use this tool, execute it with the `-h` option:\n\n python download.py -h\n\n\"\"\"\n\nimport argparse\n\nfrom cartopy import config\nfrom cartopy.feature import Feature, GSHHSFeature, NaturalEarthFeature\nfrom cartopy.io import Downloader\n\n\nALL_SCALES = ('110m', '50m', '10m')\n\n\nFEATURE_DEFN_GROUPS = {\n # Only need one GSHHS resolution because they *all* get downloaded\n # from one file.\n 'gshhs': GSHHSFeature(scale='f'),\n 'physical': (\n ('physical', 'coastline', ALL_SCALES),\n ('physical', 'land', ALL_SCALES),\n ('physical', 
'ocean', ALL_SCALES),\n ('physical', 'rivers_lake_centerlines', ALL_SCALES),\n ('physical', 'lakes', ALL_SCALES),\n ('physical', 'geography_regions_polys', ALL_SCALES),\n ('physical', 'geography_regions_points', ALL_SCALES),\n ('physical', 'geography_marine_polys', ALL_SCALES),\n ('physical', 'glaciated_areas', ALL_SCALES)\n ),\n 'cultural': (\n ('cultural', 'admin_0_countries', ALL_SCALES),\n ('cultural', 'admin_0_countries_lakes', ALL_SCALES),\n ('cultural', 'admin_0_sovereignty', ALL_SCALES),\n ('cultural', 'admin_0_boundary_lines_land', ALL_SCALES),\n\n ('cultural', 'urban_areas', ('50m', '10m')),\n\n ('cultural', 'roads', '10m'),\n ('cultural', 'roads_north_america', '10m'),\n ('cultural', 'railroads', '10m'),\n ('cultural', 'railroads_north_america', '10m'),\n ),\n 'cultural-extra': (\n ('cultural', 'admin_0_map_units', '110m'),\n ('cultural', 'admin_0_scale_rank', '110m'),\n ('cultural', 'admin_0_tiny_countries', '110m'),\n ('cultural', 'admin_0_pacific_groupings', '110m'),\n ('cultural', 'admin_1_states_provinces', '110m'),\n ('cultural', 'admin_1_states_provinces_lines', '110m'),\n ),\n}\n\n\ndef download_features(group_names, dry_run=True):\n for group_name in group_names:\n feature_defns = FEATURE_DEFN_GROUPS[group_name]\n if isinstance(feature_defns, Feature):\n feature = feature_defns\n level = list(feature._levels)[0]\n downloader = Downloader.from_config(('shapefiles', 'gshhs',\n feature._scale, level))\n format_dict = {'config': config, 'scale': feature._scale,\n 'level': level}\n if dry_run:\n print('URL: {}'.format(downloader.url(format_dict)))\n else:\n downloader.path(format_dict)\n geoms = list(feature.geometries())\n print('Feature {} length: {}'.format(feature, len(geoms)))\n else:\n for category, name, scales in feature_defns:\n if not isinstance(scales, tuple):\n scales = (scales,)\n for scale in scales:\n downloader = Downloader.from_config(('shapefiles',\n 'natural_earth',\n scale, category,\n name))\n feature = NaturalEarthFeature(category, name, scale)\n format_dict = {'config': config, 'category': category,\n 'name': name, 'resolution': scale}\n if dry_run:\n print('URL: {}'.format(downloader.url(format_dict)))\n else:\n downloader.path(format_dict)\n geoms = list(feature.geometries())\n print('Feature {}, {}, {} length: {}'\n ''.format(category, name, scale, len(geoms)))\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(description='Download feature datasets.')\n parser.add_argument('group_names', nargs='+',\n choices=FEATURE_DEFN_GROUPS,\n metavar='GROUP_NAME',\n help='Feature group name: %(choices)s')\n parser.add_argument('--output', '-o',\n help='save datasets in the specified directory '\n '(default: user cache directory)')\n parser.add_argument('--dry-run',\n help='just print the URLs to download',\n action='store_true')\n parser.add_argument('--ignore-repo-data', action='store_true',\n help='ignore existing repo data when downloading')\n args = parser.parse_args()\n\n if args.output:\n config['pre_existing_data_dir'] = args.output\n config['data_dir'] = args.output\n if args.ignore_repo_data:\n config['repo_data_dir'] = config['data_dir']\n download_features(args.group_names, dry_run=args.dry_run)\n", "path": "tools/cartopy_feature_download.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Copyright Cartopy Contributors\n#\n# This file is part of Cartopy and is released under the LGPL license.\n# See COPYING and COPYING.LESSER in the root of the repository for full\n# licensing details.\n\"\"\"\nThis module provides a 
command-line tool for triggering the download of\nthe data used by various Feature instances.\n\nFor detail on how to use this tool, execute it with the `-h` option:\n\n python cartopy_feature_download.py -h\n\n\"\"\"\n\nimport argparse\nimport pathlib\n\nfrom cartopy import config\nfrom cartopy.feature import Feature, GSHHSFeature, NaturalEarthFeature\nfrom cartopy.io import Downloader, DownloadWarning\n\n\nALL_SCALES = ('110m', '50m', '10m')\n\n# See https://github.com/SciTools/cartopy/pull/1833\nURL_TEMPLATE = ('https://naturalearth.s3.amazonaws.com/{resolution}_'\n '{category}/ne_{resolution}_{name}.zip')\nSHP_NE_SPEC = ('shapefiles', 'natural_earth')\n\nFEATURE_DEFN_GROUPS = {\n # Only need one GSHHS resolution because they *all* get downloaded\n # from one file.\n 'gshhs': GSHHSFeature(scale='f'),\n 'physical': (\n ('physical', 'coastline', ALL_SCALES),\n ('physical', 'land', ALL_SCALES),\n ('physical', 'ocean', ALL_SCALES),\n ('physical', 'rivers_lake_centerlines', ALL_SCALES),\n ('physical', 'lakes', ALL_SCALES),\n ('physical', 'geography_regions_polys', ALL_SCALES),\n ('physical', 'geography_regions_points', ALL_SCALES),\n ('physical', 'geography_marine_polys', ALL_SCALES),\n ('physical', 'glaciated_areas', ALL_SCALES)\n ),\n 'cultural': (\n ('cultural', 'admin_0_countries', ALL_SCALES),\n ('cultural', 'admin_0_countries_lakes', ALL_SCALES),\n ('cultural', 'admin_0_sovereignty', ALL_SCALES),\n ('cultural', 'admin_0_boundary_lines_land', ALL_SCALES),\n\n ('cultural', 'urban_areas', ('50m', '10m')),\n\n ('cultural', 'roads', '10m'),\n ('cultural', 'roads_north_america', '10m'),\n ('cultural', 'railroads', '10m'),\n ('cultural', 'railroads_north_america', '10m'),\n ),\n 'cultural-extra': (\n ('cultural', 'admin_0_map_units', '110m'),\n ('cultural', 'admin_0_scale_rank', '110m'),\n ('cultural', 'admin_0_tiny_countries', '110m'),\n ('cultural', 'admin_0_pacific_groupings', '110m'),\n ('cultural', 'admin_1_states_provinces', '110m'),\n ('cultural', 'admin_1_states_provinces_lines', '110m'),\n ),\n}\n\n\ndef download_features(group_names, dry_run=True):\n for group_name in group_names:\n feature_defns = FEATURE_DEFN_GROUPS[group_name]\n if isinstance(feature_defns, Feature):\n feature = feature_defns\n level = list(feature._levels)[0]\n downloader = Downloader.from_config(('shapefiles', 'gshhs',\n feature._scale, level))\n format_dict = {'config': config, 'scale': feature._scale,\n 'level': level}\n if dry_run:\n print('URL: {}'.format(downloader.url(format_dict)))\n else:\n downloader.path(format_dict)\n geoms = list(feature.geometries())\n print('Feature {} length: {}'.format(feature, len(geoms)))\n else:\n for category, name, scales in feature_defns:\n if not isinstance(scales, tuple):\n scales = (scales,)\n for scale in scales:\n downloader = Downloader.from_config(('shapefiles',\n 'natural_earth',\n scale, category,\n name))\n feature = NaturalEarthFeature(category, name, scale)\n format_dict = {'config': config, 'category': category,\n 'name': name, 'resolution': scale}\n if dry_run:\n print('URL: {}'.format(downloader.url(format_dict)))\n else:\n downloader.path(format_dict)\n geoms = list(feature.geometries())\n print('Feature {}, {}, {} length: {}'\n ''.format(category, name, scale, len(geoms)))\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(description='Download feature datasets.')\n parser.add_argument('group_names', nargs='+',\n choices=FEATURE_DEFN_GROUPS,\n metavar='GROUP_NAME',\n help='Feature group name: %(choices)s')\n parser.add_argument('--output', 
'-o',\n help='save datasets in the specified directory '\n '(default: user cache directory)')\n parser.add_argument('--dry-run',\n help='just print the URLs to download',\n action='store_true')\n parser.add_argument('--ignore-repo-data', action='store_true',\n help='ignore existing repo data when downloading')\n parser.add_argument('--no-warn',\n action='store_true',\n help='ignore cartopy \"DownloadWarning\" warnings')\n args = parser.parse_args()\n\n if args.output:\n target_dir = pathlib.Path(args.output).expanduser().resolve()\n target_dir.mkdir(parents=True, exist_ok=True)\n config['pre_existing_data_dir'] = target_dir\n config['data_dir'] = target_dir\n if args.ignore_repo_data:\n config['repo_data_dir'] = config['data_dir']\n if args.no_warn:\n import warnings\n warnings.filterwarnings('ignore', category=DownloadWarning)\n\n # Enforce use of stable AWS endpoint, regardless of cartopy version.\n # In doing so, this allows users to download this script and execute it\n # with any version of cartopy, thus taking advantage of the stable AWS\n # endpoint.\n # This removes the need to backport the associated fix\n # https://github.com/SciTools/cartopy/pull/1833.\n config['downloaders'][SHP_NE_SPEC].url_template = URL_TEMPLATE\n\n download_features(args.group_names, dry_run=args.dry_run)\n", "path": "tools/cartopy_feature_download.py"}]}
| 1,816 | 589 |
gh_patches_debug_10586
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-2014
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cholesky issue in HMC
In PyTorch, when a Cholesky issue happens, a RuntimeError about a singular matrix is raised (instead of returning `nan`). This makes it difficult to run MCMC with Gaussian process kernels (which involve Cholesky decomposition of covariance matrices). This issue is raised in https://github.com/pyro-ppl/pyro/issues/1863 and #1973. Because `nan` is a valid potential energy in HMC, we should add a mechanism to catch this error.
My proposed solution is to wrap the potential function in a try/except block so that when a Cholesky issue happens, it returns `nan`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/ops/integrator.py`
Content:
```
1 import torch
2 from torch.autograd import grad
3
4
5 def velocity_verlet(z, r, potential_fn, inverse_mass_matrix, step_size, num_steps=1, z_grads=None):
6 r"""
7 Second order symplectic integrator that uses the velocity verlet algorithm.
8
9 :param dict z: dictionary of sample site names and their current values
10 (type :class:`~torch.Tensor`).
11 :param dict r: dictionary of sample site names and corresponding momenta
12 (type :class:`~torch.Tensor`).
13 :param callable potential_fn: function that returns potential energy given z
14 for each sample site. The negative gradient of the function with respect
15 to ``z`` determines the rate of change of the corresponding sites'
16 momenta ``r``.
17 :param torch.Tensor inverse_mass_matrix: a tensor :math:`M^{-1}` which is used
18 to calculate kinetic energy: :math:`E_{kinetic} = \frac{1}{2}z^T M^{-1} z`.
19 Here :math:`M` can be a 1D tensor (diagonal matrix) or a 2D tensor (dense matrix).
20 :param float step_size: step size for each time step iteration.
21 :param int num_steps: number of discrete time steps over which to integrate.
22 :param torch.Tensor z_grads: optional gradients of potential energy at current ``z``.
23 :return tuple (z_next, r_next, z_grads, potential_energy): next position and momenta,
24 together with the potential energy and its gradient w.r.t. ``z_next``.
25 """
26 z_next = z.copy()
27 r_next = r.copy()
28 for _ in range(num_steps):
29 z_next, r_next, z_grads, potential_energy = _single_step_verlet(z_next,
30 r_next,
31 potential_fn,
32 inverse_mass_matrix,
33 step_size,
34 z_grads)
35 return z_next, r_next, z_grads, potential_energy
36
37
38 def _single_step_verlet(z, r, potential_fn, inverse_mass_matrix, step_size, z_grads=None):
39 r"""
40 Single step velocity verlet that modifies the `z`, `r` dicts in place.
41 """
42
43 z_grads = potential_grad(potential_fn, z)[0] if z_grads is None else z_grads
44
45 for site_name in r:
46 r[site_name] = r[site_name] + 0.5 * step_size * (-z_grads[site_name]) # r(n+1/2)
47
48 r_grads = _kinetic_grad(inverse_mass_matrix, r)
49 for site_name in z:
50 z[site_name] = z[site_name] + step_size * r_grads[site_name] # z(n+1)
51
52 z_grads, potential_energy = potential_grad(potential_fn, z)
53 for site_name in r:
54 r[site_name] = r[site_name] + 0.5 * step_size * (-z_grads[site_name]) # r(n+1)
55
56 return z, r, z_grads, potential_energy
57
58
59 def potential_grad(potential_fn, z):
60 """
61 Gradient of `potential_fn` w.r.t. parameters z.
62
63 :param potential_fn: python callable that takes in a dictionary of parameters
64 and returns the potential energy.
65 :param dict z: dictionary of parameter values keyed by site name.
66 :return: tuple of `(z_grads, potential_energy)`, where `z_grads` is a dictionary
67 with the same keys as `z` containing gradients and potential_energy is a
68 torch scalar.
69 """
70 z_keys, z_nodes = zip(*z.items())
71 for node in z_nodes:
72 node.requires_grad_(True)
73 potential_energy = potential_fn(z)
74 grads = grad(potential_energy, z_nodes)
75 for node in z_nodes:
76 node.requires_grad_(False)
77 return dict(zip(z_keys, grads)), potential_energy.detach()
78
79
80 def _kinetic_grad(inverse_mass_matrix, r):
81 # XXX consider using list/OrderDict to store z and r
82 # so we don't have to sort the keys
83 r_flat = torch.cat([r[site_name].reshape(-1) for site_name in sorted(r)])
84 if inverse_mass_matrix.dim() == 1:
85 grads_flat = inverse_mass_matrix * r_flat
86 else:
87 grads_flat = inverse_mass_matrix.matmul(r_flat)
88
89 # unpacking
90 grads = {}
91 pos = 0
92 for site_name in sorted(r):
93 next_pos = pos + r[site_name].numel()
94 grads[site_name] = grads_flat[pos:next_pos].reshape(r[site_name].shape)
95 pos = next_pos
96 assert pos == grads_flat.size(0)
97 return grads
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyro/ops/integrator.py b/pyro/ops/integrator.py
--- a/pyro/ops/integrator.py
+++ b/pyro/ops/integrator.py
@@ -70,7 +70,16 @@
z_keys, z_nodes = zip(*z.items())
for node in z_nodes:
node.requires_grad_(True)
- potential_energy = potential_fn(z)
+ try:
+ potential_energy = potential_fn(z)
+ # deal with singular matrices
+ except RuntimeError as e:
+ if "singular U" in str(e):
+ grads = {k: v.new_zeros(v.shape) for k, v in z.items()}
+ return grads, z_nodes[0].new_tensor(float('nan'))
+ else:
+ raise e
+
grads = grad(potential_energy, z_nodes)
for node in z_nodes:
node.requires_grad_(False)
|
{"golden_diff": "diff --git a/pyro/ops/integrator.py b/pyro/ops/integrator.py\n--- a/pyro/ops/integrator.py\n+++ b/pyro/ops/integrator.py\n@@ -70,7 +70,16 @@\n z_keys, z_nodes = zip(*z.items())\n for node in z_nodes:\n node.requires_grad_(True)\n- potential_energy = potential_fn(z)\n+ try:\n+ potential_energy = potential_fn(z)\n+ # deal with singular matrices\n+ except RuntimeError as e:\n+ if \"singular U\" in str(e):\n+ grads = {k: v.new_zeros(v.shape) for k, v in z.items()}\n+ return grads, z_nodes[0].new_tensor(float('nan'))\n+ else:\n+ raise e\n+\n grads = grad(potential_energy, z_nodes)\n for node in z_nodes:\n node.requires_grad_(False)\n", "issue": "Cholesky issue in HMC\nIn PyTorch, if Cholesky issue happens, a Runtime Error will be raised (instead of returning `nan`) regarding singular matrix. So it will be difficult to run MCMC with gaussian process kernels (which involves Cholesky decomposition of covariance matrices). This issue is raised in https://github.com/pyro-ppl/pyro/issues/1863 and #1973. Because `nan` is a valid potential energy in HMC, we should add a mechanism to catch this error.\r\n\r\nMy proposed solution is to add a try, catch exception to potential function so that when Cholesky issue happens, it will return `nan`.\n", "before_files": [{"content": "import torch\nfrom torch.autograd import grad\n\n\ndef velocity_verlet(z, r, potential_fn, inverse_mass_matrix, step_size, num_steps=1, z_grads=None):\n r\"\"\"\n Second order symplectic integrator that uses the velocity verlet algorithm.\n\n :param dict z: dictionary of sample site names and their current values\n (type :class:`~torch.Tensor`).\n :param dict r: dictionary of sample site names and corresponding momenta\n (type :class:`~torch.Tensor`).\n :param callable potential_fn: function that returns potential energy given z\n for each sample site. The negative gradient of the function with respect\n to ``z`` determines the rate of change of the corresponding sites'\n momenta ``r``.\n :param torch.Tensor inverse_mass_matrix: a tensor :math:`M^{-1}` which is used\n to calculate kinetic energy: :math:`E_{kinetic} = \\frac{1}{2}z^T M^{-1} z`.\n Here :math:`M` can be a 1D tensor (diagonal matrix) or a 2D tensor (dense matrix).\n :param float step_size: step size for each time step iteration.\n :param int num_steps: number of discrete time steps over which to integrate.\n :param torch.Tensor z_grads: optional gradients of potential energy at current ``z``.\n :return tuple (z_next, r_next, z_grads, potential_energy): next position and momenta,\n together with the potential energy and its gradient w.r.t. 
``z_next``.\n \"\"\"\n z_next = z.copy()\n r_next = r.copy()\n for _ in range(num_steps):\n z_next, r_next, z_grads, potential_energy = _single_step_verlet(z_next,\n r_next,\n potential_fn,\n inverse_mass_matrix,\n step_size,\n z_grads)\n return z_next, r_next, z_grads, potential_energy\n\n\ndef _single_step_verlet(z, r, potential_fn, inverse_mass_matrix, step_size, z_grads=None):\n r\"\"\"\n Single step velocity verlet that modifies the `z`, `r` dicts in place.\n \"\"\"\n\n z_grads = potential_grad(potential_fn, z)[0] if z_grads is None else z_grads\n\n for site_name in r:\n r[site_name] = r[site_name] + 0.5 * step_size * (-z_grads[site_name]) # r(n+1/2)\n\n r_grads = _kinetic_grad(inverse_mass_matrix, r)\n for site_name in z:\n z[site_name] = z[site_name] + step_size * r_grads[site_name] # z(n+1)\n\n z_grads, potential_energy = potential_grad(potential_fn, z)\n for site_name in r:\n r[site_name] = r[site_name] + 0.5 * step_size * (-z_grads[site_name]) # r(n+1)\n\n return z, r, z_grads, potential_energy\n\n\ndef potential_grad(potential_fn, z):\n \"\"\"\n Gradient of `potential_fn` w.r.t. parameters z.\n\n :param potential_fn: python callable that takes in a dictionary of parameters\n and returns the potential energy.\n :param dict z: dictionary of parameter values keyed by site name.\n :return: tuple of `(z_grads, potential_energy)`, where `z_grads` is a dictionary\n with the same keys as `z` containing gradients and potential_energy is a\n torch scalar.\n \"\"\"\n z_keys, z_nodes = zip(*z.items())\n for node in z_nodes:\n node.requires_grad_(True)\n potential_energy = potential_fn(z)\n grads = grad(potential_energy, z_nodes)\n for node in z_nodes:\n node.requires_grad_(False)\n return dict(zip(z_keys, grads)), potential_energy.detach()\n\n\ndef _kinetic_grad(inverse_mass_matrix, r):\n # XXX consider using list/OrderDict to store z and r\n # so we don't have to sort the keys\n r_flat = torch.cat([r[site_name].reshape(-1) for site_name in sorted(r)])\n if inverse_mass_matrix.dim() == 1:\n grads_flat = inverse_mass_matrix * r_flat\n else:\n grads_flat = inverse_mass_matrix.matmul(r_flat)\n\n # unpacking\n grads = {}\n pos = 0\n for site_name in sorted(r):\n next_pos = pos + r[site_name].numel()\n grads[site_name] = grads_flat[pos:next_pos].reshape(r[site_name].shape)\n pos = next_pos\n assert pos == grads_flat.size(0)\n return grads\n", "path": "pyro/ops/integrator.py"}], "after_files": [{"content": "import torch\nfrom torch.autograd import grad\n\n\ndef velocity_verlet(z, r, potential_fn, inverse_mass_matrix, step_size, num_steps=1, z_grads=None):\n r\"\"\"\n Second order symplectic integrator that uses the velocity verlet algorithm.\n\n :param dict z: dictionary of sample site names and their current values\n (type :class:`~torch.Tensor`).\n :param dict r: dictionary of sample site names and corresponding momenta\n (type :class:`~torch.Tensor`).\n :param callable potential_fn: function that returns potential energy given z\n for each sample site. 
The negative gradient of the function with respect\n to ``z`` determines the rate of change of the corresponding sites'\n momenta ``r``.\n :param torch.Tensor inverse_mass_matrix: a tensor :math:`M^{-1}` which is used\n to calculate kinetic energy: :math:`E_{kinetic} = \\frac{1}{2}z^T M^{-1} z`.\n Here :math:`M` can be a 1D tensor (diagonal matrix) or a 2D tensor (dense matrix).\n :param float step_size: step size for each time step iteration.\n :param int num_steps: number of discrete time steps over which to integrate.\n :param torch.Tensor z_grads: optional gradients of potential energy at current ``z``.\n :return tuple (z_next, r_next, z_grads, potential_energy): next position and momenta,\n together with the potential energy and its gradient w.r.t. ``z_next``.\n \"\"\"\n z_next = z.copy()\n r_next = r.copy()\n for _ in range(num_steps):\n z_next, r_next, z_grads, potential_energy = _single_step_verlet(z_next,\n r_next,\n potential_fn,\n inverse_mass_matrix,\n step_size,\n z_grads)\n return z_next, r_next, z_grads, potential_energy\n\n\ndef _single_step_verlet(z, r, potential_fn, inverse_mass_matrix, step_size, z_grads=None):\n r\"\"\"\n Single step velocity verlet that modifies the `z`, `r` dicts in place.\n \"\"\"\n\n z_grads = potential_grad(potential_fn, z)[0] if z_grads is None else z_grads\n\n for site_name in r:\n r[site_name] = r[site_name] + 0.5 * step_size * (-z_grads[site_name]) # r(n+1/2)\n\n r_grads = _kinetic_grad(inverse_mass_matrix, r)\n for site_name in z:\n z[site_name] = z[site_name] + step_size * r_grads[site_name] # z(n+1)\n\n z_grads, potential_energy = potential_grad(potential_fn, z)\n for site_name in r:\n r[site_name] = r[site_name] + 0.5 * step_size * (-z_grads[site_name]) # r(n+1)\n\n return z, r, z_grads, potential_energy\n\n\ndef potential_grad(potential_fn, z):\n \"\"\"\n Gradient of `potential_fn` w.r.t. parameters z.\n\n :param potential_fn: python callable that takes in a dictionary of parameters\n and returns the potential energy.\n :param dict z: dictionary of parameter values keyed by site name.\n :return: tuple of `(z_grads, potential_energy)`, where `z_grads` is a dictionary\n with the same keys as `z` containing gradients and potential_energy is a\n torch scalar.\n \"\"\"\n z_keys, z_nodes = zip(*z.items())\n for node in z_nodes:\n node.requires_grad_(True)\n try:\n potential_energy = potential_fn(z)\n # deal with singular matrices\n except RuntimeError as e:\n if \"singular U\" in str(e):\n grads = {k: v.new_zeros(v.shape) for k, v in z.items()}\n return grads, z_nodes[0].new_tensor(float('nan'))\n else:\n raise e\n\n grads = grad(potential_energy, z_nodes)\n for node in z_nodes:\n node.requires_grad_(False)\n return dict(zip(z_keys, grads)), potential_energy.detach()\n\n\ndef _kinetic_grad(inverse_mass_matrix, r):\n # XXX consider using list/OrderDict to store z and r\n # so we don't have to sort the keys\n r_flat = torch.cat([r[site_name].reshape(-1) for site_name in sorted(r)])\n if inverse_mass_matrix.dim() == 1:\n grads_flat = inverse_mass_matrix * r_flat\n else:\n grads_flat = inverse_mass_matrix.matmul(r_flat)\n\n # unpacking\n grads = {}\n pos = 0\n for site_name in sorted(r):\n next_pos = pos + r[site_name].numel()\n grads[site_name] = grads_flat[pos:next_pos].reshape(r[site_name].shape)\n pos = next_pos\n assert pos == grads_flat.size(0)\n return grads\n", "path": "pyro/ops/integrator.py"}]}
| 1,635 | 200 |
gh_patches_debug_12638
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1850
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Migrate to JSONschema Draft v7
Currently we use Draft 4. The library that we use supports Draft 7:
https://json-schema.org/specification.html
The idea would be:
* to use `Draft7Validator` instead of `Draft4Validator` (nothing more...)
* list main changes and add links in CHANGELOG (see spec release notes)
* check compatibility with kinto-admin
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/schema_validation.py`
Content:
```
1 import colander
2 from jsonschema import Draft4Validator, ValidationError, SchemaError, RefResolutionError, validate
3 from pyramid.settings import asbool
4
5 from kinto.core import utils
6 from kinto.core.errors import raise_invalid
7 from kinto.views import object_exists_or_404
8
9
10 class JSONSchemaMapping(colander.SchemaNode):
11 def schema_type(self, **kw):
12 return colander.Mapping(unknown="preserve")
13
14 def deserialize(self, cstruct=colander.null):
15 # Start by deserializing a simple mapping.
16 validated = super().deserialize(cstruct)
17
18 # In case it is optional in parent schema.
19 if not validated or validated in (colander.null, colander.drop):
20 return validated
21 try:
22 check_schema(validated)
23 except ValidationError as e:
24 self.raise_invalid(e.message)
25 return validated
26
27
28 def check_schema(data):
29 try:
30 Draft4Validator.check_schema(data)
31 except SchemaError as e:
32 message = e.path.pop() + e.message
33 raise ValidationError(message)
34
35
36 def validate_schema(data, schema, ignore_fields=[]):
37 required_fields = [f for f in schema.get("required", []) if f not in ignore_fields]
38 # jsonschema doesn't accept 'required': [] yet.
39 # See https://github.com/Julian/jsonschema/issues/337.
40 # In the meantime, strip out 'required' if no other fields are required.
41 if required_fields:
42 schema = {**schema, "required": required_fields}
43 else:
44 schema = {f: v for f, v in schema.items() if f != "required"}
45
46 data = {f: v for f, v in data.items() if f not in ignore_fields}
47
48 try:
49 validate(data, schema)
50 except ValidationError as e:
51 if e.path:
52 field = e.path[-1]
53 elif e.validator_value:
54 field = e.validator_value[-1]
55 else:
56 field = e.schema_path[-1]
57 e.field = field
58 raise e
59 # Raise an error here if a reference in the schema doesn't resolve.
60 # jsonschema doesn't provide schema validation checking upon creation yet,
61 # it must be validated against data.
62 # See https://github.com/Julian/jsonschema/issues/399
63 # For future support https://github.com/Julian/jsonschema/issues/346.
64 except RefResolutionError as e:
65 raise e
66
67
68 def validate_from_bucket_schema_or_400(data, resource_name, request, ignore_fields=[]):
69 """Lookup in the parent objects if a schema was defined for this resource.
70
71 If the schema validation feature is enabled, if a schema is/are defined, and if the
72 data does not validate it/them, then it raises a 400 exception.
73 """
74 settings = request.registry.settings
75 schema_validation = "experimental_collection_schema_validation"
76 # If disabled from settings, do nothing.
77 if not asbool(settings.get(schema_validation)):
78 return
79
80 bucket_id = request.matchdict["bucket_id"]
81 bucket_uri = utils.instance_uri(request, "bucket", id=bucket_id)
82 buckets = request.bound_data.setdefault("buckets", {})
83 if bucket_uri not in buckets:
84 # Unknown yet, fetch from storage.
85 bucket = object_exists_or_404(
86 request, collection_id="bucket", parent_id="", object_id=bucket_id
87 )
88 buckets[bucket_uri] = bucket
89
90 # Let's see if the bucket defines a schema for this resource.
91 metadata_field = "{}:schema".format(resource_name)
92 bucket = buckets[bucket_uri]
93 if metadata_field not in bucket:
94 return
95
96 # Validate or fail with 400.
97 schema = bucket[metadata_field]
98 try:
99 validate_schema(data, schema, ignore_fields=ignore_fields)
100 except ValidationError as e:
101 raise_invalid(request, name=e.field, description=e.message)
102 except RefResolutionError as e:
103 raise_invalid(request, name="schema", description=str(e))
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/schema_validation.py b/kinto/schema_validation.py
--- a/kinto/schema_validation.py
+++ b/kinto/schema_validation.py
@@ -1,5 +1,5 @@
import colander
-from jsonschema import Draft4Validator, ValidationError, SchemaError, RefResolutionError, validate
+from jsonschema import Draft7Validator, ValidationError, SchemaError, RefResolutionError, validate
from pyramid.settings import asbool
from kinto.core import utils
@@ -27,7 +27,7 @@
def check_schema(data):
try:
- Draft4Validator.check_schema(data)
+ Draft7Validator.check_schema(data)
except SchemaError as e:
message = e.path.pop() + e.message
raise ValidationError(message)
|
{"golden_diff": "diff --git a/kinto/schema_validation.py b/kinto/schema_validation.py\n--- a/kinto/schema_validation.py\n+++ b/kinto/schema_validation.py\n@@ -1,5 +1,5 @@\n import colander\n-from jsonschema import Draft4Validator, ValidationError, SchemaError, RefResolutionError, validate\n+from jsonschema import Draft7Validator, ValidationError, SchemaError, RefResolutionError, validate\n from pyramid.settings import asbool\n \n from kinto.core import utils\n@@ -27,7 +27,7 @@\n \n def check_schema(data):\n try:\n- Draft4Validator.check_schema(data)\n+ Draft7Validator.check_schema(data)\n except SchemaError as e:\n message = e.path.pop() + e.message\n raise ValidationError(message)\n", "issue": "Migrate to JSONschema Draft v7\nCurrently we use Draft4. The library that we use supports Draft7\r\n\r\nhttps://json-schema.org/specification.html\r\n\r\nThe idea would be:\r\n* to use `Draft7Validator` instead of `Draft4Validator` (nothing more...)\r\n* list main changes and add links in CHANGELOG (see spec release notes)\r\n* check compatibility with kinto-admin\r\n\n", "before_files": [{"content": "import colander\nfrom jsonschema import Draft4Validator, ValidationError, SchemaError, RefResolutionError, validate\nfrom pyramid.settings import asbool\n\nfrom kinto.core import utils\nfrom kinto.core.errors import raise_invalid\nfrom kinto.views import object_exists_or_404\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown=\"preserve\")\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super().deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n try:\n check_schema(validated)\n except ValidationError as e:\n self.raise_invalid(e.message)\n return validated\n\n\ndef check_schema(data):\n try:\n Draft4Validator.check_schema(data)\n except SchemaError as e:\n message = e.path.pop() + e.message\n raise ValidationError(message)\n\n\ndef validate_schema(data, schema, ignore_fields=[]):\n required_fields = [f for f in schema.get(\"required\", []) if f not in ignore_fields]\n # jsonschema doesn't accept 'required': [] yet.\n # See https://github.com/Julian/jsonschema/issues/337.\n # In the meantime, strip out 'required' if no other fields are required.\n if required_fields:\n schema = {**schema, \"required\": required_fields}\n else:\n schema = {f: v for f, v in schema.items() if f != \"required\"}\n\n data = {f: v for f, v in data.items() if f not in ignore_fields}\n\n try:\n validate(data, schema)\n except ValidationError as e:\n if e.path:\n field = e.path[-1]\n elif e.validator_value:\n field = e.validator_value[-1]\n else:\n field = e.schema_path[-1]\n e.field = field\n raise e\n # Raise an error here if a reference in the schema doesn't resolve.\n # jsonschema doesn't provide schema validation checking upon creation yet,\n # it must be validated against data.\n # See https://github.com/Julian/jsonschema/issues/399\n # For future support https://github.com/Julian/jsonschema/issues/346.\n except RefResolutionError as e:\n raise e\n\n\ndef validate_from_bucket_schema_or_400(data, resource_name, request, ignore_fields=[]):\n \"\"\"Lookup in the parent objects if a schema was defined for this resource.\n\n If the schema validation feature is enabled, if a schema is/are defined, and if the\n data does not validate it/them, then it raises a 400 exception.\n \"\"\"\n settings = 
request.registry.settings\n schema_validation = \"experimental_collection_schema_validation\"\n # If disabled from settings, do nothing.\n if not asbool(settings.get(schema_validation)):\n return\n\n bucket_id = request.matchdict[\"bucket_id\"]\n bucket_uri = utils.instance_uri(request, \"bucket\", id=bucket_id)\n buckets = request.bound_data.setdefault(\"buckets\", {})\n if bucket_uri not in buckets:\n # Unknown yet, fetch from storage.\n bucket = object_exists_or_404(\n request, collection_id=\"bucket\", parent_id=\"\", object_id=bucket_id\n )\n buckets[bucket_uri] = bucket\n\n # Let's see if the bucket defines a schema for this resource.\n metadata_field = \"{}:schema\".format(resource_name)\n bucket = buckets[bucket_uri]\n if metadata_field not in bucket:\n return\n\n # Validate or fail with 400.\n schema = bucket[metadata_field]\n try:\n validate_schema(data, schema, ignore_fields=ignore_fields)\n except ValidationError as e:\n raise_invalid(request, name=e.field, description=e.message)\n except RefResolutionError as e:\n raise_invalid(request, name=\"schema\", description=str(e))\n", "path": "kinto/schema_validation.py"}], "after_files": [{"content": "import colander\nfrom jsonschema import Draft7Validator, ValidationError, SchemaError, RefResolutionError, validate\nfrom pyramid.settings import asbool\n\nfrom kinto.core import utils\nfrom kinto.core.errors import raise_invalid\nfrom kinto.views import object_exists_or_404\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown=\"preserve\")\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super().deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n try:\n check_schema(validated)\n except ValidationError as e:\n self.raise_invalid(e.message)\n return validated\n\n\ndef check_schema(data):\n try:\n Draft7Validator.check_schema(data)\n except SchemaError as e:\n message = e.path.pop() + e.message\n raise ValidationError(message)\n\n\ndef validate_schema(data, schema, ignore_fields=[]):\n required_fields = [f for f in schema.get(\"required\", []) if f not in ignore_fields]\n # jsonschema doesn't accept 'required': [] yet.\n # See https://github.com/Julian/jsonschema/issues/337.\n # In the meantime, strip out 'required' if no other fields are required.\n if required_fields:\n schema = {**schema, \"required\": required_fields}\n else:\n schema = {f: v for f, v in schema.items() if f != \"required\"}\n\n data = {f: v for f, v in data.items() if f not in ignore_fields}\n\n try:\n validate(data, schema)\n except ValidationError as e:\n if e.path:\n field = e.path[-1]\n elif e.validator_value:\n field = e.validator_value[-1]\n else:\n field = e.schema_path[-1]\n e.field = field\n raise e\n # Raise an error here if a reference in the schema doesn't resolve.\n # jsonschema doesn't provide schema validation checking upon creation yet,\n # it must be validated against data.\n # See https://github.com/Julian/jsonschema/issues/399\n # For future support https://github.com/Julian/jsonschema/issues/346.\n except RefResolutionError as e:\n raise e\n\n\ndef validate_from_bucket_schema_or_400(data, resource_name, request, ignore_fields=[]):\n \"\"\"Lookup in the parent objects if a schema was defined for this resource.\n\n If the schema validation feature is enabled, if a schema is/are defined, and if the\n data does not validate 
it/them, then it raises a 400 exception.\n \"\"\"\n settings = request.registry.settings\n schema_validation = \"experimental_collection_schema_validation\"\n # If disabled from settings, do nothing.\n if not asbool(settings.get(schema_validation)):\n return\n\n bucket_id = request.matchdict[\"bucket_id\"]\n bucket_uri = utils.instance_uri(request, \"bucket\", id=bucket_id)\n buckets = request.bound_data.setdefault(\"buckets\", {})\n if bucket_uri not in buckets:\n # Unknown yet, fetch from storage.\n bucket = object_exists_or_404(\n request, collection_id=\"bucket\", parent_id=\"\", object_id=bucket_id\n )\n buckets[bucket_uri] = bucket\n\n # Let's see if the bucket defines a schema for this resource.\n metadata_field = \"{}:schema\".format(resource_name)\n bucket = buckets[bucket_uri]\n if metadata_field not in bucket:\n return\n\n # Validate or fail with 400.\n schema = bucket[metadata_field]\n try:\n validate_schema(data, schema, ignore_fields=ignore_fields)\n except ValidationError as e:\n raise_invalid(request, name=e.field, description=e.message)\n except RefResolutionError as e:\n raise_invalid(request, name=\"schema\", description=str(e))\n", "path": "kinto/schema_validation.py"}]}
| 1,414 | 161 |
gh_patches_debug_12854
|
rasdani/github-patches
|
git_diff
|
librosa__librosa-1457
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update show_versions to match current dependencies
**Describe the bug**
Looks like we forgot to revise the list of modules checked by `show_versions()` in the 0.9.0 release.
This isn't a major problem, but we should fix it for 0.9.2 and keep it as part of the release audit sequence going forward.
In general, `show_versions()` should track the dependencies listed in setup.cfg: https://github.com/librosa/librosa/blob/ef482b824c609222abb265357f7a79b11d174dd2/setup.cfg#L45-L84
Update show_versions to match current dependencies
**Describe the bug**
Looks like we forgot to revise the list of modules checked by `show_versions()` in the 0.9.0 release.
This isn't a major problem, but we should fix it for 0.9.2 and keep it as part of the release audit sequence going forward.
In general, `show_versions()` should track the dependencies listed in setup.cfg: https://github.com/librosa/librosa/blob/ef482b824c609222abb265357f7a79b11d174dd2/setup.cfg#L45-L84
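A minimal sketch of how that audit could be automated — illustrative only, not librosa code, and it would still need a map from PyPI names (e.g. scikit-learn) to import names (sklearn) before the result could be fed to `__get_mod_version`:

```python
import configparser
import re

def deps_from_setup_cfg(path="setup.cfg"):
    # Read dependency names straight from setup.cfg so the audited list cannot drift.
    parser = configparser.ConfigParser()
    parser.read(path)
    raw = parser.get("options", "install_requires", fallback="")
    # Drop version specifiers such as ">=0.10,<1.0" and environment markers.
    return [re.split(r"[<>=!~\s;]", line.strip())[0]
            for line in raw.splitlines() if line.strip()]
```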
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `librosa/version.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """Version info"""
4
5 import sys
6 import importlib
7
8 short_version = "0.9"
9 version = "0.9.1"
10
11
12 def __get_mod_version(modname):
13
14 try:
15 if modname in sys.modules:
16 mod = sys.modules[modname]
17 else:
18 mod = importlib.import_module(modname)
19 try:
20 return mod.__version__
21 except AttributeError:
22 return "installed, no version number available"
23
24 except ImportError:
25 return None
26
27
28 def show_versions():
29 """Return the version information for all librosa dependencies."""
30
31 core_deps = [
32 "audioread",
33 "numpy",
34 "scipy",
35 "sklearn",
36 "joblib",
37 "decorator",
38 "soundfile",
39 "resampy",
40 "numba",
41 ]
42
43 extra_deps = [
44 "numpydoc",
45 "sphinx",
46 "sphinx_rtd_theme",
47 "sphinxcontrib.versioning",
48 "sphinx-gallery",
49 "pytest",
50 "pytest-mpl",
51 "pytest-cov",
52 "matplotlib",
53 "presets",
54 ]
55
56 print("INSTALLED VERSIONS")
57 print("------------------")
58 print("python: {}\n".format(sys.version))
59 print("librosa: {}\n".format(version))
60 for dep in core_deps:
61 print("{}: {}".format(dep, __get_mod_version(dep)))
62 print("")
63 for dep in extra_deps:
64 print("{}: {}".format(dep, __get_mod_version(dep)))
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/librosa/version.py b/librosa/version.py
--- a/librosa/version.py
+++ b/librosa/version.py
@@ -38,18 +38,26 @@
"soundfile",
"resampy",
"numba",
+ "pooch",
+ "packaging"
]
extra_deps = [
"numpydoc",
"sphinx",
"sphinx_rtd_theme",
- "sphinxcontrib.versioning",
- "sphinx-gallery",
+ "sphinx_multiversion",
+ "sphinx_gallery",
+ "mir_eval",
+ "ipython",
+ "sphinxcontrib-svg2pdfconverter",
"pytest",
"pytest-mpl",
"pytest-cov",
"matplotlib",
+ "samplerate",
+ "soxr",
+ "contextlib2",
"presets",
]
|
{"golden_diff": "diff --git a/librosa/version.py b/librosa/version.py\n--- a/librosa/version.py\n+++ b/librosa/version.py\n@@ -38,18 +38,26 @@\n \"soundfile\",\n \"resampy\",\n \"numba\",\n+ \"pooch\",\n+ \"packaging\"\n ]\n \n extra_deps = [\n \"numpydoc\",\n \"sphinx\",\n \"sphinx_rtd_theme\",\n- \"sphinxcontrib.versioning\",\n- \"sphinx-gallery\",\n+ \"sphinx_multiversion\",\n+ \"sphinx_gallery\",\n+ \"mir_eval\",\n+ \"ipython\",\n+ \"sphinxcontrib-svg2pdfconverter\",\n \"pytest\",\n \"pytest-mpl\",\n \"pytest-cov\",\n \"matplotlib\",\n+ \"samplerate\",\n+ \"soxr\",\n+ \"contextlib2\",\n \"presets\",\n ]\n", "issue": "Update show_versions to match current dependencies\n**Describe the bug**\r\n\r\nLooks like we forgot to revise the list of modules checked by `show_versions()` in the 0.9.0 release.\r\nThis isn't a major problem, but we should fix it for 0.9.2 and keep it as part of the release audit sequence going forward.\r\n\r\nIn general, `show_versions()` should track the dependencies listed in setup.cfg: https://github.com/librosa/librosa/blob/ef482b824c609222abb265357f7a79b11d174dd2/setup.cfg#L45-L84\nUpdate show_versions to match current dependencies\n**Describe the bug**\r\n\r\nLooks like we forgot to revise the list of modules checked by `show_versions()` in the 0.9.0 release.\r\nThis isn't a major problem, but we should fix it for 0.9.2 and keep it as part of the release audit sequence going forward.\r\n\r\nIn general, `show_versions()` should track the dependencies listed in setup.cfg: https://github.com/librosa/librosa/blob/ef482b824c609222abb265357f7a79b11d174dd2/setup.cfg#L45-L84\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"Version info\"\"\"\n\nimport sys\nimport importlib\n\nshort_version = \"0.9\"\nversion = \"0.9.1\"\n\n\ndef __get_mod_version(modname):\n\n try:\n if modname in sys.modules:\n mod = sys.modules[modname]\n else:\n mod = importlib.import_module(modname)\n try:\n return mod.__version__\n except AttributeError:\n return \"installed, no version number available\"\n\n except ImportError:\n return None\n\n\ndef show_versions():\n \"\"\"Return the version information for all librosa dependencies.\"\"\"\n\n core_deps = [\n \"audioread\",\n \"numpy\",\n \"scipy\",\n \"sklearn\",\n \"joblib\",\n \"decorator\",\n \"soundfile\",\n \"resampy\",\n \"numba\",\n ]\n\n extra_deps = [\n \"numpydoc\",\n \"sphinx\",\n \"sphinx_rtd_theme\",\n \"sphinxcontrib.versioning\",\n \"sphinx-gallery\",\n \"pytest\",\n \"pytest-mpl\",\n \"pytest-cov\",\n \"matplotlib\",\n \"presets\",\n ]\n\n print(\"INSTALLED VERSIONS\")\n print(\"------------------\")\n print(\"python: {}\\n\".format(sys.version))\n print(\"librosa: {}\\n\".format(version))\n for dep in core_deps:\n print(\"{}: {}\".format(dep, __get_mod_version(dep)))\n print(\"\")\n for dep in extra_deps:\n print(\"{}: {}\".format(dep, __get_mod_version(dep)))\n", "path": "librosa/version.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"Version info\"\"\"\n\nimport sys\nimport importlib\n\nshort_version = \"0.9\"\nversion = \"0.9.1\"\n\n\ndef __get_mod_version(modname):\n\n try:\n if modname in sys.modules:\n mod = sys.modules[modname]\n else:\n mod = importlib.import_module(modname)\n try:\n return mod.__version__\n except AttributeError:\n return \"installed, no version number available\"\n\n except ImportError:\n return None\n\n\ndef show_versions():\n \"\"\"Return the version information for all librosa dependencies.\"\"\"\n\n core_deps = [\n 
\"audioread\",\n \"numpy\",\n \"scipy\",\n \"sklearn\",\n \"joblib\",\n \"decorator\",\n \"soundfile\",\n \"resampy\",\n \"numba\",\n \"pooch\",\n \"packaging\"\n ]\n\n extra_deps = [\n \"numpydoc\",\n \"sphinx\",\n \"sphinx_rtd_theme\",\n \"sphinx_multiversion\",\n \"sphinx_gallery\",\n \"mir_eval\",\n \"ipython\",\n \"sphinxcontrib-svg2pdfconverter\",\n \"pytest\",\n \"pytest-mpl\",\n \"pytest-cov\",\n \"matplotlib\",\n \"samplerate\",\n \"soxr\",\n \"contextlib2\",\n \"presets\",\n ]\n\n print(\"INSTALLED VERSIONS\")\n print(\"------------------\")\n print(\"python: {}\\n\".format(sys.version))\n print(\"librosa: {}\\n\".format(version))\n for dep in core_deps:\n print(\"{}: {}\".format(dep, __get_mod_version(dep)))\n print(\"\")\n for dep in extra_deps:\n print(\"{}: {}\".format(dep, __get_mod_version(dep)))\n", "path": "librosa/version.py"}]}
| 1,014 | 203 |
gh_patches_debug_10338 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-449 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Region list menu points to api
### Describe the Bug
<!-- A clear and concise description of what the bug is. -->
The menu item for regions points to the region api and not the region list.
### Steps to Reproduce
1. Go to 'Regions'
### Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
The region list (`/regions/`) opens.
### Actual Behavior
<!-- A clear and concise description of what actually happened. -->
The region api (`/api/regions/`) opens.
### Additional Information
<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. -->
This is because both urls have the name 'regions' and the last matching item in urlpatterns determines the actual url.
Region list menu points to api
### Describe the Bug
<!-- A clear and concise description of what the bug is. -->
The menu item for regions points to the region api and not the region list.
### Steps to Reproduce
1. Go to 'Regions'
### Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
The region list (`/regions/`) opens.
### Actual Behavior
<!-- A clear and concise description of what actually happened. -->
The region api (`/api/regions/`) opens.
### Additional Information
<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. -->
This is because both urls have the name 'regions' and the last matching item in urlpatterns determines the actual url.
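A self-contained sketch of the name collision (view and module names are assumed for illustration):

```python
# sketch_urls.py -- illustrative only, not the project's actual URL configuration
from django.conf.urls import include, url
from django.http import HttpResponse

def region_list_view(request):
    return HttpResponse("region list")

urlpatterns = [
    url(r'^regions/$', region_list_view, name='regions'),  # CMS region list
    url(r'^api/', include('api.urls')),                    # api/urls.py also registers name='regions'
]

# reverse('regions') -- and therefore the menu's {% url 'regions' %} -- now
# resolves to the API route, because the name registered last shadows the
# earlier one, exactly as described above.
```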
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/api/urls.py`
Content:
```
1 '''
2 Expansion of API-Endpoints for the CMS
3 '''
4 from django.conf.urls import include, url
5
6 from .v3.feedback import (
7 page_feedback,
8 search_result_feedback,
9 region_feedback,
10 offer_list_feedback,
11 event_list_feedback)
12 from .v3.languages import languages
13 from .v3.pages import pages
14 from .v3.push_notifications import sent_push_notifications
15 from .v3.regions import regions, liveregions, hiddenregions, pushnew
16 from .v3.offers import offers
17 from .v3.single_page import single_page
18
19 urlpatterns = [
20 url(r'regions/$', regions, name='regions'),
21 url(r'regions/live/$', liveregions, name='liveregions'),
22 url(r'regions/hidden/$', hiddenregions, name='hiddenregions'),
23 url(r'regions/pushnew/$', pushnew, name='pushnew'),
24 url(r'(?P<region_slug>[-\w]+)/', include([
25 url(r'languages/$', languages),
26 url(r'offers/$', offers),
27 url(r'(?P<lan_code>[-\w]+)/sent_push_notifications/$', sent_push_notifications),
28 url(r'(?P<languages>[-\w]+)/feedback/$', page_feedback.feedback),
29 url(r'(?P<language_code>[-\w]+)/feedback/categories$', region_feedback.region_feedback),
30 url(r'(?P<language_code>[-\w]+)/feedback/search$', search_result_feedback.search_result_feedback),
31 url(r'(?P<language_code>[-\w]+)/feedback/extras$', offer_list_feedback.offer_list_feedback),
32 url(r'(?P<language_code>[-\w]+)/feedback/events$', event_list_feedback.event_list_feedback),
33 url(r'(?P<language_code>[-\w]+)/pages/$', pages),
34 url(r'(?P<language_code>[-\w]+)/offers/$', offers),
35 url(r'(?P<language_code>[-\w]+)/page/$', single_page),
36 ])),
37 ]
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/api/urls.py b/src/api/urls.py
--- a/src/api/urls.py
+++ b/src/api/urls.py
@@ -17,10 +17,10 @@
from .v3.single_page import single_page
urlpatterns = [
- url(r'regions/$', regions, name='regions'),
- url(r'regions/live/$', liveregions, name='liveregions'),
- url(r'regions/hidden/$', hiddenregions, name='hiddenregions'),
- url(r'regions/pushnew/$', pushnew, name='pushnew'),
+ url(r'regions/$', regions),
+ url(r'regions/live/$', liveregions),
+ url(r'regions/hidden/$', hiddenregions),
+ url(r'regions/pushnew/$', pushnew),
url(r'(?P<region_slug>[-\w]+)/', include([
url(r'languages/$', languages),
url(r'offers/$', offers),
|
{"golden_diff": "diff --git a/src/api/urls.py b/src/api/urls.py\n--- a/src/api/urls.py\n+++ b/src/api/urls.py\n@@ -17,10 +17,10 @@\n from .v3.single_page import single_page\n \n urlpatterns = [\n- url(r'regions/$', regions, name='regions'),\n- url(r'regions/live/$', liveregions, name='liveregions'),\n- url(r'regions/hidden/$', hiddenregions, name='hiddenregions'),\n- url(r'regions/pushnew/$', pushnew, name='pushnew'),\n+ url(r'regions/$', regions),\n+ url(r'regions/live/$', liveregions),\n+ url(r'regions/hidden/$', hiddenregions),\n+ url(r'regions/pushnew/$', pushnew),\n url(r'(?P<region_slug>[-\\w]+)/', include([\n url(r'languages/$', languages),\n url(r'offers/$', offers),\n", "issue": "Region list menu points to api\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nThe menu item for regions points to the region api and not the region list.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Go to 'Regions'\r\n\r\n### Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nThe region list (`/regions/`) opens.\r\n\r\n### Actual Behavior\r\n<!-- A clear and concise description of what actually happened. -->\r\nThe region api (`/api/regions/`) opens.\r\n\r\n### Additional Information\r\n<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. -->\r\nThis is because both urls have the name 'regions' and the last matching item in urlpatterns determines the actual url.\r\n\nRegion list menu points to api\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nThe menu item for regions points to the region api and not the region list.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Go to 'Regions'\r\n\r\n### Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nThe region list (`/regions/`) opens.\r\n\r\n### Actual Behavior\r\n<!-- A clear and concise description of what actually happened. -->\r\nThe region api (`/api/regions/`) opens.\r\n\r\n### Additional Information\r\n<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. 
-->\r\nThis is because both urls have the name 'regions' and the last matching item in urlpatterns determines the actual url.\r\n\n", "before_files": [{"content": "'''\nExpansion of API-Endpoints for the CMS\n'''\nfrom django.conf.urls import include, url\n\nfrom .v3.feedback import (\n page_feedback,\n search_result_feedback,\n region_feedback,\n offer_list_feedback,\n event_list_feedback)\nfrom .v3.languages import languages\nfrom .v3.pages import pages\nfrom .v3.push_notifications import sent_push_notifications\nfrom .v3.regions import regions, liveregions, hiddenregions, pushnew\nfrom .v3.offers import offers\nfrom .v3.single_page import single_page\n\nurlpatterns = [\n url(r'regions/$', regions, name='regions'),\n url(r'regions/live/$', liveregions, name='liveregions'),\n url(r'regions/hidden/$', hiddenregions, name='hiddenregions'),\n url(r'regions/pushnew/$', pushnew, name='pushnew'),\n url(r'(?P<region_slug>[-\\w]+)/', include([\n url(r'languages/$', languages),\n url(r'offers/$', offers),\n url(r'(?P<lan_code>[-\\w]+)/sent_push_notifications/$', sent_push_notifications),\n url(r'(?P<languages>[-\\w]+)/feedback/$', page_feedback.feedback),\n url(r'(?P<language_code>[-\\w]+)/feedback/categories$', region_feedback.region_feedback),\n url(r'(?P<language_code>[-\\w]+)/feedback/search$', search_result_feedback.search_result_feedback),\n url(r'(?P<language_code>[-\\w]+)/feedback/extras$', offer_list_feedback.offer_list_feedback),\n url(r'(?P<language_code>[-\\w]+)/feedback/events$', event_list_feedback.event_list_feedback),\n url(r'(?P<language_code>[-\\w]+)/pages/$', pages),\n url(r'(?P<language_code>[-\\w]+)/offers/$', offers),\n url(r'(?P<language_code>[-\\w]+)/page/$', single_page),\n ])),\n]\n", "path": "src/api/urls.py"}], "after_files": [{"content": "'''\nExpansion of API-Endpoints for the CMS\n'''\nfrom django.conf.urls import include, url\n\nfrom .v3.feedback import (\n page_feedback,\n search_result_feedback,\n region_feedback,\n offer_list_feedback,\n event_list_feedback)\nfrom .v3.languages import languages\nfrom .v3.pages import pages\nfrom .v3.push_notifications import sent_push_notifications\nfrom .v3.regions import regions, liveregions, hiddenregions, pushnew\nfrom .v3.offers import offers\nfrom .v3.single_page import single_page\n\nurlpatterns = [\n url(r'regions/$', regions),\n url(r'regions/live/$', liveregions),\n url(r'regions/hidden/$', hiddenregions),\n url(r'regions/pushnew/$', pushnew),\n url(r'(?P<region_slug>[-\\w]+)/', include([\n url(r'languages/$', languages),\n url(r'offers/$', offers),\n url(r'(?P<lan_code>[-\\w]+)/sent_push_notifications/$', sent_push_notifications),\n url(r'(?P<languages>[-\\w]+)/feedback/$', page_feedback.feedback),\n url(r'(?P<language_code>[-\\w]+)/feedback/categories$', region_feedback.region_feedback),\n url(r'(?P<language_code>[-\\w]+)/feedback/search$', search_result_feedback.search_result_feedback),\n url(r'(?P<language_code>[-\\w]+)/feedback/extras$', offer_list_feedback.offer_list_feedback),\n url(r'(?P<language_code>[-\\w]+)/feedback/events$', event_list_feedback.event_list_feedback),\n url(r'(?P<language_code>[-\\w]+)/pages/$', pages),\n url(r'(?P<language_code>[-\\w]+)/offers/$', offers),\n url(r'(?P<language_code>[-\\w]+)/page/$', single_page),\n ])),\n]\n", "path": "src/api/urls.py"}]}
| 1,059 | 212 |
gh_patches_debug_2228 | rasdani/github-patches | git_diff | rucio__rucio-2492 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue in client_extract download
Motivation
----------
Modification
------------
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/rucio/vcsversion.py`
Content:
```
1
2 '''
3 This file is automatically generated; Do not edit it. :)
4 '''
5 VERSION_INFO = {
6 'final': True,
7 'version': '1.19.5',
8 'branch_nick': 'patch-0-1_19_5_preparation',
9 'revision_id': '9e14d56c9d958e5348b19ddc7e5fa45d4a778807',
10 'revno': 7951
11 }
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/rucio/vcsversion.py b/lib/rucio/vcsversion.py
--- a/lib/rucio/vcsversion.py
+++ b/lib/rucio/vcsversion.py
@@ -4,8 +4,8 @@
'''
VERSION_INFO = {
'final': True,
- 'version': '1.19.5',
- 'branch_nick': 'patch-0-1_19_5_preparation',
- 'revision_id': '9e14d56c9d958e5348b19ddc7e5fa45d4a778807',
- 'revno': 7951
+ 'version': '1.19.6',
+ 'branch_nick': 'patch-0-Release__Rucio_1_19_6_preparation',
+ 'revision_id': 'a8c639a7a70a9e605ad90535d28d2eab04d89cce',
+ 'revno': 7992
}
|
{"golden_diff": "diff --git a/lib/rucio/vcsversion.py b/lib/rucio/vcsversion.py\n--- a/lib/rucio/vcsversion.py\n+++ b/lib/rucio/vcsversion.py\n@@ -4,8 +4,8 @@\n '''\n VERSION_INFO = {\n 'final': True,\n- 'version': '1.19.5',\n- 'branch_nick': 'patch-0-1_19_5_preparation',\n- 'revision_id': '9e14d56c9d958e5348b19ddc7e5fa45d4a778807',\n- 'revno': 7951\n+ 'version': '1.19.6',\n+ 'branch_nick': 'patch-0-Release__Rucio_1_19_6_preparation',\n+ 'revision_id': 'a8c639a7a70a9e605ad90535d28d2eab04d89cce',\n+ 'revno': 7992\n }\n", "issue": "Issue in client_extract download \nMotivation\r\n----------\r\n\r\n\r\n\r\nModification\r\n------------\r\n\r\n\r\n\n", "before_files": [{"content": "\n'''\nThis file is automatically generated; Do not edit it. :)\n'''\nVERSION_INFO = {\n 'final': True,\n 'version': '1.19.5',\n 'branch_nick': 'patch-0-1_19_5_preparation',\n 'revision_id': '9e14d56c9d958e5348b19ddc7e5fa45d4a778807',\n 'revno': 7951\n}\n", "path": "lib/rucio/vcsversion.py"}], "after_files": [{"content": "\n'''\nThis file is automatically generated; Do not edit it. :)\n'''\nVERSION_INFO = {\n 'final': True,\n 'version': '1.19.6',\n 'branch_nick': 'patch-0-Release__Rucio_1_19_6_preparation',\n 'revision_id': 'a8c639a7a70a9e605ad90535d28d2eab04d89cce',\n 'revno': 7992\n}\n", "path": "lib/rucio/vcsversion.py"}]}
| 407 | 252 |
gh_patches_debug_16416 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1094 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docker language fails on Windows - os.getuid()
[`docker_cmd`](https://github.com/pre-commit/pre-commit/blob/0cf2638b3b6a2cfde89cb7013e61b2d4a3e12875/pre_commit/languages/docker.py#L80) in languages/docker.py uses `os.getuid()` and `os.getgid()` that don't exist in Python on Windows.
A hook that uses Docker on Windows fails to execute with
```
An unexpected error has occurred: AttributeError: module 'os' has no attribute 'getuid'
Traceback (most recent call last):
File "C:\Python36\lib\site-packages\pre_commit\error_handler.py", line 46, in error_handler
yield
File "C:\Python36\lib\site-packages\pre_commit\main.py", line 294, in main
return run(args.config, store, args)
File "C:\Python36\lib\site-packages\pre_commit\commands\run.py", line 293, in run
return _run_hooks(config, hooks, args, environ)
File "C:\Python36\lib\site-packages\pre_commit\commands\run.py", line 214, in _run_hooks
retval |= _run_single_hook(classifier, hook, args, skips, cols)
File "C:\Python36\lib\site-packages\pre_commit\commands\run.py", line 124, in _run_single_hook
tuple(filenames) if hook.pass_filenames else (),
File "C:\Python36\lib\site-packages\pre_commit\repository.py", line 100, in run
return lang.run_hook(self, file_args)
File "C:\Python36\lib\site-packages\pre_commit\languages\docker.py", line 99, in run_hook
cmd = docker_cmd() + entry_tag + cmd_rest
File "C:\Python36\lib\site-packages\pre_commit\languages\docker.py", line 80, in docker_cmd
'-u', '{}:{}'.format(os.getuid(), os.getgid()),
AttributeError: module 'os' has no attribute 'getuid'
```
The same hook works fine in Windows Subsystem for Linux.
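The platform gap is easy to confirm with plain Python, independent of pre-commit:

```python
import os

# os.getuid/os.getgid are POSIX-only. On native Windows both attributes are
# missing, which is exactly what the AttributeError above reports; WSL exposes
# them like any other Linux environment.
print(hasattr(os, "getuid"), hasattr(os, "getgid"))
```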
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/docker.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import hashlib
5 import os
6
7 import pre_commit.constants as C
8 from pre_commit import five
9 from pre_commit.languages import helpers
10 from pre_commit.util import CalledProcessError
11 from pre_commit.util import clean_path_on_failure
12 from pre_commit.util import cmd_output
13
14
15 ENVIRONMENT_DIR = 'docker'
16 PRE_COMMIT_LABEL = 'PRE_COMMIT'
17 get_default_version = helpers.basic_get_default_version
18 healthy = helpers.basic_healthy
19
20
21 def md5(s): # pragma: windows no cover
22 return hashlib.md5(five.to_bytes(s)).hexdigest()
23
24
25 def docker_tag(prefix): # pragma: windows no cover
26 md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()
27 return 'pre-commit-{}'.format(md5sum)
28
29
30 def docker_is_running(): # pragma: windows no cover
31 try:
32 return cmd_output('docker', 'ps')[0] == 0
33 except CalledProcessError:
34 return False
35
36
37 def assert_docker_available(): # pragma: windows no cover
38 assert docker_is_running(), (
39 'Docker is either not running or not configured in this environment'
40 )
41
42
43 def build_docker_image(prefix, **kwargs): # pragma: windows no cover
44 pull = kwargs.pop('pull')
45 assert not kwargs, kwargs
46 cmd = (
47 'docker', 'build',
48 '--tag', docker_tag(prefix),
49 '--label', PRE_COMMIT_LABEL,
50 )
51 if pull:
52 cmd += ('--pull',)
53 # This must come last for old versions of docker. See #477
54 cmd += ('.',)
55 helpers.run_setup_cmd(prefix, cmd)
56
57
58 def install_environment(
59 prefix, version, additional_dependencies,
60 ): # pragma: windows no cover
61 helpers.assert_version_default('docker', version)
62 helpers.assert_no_additional_deps('docker', additional_dependencies)
63 assert_docker_available()
64
65 directory = prefix.path(
66 helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),
67 )
68
69 # Docker doesn't really have relevant disk environment, but pre-commit
70 # still needs to cleanup it's state files on failure
71 with clean_path_on_failure(directory):
72 build_docker_image(prefix, pull=True)
73 os.mkdir(directory)
74
75
76 def docker_cmd(): # pragma: windows no cover
77 return (
78 'docker', 'run',
79 '--rm',
80 '-u', '{}:{}'.format(os.getuid(), os.getgid()),
81 # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
82 # The `Z` option tells Docker to label the content with a private
83 # unshared label. Only the current container can use a private volume.
84 '-v', '{}:/src:rw,Z'.format(os.getcwd()),
85 '--workdir', '/src',
86 )
87
88
89 def run_hook(hook, file_args): # pragma: windows no cover
90 assert_docker_available()
91 # Rebuild the docker image in case it has gone missing, as many people do
92 # automated cleanup of docker images.
93 build_docker_image(hook.prefix, pull=False)
94
95 hook_cmd = helpers.to_cmd(hook)
96 entry_exe, cmd_rest = hook_cmd[0], hook_cmd[1:]
97
98 entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))
99 cmd = docker_cmd() + entry_tag + cmd_rest
100 return helpers.run_xargs(hook, cmd, file_args)
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -73,11 +73,18 @@
os.mkdir(directory)
+def get_docker_user(): # pragma: windows no cover
+ try:
+ return '{}:{}'.format(os.getuid(), os.getgid())
+ except AttributeError:
+ return '1000:1000'
+
+
def docker_cmd(): # pragma: windows no cover
return (
'docker', 'run',
'--rm',
- '-u', '{}:{}'.format(os.getuid(), os.getgid()),
+ '-u', get_docker_user(),
# https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
# The `Z` option tells Docker to label the content with a private
# unshared label. Only the current container can use a private volume.
|
{"golden_diff": "diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py\n--- a/pre_commit/languages/docker.py\n+++ b/pre_commit/languages/docker.py\n@@ -73,11 +73,18 @@\n os.mkdir(directory)\n \n \n+def get_docker_user(): # pragma: windows no cover\n+ try:\n+ return '{}:{}'.format(os.getuid(), os.getgid())\n+ except AttributeError:\n+ return '1000:1000'\n+\n+\n def docker_cmd(): # pragma: windows no cover\n return (\n 'docker', 'run',\n '--rm',\n- '-u', '{}:{}'.format(os.getuid(), os.getgid()),\n+ '-u', get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n", "issue": "Docker language fails on Windows - os.getuid()\n[`docker_cmd`](https://github.com/pre-commit/pre-commit/blob/0cf2638b3b6a2cfde89cb7013e61b2d4a3e12875/pre_commit/languages/docker.py#L80) in languages/docker.py uses `os.getuid()` and `os.getgid()` that don't exist in Python on Windows.\r\n\r\nA hook that uses Docker on Windows fails to execute with\r\n\r\n```\r\nAn unexpected error has occurred: AttributeError: module 'os' has no attribute 'getuid'\r\nTraceback (most recent call last):\r\n File \"C:\\Python36\\lib\\site-packages\\pre_commit\\error_handler.py\", line 46, in error_handler\r\n yield\r\n File \"C:\\Python36\\lib\\site-packages\\pre_commit\\main.py\", line 294, in main\r\n return run(args.config, store, args)\r\n File \"C:\\Python36\\lib\\site-packages\\pre_commit\\commands\\run.py\", line 293, in run\r\n return _run_hooks(config, hooks, args, environ)\r\n File \"C:\\Python36\\lib\\site-packages\\pre_commit\\commands\\run.py\", line 214, in _run_hooks\r\n retval |= _run_single_hook(classifier, hook, args, skips, cols)\r\n File \"C:\\Python36\\lib\\site-packages\\pre_commit\\commands\\run.py\", line 124, in _run_single_hook\r\n tuple(filenames) if hook.pass_filenames else (),\r\n File \"C:\\Python36\\lib\\site-packages\\pre_commit\\repository.py\", line 100, in run\r\n return lang.run_hook(self, file_args)\r\n File \"C:\\Python36\\lib\\site-packages\\pre_commit\\languages\\docker.py\", line 99, in run_hook\r\n cmd = docker_cmd() + entry_tag + cmd_rest\r\n File \"C:\\Python36\\lib\\site-packages\\pre_commit\\languages\\docker.py\", line 80, in docker_cmd\r\n '-u', '{}:{}'.format(os.getuid(), os.getgid()),\r\nAttributeError: module 'os' has no attribute 'getuid'\r\n```\r\n\r\nThe same hook works fine in Windows Subsystem for Linux.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport hashlib\nimport os\n\nimport pre_commit.constants as C\nfrom pre_commit import five\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\n\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef md5(s): # pragma: windows no cover\n return hashlib.md5(five.to_bytes(s)).hexdigest()\n\n\ndef docker_tag(prefix): # pragma: windows no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return 'pre-commit-{}'.format(md5sum)\n\n\ndef docker_is_running(): # pragma: windows no cover\n try:\n return cmd_output('docker', 'ps')[0] == 0\n except CalledProcessError:\n return False\n\n\ndef 
assert_docker_available(): # pragma: windows no cover\n assert docker_is_running(), (\n 'Docker is either not running or not configured in this environment'\n )\n\n\ndef build_docker_image(prefix, **kwargs): # pragma: windows no cover\n pull = kwargs.pop('pull')\n assert not kwargs, kwargs\n cmd = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix, version, additional_dependencies,\n): # pragma: windows no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n assert_docker_available()\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup it's state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef docker_cmd(): # pragma: windows no cover\n return (\n 'docker', 'run',\n '--rm',\n '-u', '{}:{}'.format(os.getuid(), os.getgid()),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n '-v', '{}:/src:rw,Z'.format(os.getcwd()),\n '--workdir', '/src',\n )\n\n\ndef run_hook(hook, file_args): # pragma: windows no cover\n assert_docker_available()\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n hook_cmd = helpers.to_cmd(hook)\n entry_exe, cmd_rest = hook_cmd[0], hook_cmd[1:]\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = docker_cmd() + entry_tag + cmd_rest\n return helpers.run_xargs(hook, cmd, file_args)\n", "path": "pre_commit/languages/docker.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport hashlib\nimport os\n\nimport pre_commit.constants as C\nfrom pre_commit import five\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\n\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef md5(s): # pragma: windows no cover\n return hashlib.md5(five.to_bytes(s)).hexdigest()\n\n\ndef docker_tag(prefix): # pragma: windows no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return 'pre-commit-{}'.format(md5sum)\n\n\ndef docker_is_running(): # pragma: windows no cover\n try:\n return cmd_output('docker', 'ps')[0] == 0\n except CalledProcessError:\n return False\n\n\ndef assert_docker_available(): # pragma: windows no cover\n assert docker_is_running(), (\n 'Docker is either not running or not configured in this environment'\n )\n\n\ndef build_docker_image(prefix, **kwargs): # pragma: windows no cover\n pull = kwargs.pop('pull')\n assert not kwargs, kwargs\n cmd = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. 
See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix, version, additional_dependencies,\n): # pragma: windows no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n assert_docker_available()\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup it's state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef get_docker_user(): # pragma: windows no cover\n try:\n return '{}:{}'.format(os.getuid(), os.getgid())\n except AttributeError:\n return '1000:1000'\n\n\ndef docker_cmd(): # pragma: windows no cover\n return (\n 'docker', 'run',\n '--rm',\n '-u', get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n '-v', '{}:/src:rw,Z'.format(os.getcwd()),\n '--workdir', '/src',\n )\n\n\ndef run_hook(hook, file_args): # pragma: windows no cover\n assert_docker_available()\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n hook_cmd = helpers.to_cmd(hook)\n entry_exe, cmd_rest = hook_cmd[0], hook_cmd[1:]\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = docker_cmd() + entry_tag + cmd_rest\n return helpers.run_xargs(hook, cmd, file_args)\n", "path": "pre_commit/languages/docker.py"}]}
| 1,720 | 219 |
gh_patches_debug_24309 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-1356 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WV: subcommittees treated as full committees
Hello,
So, with WV, the subcommittees (e.g., Finance subcommittees A, B, C) are not identified as a subcommittee.
I know WV is a [round C state](https://github.com/openstates/planning/issues/11), but I thought it was worth mentioning...if only for a "TODO later" note.
Best,
Alex
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/wv/committees.py`
Content:
```
1 import re
2
3 from billy.scrape.committees import CommitteeScraper, Committee
4
5 import lxml.html
6
7
8 class WVCommitteeScraper(CommitteeScraper):
9 jurisdiction = "wv"
10
11 def scrape(self, chamber, term):
12 getattr(self, 'scrape_' + chamber)()
13
14 def scrape_lower(self):
15 url = 'http://www.legis.state.wv.us/committees/house/main.cfm'
16 html = self.get(url).text
17 doc = lxml.html.fromstring(html)
18 doc.make_links_absolute(url)
19
20 xpath = '//a[contains(@href, "HouseCommittee")]'
21 for link in doc.xpath(xpath):
22 text = link.text_content().strip()
23 if text == '-':
24 continue
25 committee = self.scrape_lower_committee(link=link, name=text)
26 committee.add_source(url)
27 self.save_committee(committee)
28
29 url = 'http://www.legis.state.wv.us/committees/interims/interims.cfm'
30 html = self.get(url).text
31 doc = lxml.html.fromstring(html)
32 doc.make_links_absolute(url)
33 xpath = '//a[contains(@href, "committee.cfm")]'
34 for link in doc.xpath(xpath):
35 text = link.text_content().strip()
36 if text == '-':
37 continue
38 committee = self.scrape_interim_committee(link=link, name=text)
39 committee.add_source(url)
40 self.save_committee(committee)
41
42 def scrape_lower_committee(self, link, name):
43 url = re.sub(r'\s+', '', link.attrib['href'])
44 html = self.get(url).text
45 doc = lxml.html.fromstring(html)
46 doc.make_links_absolute(url)
47
48 comm = Committee('lower', name)
49 comm.add_source(url)
50
51 xpath = '//a[contains(@href, "?member=")]'
52 for link in doc.xpath(xpath):
53 name = link.text_content().strip()
54 name = re.sub(r'^Delegate\s+', '', name)
55 role = link.getnext().text or 'member'
56 comm.add_member(name, role.strip())
57
58 return comm
59
60 def scrape_interim_committee(self, link, name):
61 url = re.sub(r'\s+', '', link.attrib['href'])
62 html = self.get(url).text
63 doc = lxml.html.fromstring(html)
64 doc.make_links_absolute(url)
65
66 comm = Committee('joint', name)
67 comm.add_source(url)
68
69 xpath = '//a[contains(@href, "?member=")]'
70 for link in doc.xpath(xpath):
71 name = link.text_content().strip()
72 name = re.sub(r'^Delegate\s+', '', name)
73 name = re.sub(r'^Senator\s+', '', name)
74 role = link.getnext().text or 'member'
75 comm.add_member(name, role.strip())
76
77 return comm
78
79 def scrape_upper(self):
80 url = 'http://www.legis.state.wv.us/committees/senate/main.cfm'
81 html = self.get(url).text
82 doc = lxml.html.fromstring(html)
83 doc.make_links_absolute(url)
84
85 xpath = '//a[contains(@href, "SenateCommittee")]'
86 for link in doc.xpath(xpath):
87 text = link.text_content().strip()
88 if text == '-':
89 continue
90 committee = self.scrape_upper_committee(link=link, name=text)
91 committee.add_source(url)
92 self.save_committee(committee)
93
94 def scrape_upper_committee(self, link, name):
95 url = re.sub(r'\s+', '', link.attrib['href'])
96 html = self.get(url).text
97 doc = lxml.html.fromstring(html)
98 doc.make_links_absolute(url)
99
100 comm = Committee('upper', name)
101 comm.add_source(url)
102
103 xpath = '//a[contains(@href, "?member=")]'
104 for link in doc.xpath(xpath):
105 name = link.text_content().strip()
106 name = re.sub(r'^Delegate\s+', '', name)
107 role = link.getnext().text or 'member'
108 comm.add_member(name, role.strip())
109
110 return comm
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openstates/wv/committees.py b/openstates/wv/committees.py
--- a/openstates/wv/committees.py
+++ b/openstates/wv/committees.py
@@ -8,6 +8,14 @@
class WVCommitteeScraper(CommitteeScraper):
jurisdiction = "wv"
+ # Manually resolved links between subcommittees and parent committees.
+ subcommittee_parent_map = {
+ 'Post Audits Subcommittee': 'Government and Finance',
+ 'Parks, Recreation and Natural Resources Subcommittee': \
+ 'Government and Finance',
+ 'Tax Reform Subcommittee A': 'Joint Tax Reform',
+ }
+
def scrape(self, chamber, term):
getattr(self, 'scrape_' + chamber)()
@@ -63,7 +71,16 @@
doc = lxml.html.fromstring(html)
doc.make_links_absolute(url)
- comm = Committee('joint', name)
+ if 'Subcommittee' in name:
+ # Check whether the parent committee is manually defined first
+ # before attempting to automatically resolve it.
+ parent = WVCommitteeScraper.subcommittee_parent_map.get(name, None)
+ if parent is None:
+ parent = name.partition('Subcommittee')[0].strip()
+
+ comm = Committee('joint', parent, subcommittee=name)
+ else:
+ comm = Committee('joint', name)
comm.add_source(url)
xpath = '//a[contains(@href, "?member=")]'
|
{"golden_diff": "diff --git a/openstates/wv/committees.py b/openstates/wv/committees.py\n--- a/openstates/wv/committees.py\n+++ b/openstates/wv/committees.py\n@@ -8,6 +8,14 @@\n class WVCommitteeScraper(CommitteeScraper):\n jurisdiction = \"wv\"\n \n+ # Manually resolved links between subcommittees and parent committees.\n+ subcommittee_parent_map = {\n+ 'Post Audits Subcommittee': 'Government and Finance',\n+ 'Parks, Recreation and Natural Resources Subcommittee': \\\n+ 'Government and Finance',\n+ 'Tax Reform Subcommittee A': 'Joint Tax Reform',\n+ }\n+\n def scrape(self, chamber, term):\n getattr(self, 'scrape_' + chamber)()\n \n@@ -63,7 +71,16 @@\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n \n- comm = Committee('joint', name)\n+ if 'Subcommittee' in name:\n+ # Check whether the parent committee is manually defined first\n+ # before attempting to automatically resolve it.\n+ parent = WVCommitteeScraper.subcommittee_parent_map.get(name, None)\n+ if parent is None:\n+ parent = name.partition('Subcommittee')[0].strip()\n+\n+ comm = Committee('joint', parent, subcommittee=name)\n+ else:\n+ comm = Committee('joint', name)\n comm.add_source(url)\n \n xpath = '//a[contains(@href, \"?member=\")]'\n", "issue": "WV: subcommittees treated as full committees\nHello,\r\n\r\nSo, with WV, the subcommittees (e.g., Finance subcommittees A, B, C) are not identified as a subcommittee.\r\n\r\nI know WV is a [round C state](https://github.com/openstates/planning/issues/11), but I thought it was worth mentioning...if only for a \"TODO later\" note.\r\n\r\nBest,\r\nAlex\n", "before_files": [{"content": "import re\n\nfrom billy.scrape.committees import CommitteeScraper, Committee\n\nimport lxml.html\n\n\nclass WVCommitteeScraper(CommitteeScraper):\n jurisdiction = \"wv\"\n\n def scrape(self, chamber, term):\n getattr(self, 'scrape_' + chamber)()\n\n def scrape_lower(self):\n url = 'http://www.legis.state.wv.us/committees/house/main.cfm'\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n xpath = '//a[contains(@href, \"HouseCommittee\")]'\n for link in doc.xpath(xpath):\n text = link.text_content().strip()\n if text == '-':\n continue\n committee = self.scrape_lower_committee(link=link, name=text)\n committee.add_source(url)\n self.save_committee(committee)\n\n url = 'http://www.legis.state.wv.us/committees/interims/interims.cfm'\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n xpath = '//a[contains(@href, \"committee.cfm\")]'\n for link in doc.xpath(xpath):\n text = link.text_content().strip()\n if text == '-':\n continue\n committee = self.scrape_interim_committee(link=link, name=text)\n committee.add_source(url)\n self.save_committee(committee)\n\n def scrape_lower_committee(self, link, name):\n url = re.sub(r'\\s+', '', link.attrib['href'])\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n comm = Committee('lower', name)\n comm.add_source(url)\n\n xpath = '//a[contains(@href, \"?member=\")]'\n for link in doc.xpath(xpath):\n name = link.text_content().strip()\n name = re.sub(r'^Delegate\\s+', '', name)\n role = link.getnext().text or 'member'\n comm.add_member(name, role.strip())\n\n return comm\n\n def scrape_interim_committee(self, link, name):\n url = re.sub(r'\\s+', '', link.attrib['href'])\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n comm = Committee('joint', name)\n 
comm.add_source(url)\n\n xpath = '//a[contains(@href, \"?member=\")]'\n for link in doc.xpath(xpath):\n name = link.text_content().strip()\n name = re.sub(r'^Delegate\\s+', '', name)\n name = re.sub(r'^Senator\\s+', '', name)\n role = link.getnext().text or 'member'\n comm.add_member(name, role.strip())\n\n return comm\n\n def scrape_upper(self):\n url = 'http://www.legis.state.wv.us/committees/senate/main.cfm'\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n xpath = '//a[contains(@href, \"SenateCommittee\")]'\n for link in doc.xpath(xpath):\n text = link.text_content().strip()\n if text == '-':\n continue\n committee = self.scrape_upper_committee(link=link, name=text)\n committee.add_source(url)\n self.save_committee(committee)\n\n def scrape_upper_committee(self, link, name):\n url = re.sub(r'\\s+', '', link.attrib['href'])\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n comm = Committee('upper', name)\n comm.add_source(url)\n\n xpath = '//a[contains(@href, \"?member=\")]'\n for link in doc.xpath(xpath):\n name = link.text_content().strip()\n name = re.sub(r'^Delegate\\s+', '', name)\n role = link.getnext().text or 'member'\n comm.add_member(name, role.strip())\n\n return comm\n", "path": "openstates/wv/committees.py"}], "after_files": [{"content": "import re\n\nfrom billy.scrape.committees import CommitteeScraper, Committee\n\nimport lxml.html\n\n\nclass WVCommitteeScraper(CommitteeScraper):\n jurisdiction = \"wv\"\n\n # Manually resolved links between subcommittees and parent committees.\n subcommittee_parent_map = {\n 'Post Audits Subcommittee': 'Government and Finance',\n 'Parks, Recreation and Natural Resources Subcommittee': \\\n 'Government and Finance',\n 'Tax Reform Subcommittee A': 'Joint Tax Reform',\n }\n\n def scrape(self, chamber, term):\n getattr(self, 'scrape_' + chamber)()\n\n def scrape_lower(self):\n url = 'http://www.legis.state.wv.us/committees/house/main.cfm'\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n xpath = '//a[contains(@href, \"HouseCommittee\")]'\n for link in doc.xpath(xpath):\n text = link.text_content().strip()\n if text == '-':\n continue\n committee = self.scrape_lower_committee(link=link, name=text)\n committee.add_source(url)\n self.save_committee(committee)\n\n url = 'http://www.legis.state.wv.us/committees/interims/interims.cfm'\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n xpath = '//a[contains(@href, \"committee.cfm\")]'\n for link in doc.xpath(xpath):\n text = link.text_content().strip()\n if text == '-':\n continue\n committee = self.scrape_interim_committee(link=link, name=text)\n committee.add_source(url)\n self.save_committee(committee)\n\n def scrape_lower_committee(self, link, name):\n url = re.sub(r'\\s+', '', link.attrib['href'])\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n comm = Committee('lower', name)\n comm.add_source(url)\n\n xpath = '//a[contains(@href, \"?member=\")]'\n for link in doc.xpath(xpath):\n name = link.text_content().strip()\n name = re.sub(r'^Delegate\\s+', '', name)\n role = link.getnext().text or 'member'\n comm.add_member(name, role.strip())\n\n return comm\n\n def scrape_interim_committee(self, link, name):\n url = re.sub(r'\\s+', '', link.attrib['href'])\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n if 
'Subcommittee' in name:\n # Check whether the parent committee is manually defined first\n # before attempting to automatically resolve it.\n parent = WVCommitteeScraper.subcommittee_parent_map.get(name, None)\n if parent is None:\n parent = name.partition('Subcommittee')[0].strip()\n\n comm = Committee('joint', parent, subcommittee=name)\n else:\n comm = Committee('joint', name)\n comm.add_source(url)\n\n xpath = '//a[contains(@href, \"?member=\")]'\n for link in doc.xpath(xpath):\n name = link.text_content().strip()\n name = re.sub(r'^Delegate\\s+', '', name)\n name = re.sub(r'^Senator\\s+', '', name)\n role = link.getnext().text or 'member'\n comm.add_member(name, role.strip())\n\n return comm\n\n def scrape_upper(self):\n url = 'http://www.legis.state.wv.us/committees/senate/main.cfm'\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n xpath = '//a[contains(@href, \"SenateCommittee\")]'\n for link in doc.xpath(xpath):\n text = link.text_content().strip()\n if text == '-':\n continue\n committee = self.scrape_upper_committee(link=link, name=text)\n committee.add_source(url)\n self.save_committee(committee)\n\n def scrape_upper_committee(self, link, name):\n url = re.sub(r'\\s+', '', link.attrib['href'])\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n comm = Committee('upper', name)\n comm.add_source(url)\n\n xpath = '//a[contains(@href, \"?member=\")]'\n for link in doc.xpath(xpath):\n name = link.text_content().strip()\n name = re.sub(r'^Delegate\\s+', '', name)\n role = link.getnext().text or 'member'\n comm.add_member(name, role.strip())\n\n return comm\n", "path": "openstates/wv/committees.py"}]}
| 1,454 | 332 |
gh_patches_debug_9551 | rasdani/github-patches | git_diff | airctic__icevision-539 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
EfficientDet images_size has to be divisible by 128
## 📓 Documentation Update
**What part of documentation was unclear or wrong?**
It has to be clear that the image size for the efficientdet model has to be divisible by 128
**Describe the solution you'd like**
Add this information to the `efficientdet/model.py` docstring and also make it explicit in the tutorials that use efficientdet.
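A hedged example of how a tutorial could surface the constraint at the call site (import path and model name assumed):

```python
from icevision.models import efficientdet

img_size = 512  # 4 * 128, so it satisfies the divisibility requirement
assert img_size % 128 == 0, "EfficientDet image size must be divisible by 128"
model = efficientdet.model("tf_efficientdet_d0", num_classes=3, img_size=img_size)
```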
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/models/efficientdet/model.py`
Content:
```
1 __all__ = ["model"]
2
3 from icevision.imports import *
4 from icevision.utils import *
5 from effdet import get_efficientdet_config, EfficientDet, DetBenchTrain, unwrap_bench
6 from effdet import create_model_from_config
7 from effdet.efficientdet import HeadNet
8
9
10 def model(
11 model_name: str, num_classes: int, img_size: int, pretrained: bool = True
12 ) -> nn.Module:
13 """Creates the efficientdet model specified by `model_name`.
14
15 The model implementation is by Ross Wightman, original repo
16 [here](https://github.com/rwightman/efficientdet-pytorch).
17
18 # Arguments
19 model_name: Specifies the model to create. For pretrained models, check
20 [this](https://github.com/rwightman/efficientdet-pytorch#models) table.
21 num_classes: Number of classes of your dataset (including background).
22 img_size: Image size that will be fed to the model. Must be squared and
23 divisible by 64.
24 pretrained: If True, use a pretrained backbone (on COCO).
25
26 # Returns
27 A PyTorch model.
28 """
29 config = get_efficientdet_config(model_name=model_name)
30 config.image_size = (img_size, img_size) if isinstance(img_size, int) else img_size
31
32 model_bench = create_model_from_config(
33 config,
34 bench_task="train",
35 bench_labeler=True,
36 num_classes=num_classes,
37 pretrained=pretrained,
38 )
39
40 # TODO: Break down param groups for backbone
41 def param_groups_fn(model: nn.Module) -> List[List[nn.Parameter]]:
42 unwrapped = unwrap_bench(model)
43
44 layers = [
45 unwrapped.backbone,
46 unwrapped.fpn,
47 nn.Sequential(unwrapped.class_net, unwrapped.box_net),
48 ]
49 param_groups = [list(layer.parameters()) for layer in layers]
50 check_all_model_params_in_groups2(model, param_groups)
51
52 return param_groups
53
54 model_bench.param_groups = MethodType(param_groups_fn, model_bench)
55
56 return model_bench
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/icevision/models/efficientdet/model.py b/icevision/models/efficientdet/model.py
--- a/icevision/models/efficientdet/model.py
+++ b/icevision/models/efficientdet/model.py
@@ -20,7 +20,7 @@
[this](https://github.com/rwightman/efficientdet-pytorch#models) table.
num_classes: Number of classes of your dataset (including background).
img_size: Image size that will be fed to the model. Must be squared and
- divisible by 64.
+ divisible by 128.
pretrained: If True, use a pretrained backbone (on COCO).
# Returns
|
{"golden_diff": "diff --git a/icevision/models/efficientdet/model.py b/icevision/models/efficientdet/model.py\n--- a/icevision/models/efficientdet/model.py\n+++ b/icevision/models/efficientdet/model.py\n@@ -20,7 +20,7 @@\n [this](https://github.com/rwightman/efficientdet-pytorch#models) table.\n num_classes: Number of classes of your dataset (including background).\n img_size: Image size that will be fed to the model. Must be squared and\n- divisible by 64.\n+ divisible by 128.\n pretrained: If True, use a pretrained backbone (on COCO).\n \n # Returns\n", "issue": "EfficientDet images_size has to be divisible by 128\n## \ud83d\udcd3 Documentation Update\r\n**What part of documentation was unclear or wrong?**\r\nIt has to be clear that the image size for the efficientdet model has to be divisible by 128\r\n\r\n**Describe the solution you'd like**\r\nAdd this information to the `efficientdet/model.py` docstring and also make it explicit in the tutorials that use efficientdet.\r\n\r\n\r\n\n", "before_files": [{"content": "__all__ = [\"model\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom effdet import get_efficientdet_config, EfficientDet, DetBenchTrain, unwrap_bench\nfrom effdet import create_model_from_config\nfrom effdet.efficientdet import HeadNet\n\n\ndef model(\n model_name: str, num_classes: int, img_size: int, pretrained: bool = True\n) -> nn.Module:\n \"\"\"Creates the efficientdet model specified by `model_name`.\n\n The model implementation is by Ross Wightman, original repo\n [here](https://github.com/rwightman/efficientdet-pytorch).\n\n # Arguments\n model_name: Specifies the model to create. For pretrained models, check\n [this](https://github.com/rwightman/efficientdet-pytorch#models) table.\n num_classes: Number of classes of your dataset (including background).\n img_size: Image size that will be fed to the model. Must be squared and\n divisible by 64.\n pretrained: If True, use a pretrained backbone (on COCO).\n\n # Returns\n A PyTorch model.\n \"\"\"\n config = get_efficientdet_config(model_name=model_name)\n config.image_size = (img_size, img_size) if isinstance(img_size, int) else img_size\n\n model_bench = create_model_from_config(\n config,\n bench_task=\"train\",\n bench_labeler=True,\n num_classes=num_classes,\n pretrained=pretrained,\n )\n\n # TODO: Break down param groups for backbone\n def param_groups_fn(model: nn.Module) -> List[List[nn.Parameter]]:\n unwrapped = unwrap_bench(model)\n\n layers = [\n unwrapped.backbone,\n unwrapped.fpn,\n nn.Sequential(unwrapped.class_net, unwrapped.box_net),\n ]\n param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, param_groups)\n\n return param_groups\n\n model_bench.param_groups = MethodType(param_groups_fn, model_bench)\n\n return model_bench\n", "path": "icevision/models/efficientdet/model.py"}], "after_files": [{"content": "__all__ = [\"model\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom effdet import get_efficientdet_config, EfficientDet, DetBenchTrain, unwrap_bench\nfrom effdet import create_model_from_config\nfrom effdet.efficientdet import HeadNet\n\n\ndef model(\n model_name: str, num_classes: int, img_size: int, pretrained: bool = True\n) -> nn.Module:\n \"\"\"Creates the efficientdet model specified by `model_name`.\n\n The model implementation is by Ross Wightman, original repo\n [here](https://github.com/rwightman/efficientdet-pytorch).\n\n # Arguments\n model_name: Specifies the model to create. 
For pretrained models, check\n [this](https://github.com/rwightman/efficientdet-pytorch#models) table.\n num_classes: Number of classes of your dataset (including background).\n img_size: Image size that will be fed to the model. Must be squared and\n divisible by 128.\n pretrained: If True, use a pretrained backbone (on COCO).\n\n # Returns\n A PyTorch model.\n \"\"\"\n config = get_efficientdet_config(model_name=model_name)\n config.image_size = (img_size, img_size) if isinstance(img_size, int) else img_size\n\n model_bench = create_model_from_config(\n config,\n bench_task=\"train\",\n bench_labeler=True,\n num_classes=num_classes,\n pretrained=pretrained,\n )\n\n # TODO: Break down param groups for backbone\n def param_groups_fn(model: nn.Module) -> List[List[nn.Parameter]]:\n unwrapped = unwrap_bench(model)\n\n layers = [\n unwrapped.backbone,\n unwrapped.fpn,\n nn.Sequential(unwrapped.class_net, unwrapped.box_net),\n ]\n param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, param_groups)\n\n return param_groups\n\n model_bench.param_groups = MethodType(param_groups_fn, model_bench)\n\n return model_bench\n", "path": "icevision/models/efficientdet/model.py"}]}
| 919 | 153 |
gh_patches_debug_17672
|
rasdani/github-patches
|
git_diff
|
facebookresearch__hydra-234
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Better error message if config structure passed to instantiate is invalid.
# 🐛 Bug
In [common patterns#creating objects](https://cli.dev/docs/patterns/objects), if the `params` are mistakenly valued with a single string, the [`instantiate`](https://github.com/facebookresearch/hydra/blob/cb7904c666b34a53e09194993a0dd7b10fd97c99/hydra/utils.py#L44-L53) function exists with the error mentioned in "Stack trace/error messages":
This is fine to some extent, which is do you (Hydra team) aim at providing this kind of output messages?
<!-- A clear and concise description of what the bug is. -->
## To reproduce
** Minimal Code/Config snippet to reproduce **
Set `params` to `bug_report` as in:
```YAML
class: ...
params: bug_report
```
** Stack trace/error message **
```
> AttributeError: 'str' object has no attribute 'merge_with'
[2019-10-16 02:25:21,592][hydra.utils][ERROR] - Error instantiating [[OBJECT_NAME]] : 'str' object has no attribute 'merge_with'
```
**Optional video**
If applicable, record a short [video](https://asciinema.org) with to help explain your problem.
You can also
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
I expect to have some sort of an error message explaining that `params` should be a collection of values where [[TYPE]] was provided.
## System information
- <!-- Hydra Version --> Hydra Version: 0.9.0
- <!-- Operating system --> Operating System: Arch Linux
## Additional context
If you confirm that this should be fixed, I've already implemented the solution (IMO).
Better error message if config structure passed to instantiate is invalid.
# 🐛 Bug
In [common patterns#creating objects](https://cli.dev/docs/patterns/objects), if the `params` are mistakenly valued with a single string, the [`instantiate`](https://github.com/facebookresearch/hydra/blob/cb7904c666b34a53e09194993a0dd7b10fd97c99/hydra/utils.py#L44-L53) function exists with the error mentioned in "Stack trace/error messages":
This is fine to some extent, which is do you (Hydra team) aim at providing this kind of output messages?
<!-- A clear and concise description of what the bug is. -->
## To reproduce
** Minimal Code/Config snippet to reproduce **
Set `params` to `bug_report` as in:
```YAML
class: ...
params: bug_report
```
** Stack trace/error message **
```
> AttributeError: 'str' object has no attribute 'merge_with'
[2019-10-16 02:25:21,592][hydra.utils][ERROR] - Error instantiating [[OBJECT_NAME]] : 'str' object has no attribute 'merge_with'
```
**Optional video**
If applicable, record a short [video](https://asciinema.org) with to help explain your problem.
You can also
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
I expect to have some sort of an error message explaining that `params` should be a collection of values where [[TYPE]] was provided.
## System information
- <!-- Hydra Version --> Hydra Version: 0.9.0
- <!-- Operating system --> Operating System: Arch Linux
## Additional context
If you confirm that this should be fixed, I've already implemented the solution (IMO).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/utils.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import logging.config
3
4 from omegaconf import OmegaConf
5
6 # pylint: disable=C0103
7 log = logging.getLogger(__name__)
8
9
10 def get_method(path):
11 return get_class(path)
12
13
14 def get_class(path):
15 try:
16 from importlib import import_module
17
18 module_path, _, class_name = path.rpartition(".")
19 mod = import_module(module_path)
20 try:
21 klass = getattr(mod, class_name)
22 except AttributeError:
23 raise ImportError(
24 "Class {} is not in module {}".format(class_name, module_path)
25 )
26 return klass
27 except ValueError as e:
28 log.error("Error initializing class " + path)
29 raise e
30
31
32 def get_static_method(full_method_name):
33 try:
34 spl = full_method_name.split(".")
35 method_name = spl.pop()
36 class_name = ".".join(spl)
37 clz = get_class(class_name)
38 return getattr(clz, method_name)
39 except Exception as e:
40 log.error("Error getting static method {} : {}".format(full_method_name, e))
41 raise e
42
43
44 def instantiate(config, *args, **kwargs):
45 assert config is not None, "Input config is None"
46 try:
47 clazz = get_class(config["class"])
48 params = config.params if "params" in config else OmegaConf.create()
49 params.merge_with(OmegaConf.create(kwargs))
50 return clazz(*args, **params)
51 except Exception as e:
52 log.error("Error instantiating {} : {}".format(config["class"], e))
53 raise e
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hydra/utils.py b/hydra/utils.py
--- a/hydra/utils.py
+++ b/hydra/utils.py
@@ -1,7 +1,7 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import logging.config
-from omegaconf import OmegaConf
+from omegaconf import OmegaConf, DictConfig
# pylint: disable=C0103
log = logging.getLogger(__name__)
@@ -46,6 +46,11 @@
try:
clazz = get_class(config["class"])
params = config.params if "params" in config else OmegaConf.create()
+ assert isinstance(
+ params, DictConfig
+ ), "Input config params are expected to be a mapping, found {}".format(
+ type(config.params)
+ )
params.merge_with(OmegaConf.create(kwargs))
return clazz(*args, **params)
except Exception as e:
|
{"golden_diff": "diff --git a/hydra/utils.py b/hydra/utils.py\n--- a/hydra/utils.py\n+++ b/hydra/utils.py\n@@ -1,7 +1,7 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n import logging.config\n \n-from omegaconf import OmegaConf\n+from omegaconf import OmegaConf, DictConfig\n \n # pylint: disable=C0103\n log = logging.getLogger(__name__)\n@@ -46,6 +46,11 @@\n try:\n clazz = get_class(config[\"class\"])\n params = config.params if \"params\" in config else OmegaConf.create()\n+ assert isinstance(\n+ params, DictConfig\n+ ), \"Input config params are expected to be a mapping, found {}\".format(\n+ type(config.params)\n+ )\n params.merge_with(OmegaConf.create(kwargs))\n return clazz(*args, **params)\n except Exception as e:\n", "issue": "Better error message if config structure passed to instantiate is invalid.\n# \ud83d\udc1b Bug\r\n\r\nIn [common patterns#creating objects](https://cli.dev/docs/patterns/objects), if the `params` are mistakenly valued with a single string, the [`instantiate`](https://github.com/facebookresearch/hydra/blob/cb7904c666b34a53e09194993a0dd7b10fd97c99/hydra/utils.py#L44-L53) function exists with the error mentioned in \"Stack trace/error messages\":\r\n\r\nThis is fine to some extent, which is do you (Hydra team) aim at providing this kind of output messages? \r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To reproduce\r\n\r\n** Minimal Code/Config snippet to reproduce **\r\n\r\nSet `params` to `bug_report` as in:\r\n\r\n```YAML\r\nclass: ...\r\nparams: bug_report\r\n```\r\n\r\n** Stack trace/error message **\r\n```\r\n> AttributeError: 'str' object has no attribute 'merge_with'\r\n[2019-10-16 02:25:21,592][hydra.utils][ERROR] - Error instantiating [[OBJECT_NAME]] : 'str' object has no attribute 'merge_with'\r\n```\r\n**Optional video**\r\nIf applicable, record a short [video](https://asciinema.org) with to help explain your problem.\r\nYou can also \r\n\r\n## Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nI expect to have some sort of an error message explaining that `params` should be a collection of values where [[TYPE]] was provided. \r\n \r\n## System information\r\n- <!-- Hydra Version --> Hydra Version: 0.9.0\r\n\r\n- <!-- Operating system --> Operating System: Arch Linux\r\n\r\n## Additional context\r\nIf you confirm that this should be fixed, I've already implemented the solution (IMO). \r\n\nBetter error message if config structure passed to instantiate is invalid.\n# \ud83d\udc1b Bug\r\n\r\nIn [common patterns#creating objects](https://cli.dev/docs/patterns/objects), if the `params` are mistakenly valued with a single string, the [`instantiate`](https://github.com/facebookresearch/hydra/blob/cb7904c666b34a53e09194993a0dd7b10fd97c99/hydra/utils.py#L44-L53) function exists with the error mentioned in \"Stack trace/error messages\":\r\n\r\nThis is fine to some extent, which is do you (Hydra team) aim at providing this kind of output messages? \r\n\r\n<!-- A clear and concise description of what the bug is. 
-->\r\n\r\n## To reproduce\r\n\r\n** Minimal Code/Config snippet to reproduce **\r\n\r\nSet `params` to `bug_report` as in:\r\n\r\n```YAML\r\nclass: ...\r\nparams: bug_report\r\n```\r\n\r\n** Stack trace/error message **\r\n```\r\n> AttributeError: 'str' object has no attribute 'merge_with'\r\n[2019-10-16 02:25:21,592][hydra.utils][ERROR] - Error instantiating [[OBJECT_NAME]] : 'str' object has no attribute 'merge_with'\r\n```\r\n**Optional video**\r\nIf applicable, record a short [video](https://asciinema.org) with to help explain your problem.\r\nYou can also \r\n\r\n## Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nI expect to have some sort of an error message explaining that `params` should be a collection of values where [[TYPE]] was provided. \r\n \r\n## System information\r\n- <!-- Hydra Version --> Hydra Version: 0.9.0\r\n\r\n- <!-- Operating system --> Operating System: Arch Linux\r\n\r\n## Additional context\r\nIf you confirm that this should be fixed, I've already implemented the solution (IMO). \r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport logging.config\n\nfrom omegaconf import OmegaConf\n\n# pylint: disable=C0103\nlog = logging.getLogger(__name__)\n\n\ndef get_method(path):\n return get_class(path)\n\n\ndef get_class(path):\n try:\n from importlib import import_module\n\n module_path, _, class_name = path.rpartition(\".\")\n mod = import_module(module_path)\n try:\n klass = getattr(mod, class_name)\n except AttributeError:\n raise ImportError(\n \"Class {} is not in module {}\".format(class_name, module_path)\n )\n return klass\n except ValueError as e:\n log.error(\"Error initializing class \" + path)\n raise e\n\n\ndef get_static_method(full_method_name):\n try:\n spl = full_method_name.split(\".\")\n method_name = spl.pop()\n class_name = \".\".join(spl)\n clz = get_class(class_name)\n return getattr(clz, method_name)\n except Exception as e:\n log.error(\"Error getting static method {} : {}\".format(full_method_name, e))\n raise e\n\n\ndef instantiate(config, *args, **kwargs):\n assert config is not None, \"Input config is None\"\n try:\n clazz = get_class(config[\"class\"])\n params = config.params if \"params\" in config else OmegaConf.create()\n params.merge_with(OmegaConf.create(kwargs))\n return clazz(*args, **params)\n except Exception as e:\n log.error(\"Error instantiating {} : {}\".format(config[\"class\"], e))\n raise e\n", "path": "hydra/utils.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport logging.config\n\nfrom omegaconf import OmegaConf, DictConfig\n\n# pylint: disable=C0103\nlog = logging.getLogger(__name__)\n\n\ndef get_method(path):\n return get_class(path)\n\n\ndef get_class(path):\n try:\n from importlib import import_module\n\n module_path, _, class_name = path.rpartition(\".\")\n mod = import_module(module_path)\n try:\n klass = getattr(mod, class_name)\n except AttributeError:\n raise ImportError(\n \"Class {} is not in module {}\".format(class_name, module_path)\n )\n return klass\n except ValueError as e:\n log.error(\"Error initializing class \" + path)\n raise e\n\n\ndef get_static_method(full_method_name):\n try:\n spl = full_method_name.split(\".\")\n method_name = spl.pop()\n class_name = \".\".join(spl)\n clz = get_class(class_name)\n return getattr(clz, method_name)\n except Exception as e:\n log.error(\"Error getting static method {} : {}\".format(full_method_name, e))\n raise e\n\n\ndef instantiate(config, *args, **kwargs):\n assert config is not None, \"Input config is None\"\n try:\n clazz = get_class(config[\"class\"])\n params = config.params if \"params\" in config else OmegaConf.create()\n assert isinstance(\n params, DictConfig\n ), \"Input config params are expected to be a mapping, found {}\".format(\n type(config.params)\n )\n params.merge_with(OmegaConf.create(kwargs))\n return clazz(*args, **params)\n except Exception as e:\n log.error(\"Error instantiating {} : {}\".format(config[\"class\"], e))\n raise e\n", "path": "hydra/utils.py"}]}
| 1,539 | 203 |
gh_patches_debug_29306
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-2827
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CustomJsonEncoder drops currency information from Money instance
Usually, we are using `settings.DEFAULT_CURRENCY` all over the place, however, there can be a bug, where `Money` is saved into `JSONField`, and `settings.DEFAULT_CURRENCY` changed meanwhile.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/core/utils/json_serializer.py`
Content:
```
1 from django.core.serializers.json import (
2 Deserializer as JsonDeserializer, DjangoJSONEncoder,
3 Serializer as JsonSerializer)
4 from prices import Money
5
6
7 class Serializer(JsonSerializer):
8 def _init_options(self):
9 super()._init_options()
10 self.json_kwargs['cls'] = CustomJsonEncoder
11
12
13 class CustomJsonEncoder(DjangoJSONEncoder):
14 def default(self, obj):
15 if isinstance(obj, Money):
16 return obj.amount
17 return super().default(obj)
18
19
20 Deserializer = JsonDeserializer
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/core/utils/json_serializer.py b/saleor/core/utils/json_serializer.py
--- a/saleor/core/utils/json_serializer.py
+++ b/saleor/core/utils/json_serializer.py
@@ -1,8 +1,12 @@
+import json
+
+from django.core.serializers.base import DeserializationError
from django.core.serializers.json import (
- Deserializer as JsonDeserializer, DjangoJSONEncoder,
- Serializer as JsonSerializer)
+ DjangoJSONEncoder, PythonDeserializer, Serializer as JsonSerializer)
from prices import Money
+MONEY_TYPE = 'Money'
+
class Serializer(JsonSerializer):
def _init_options(self):
@@ -13,8 +17,32 @@
class CustomJsonEncoder(DjangoJSONEncoder):
def default(self, obj):
if isinstance(obj, Money):
- return obj.amount
+ return {
+ '_type': MONEY_TYPE, 'amount': obj.amount,
+ 'currency': obj.currency}
return super().default(obj)
-Deserializer = JsonDeserializer
+def object_hook(obj):
+ if '_type' in obj and obj['_type'] == MONEY_TYPE:
+ return Money(obj['amount'], obj['currency'])
+ return obj
+
+
+def Deserializer(stream_or_string, **options):
+ """Deserialize a stream or string of JSON data. This is a slightly modified
+ copy of Django implementation with additional argument <object_hook> in
+ json.loads"""
+ if not isinstance(stream_or_string, (bytes, str)):
+ stream_or_string = stream_or_string.read()
+ if isinstance(stream_or_string, bytes):
+ stream_or_string = stream_or_string.decode()
+ try:
+ objects = json.loads(stream_or_string, object_hook=object_hook)
+ yield from PythonDeserializer(objects, **options)
+ except Exception as exc:
+ # ugly construction to overcome pylint's warning
+ # "The except handler raises immediately"
+ if isinstance(exc, (GeneratorExit, DeserializationError)):
+ raise
+ raise DeserializationError() from exc
|
{"golden_diff": "diff --git a/saleor/core/utils/json_serializer.py b/saleor/core/utils/json_serializer.py\n--- a/saleor/core/utils/json_serializer.py\n+++ b/saleor/core/utils/json_serializer.py\n@@ -1,8 +1,12 @@\n+import json\n+\n+from django.core.serializers.base import DeserializationError\n from django.core.serializers.json import (\n- Deserializer as JsonDeserializer, DjangoJSONEncoder,\n- Serializer as JsonSerializer)\n+ DjangoJSONEncoder, PythonDeserializer, Serializer as JsonSerializer)\n from prices import Money\n \n+MONEY_TYPE = 'Money'\n+\n \n class Serializer(JsonSerializer):\n def _init_options(self):\n@@ -13,8 +17,32 @@\n class CustomJsonEncoder(DjangoJSONEncoder):\n def default(self, obj):\n if isinstance(obj, Money):\n- return obj.amount\n+ return {\n+ '_type': MONEY_TYPE, 'amount': obj.amount,\n+ 'currency': obj.currency}\n return super().default(obj)\n \n \n-Deserializer = JsonDeserializer\n+def object_hook(obj):\n+ if '_type' in obj and obj['_type'] == MONEY_TYPE:\n+ return Money(obj['amount'], obj['currency'])\n+ return obj\n+\n+\n+def Deserializer(stream_or_string, **options):\n+ \"\"\"Deserialize a stream or string of JSON data. This is a slightly modified\n+ copy of Django implementation with additional argument <object_hook> in\n+ json.loads\"\"\"\n+ if not isinstance(stream_or_string, (bytes, str)):\n+ stream_or_string = stream_or_string.read()\n+ if isinstance(stream_or_string, bytes):\n+ stream_or_string = stream_or_string.decode()\n+ try:\n+ objects = json.loads(stream_or_string, object_hook=object_hook)\n+ yield from PythonDeserializer(objects, **options)\n+ except Exception as exc:\n+ # ugly construction to overcome pylint's warning\n+ # \"The except handler raises immediately\"\n+ if isinstance(exc, (GeneratorExit, DeserializationError)):\n+ raise\n+ raise DeserializationError() from exc\n", "issue": "CustomJsonEncoder drops currency information from Money instance\nUsually, we are using `settings.DEFAULT_CURRENCY` all over the place, however, there can be a bug, where `Money` is saved into `JSONField`, and `settings.DEFAULT_CURRENCY` changed meanwhile.\r\n\r\n\n", "before_files": [{"content": "from django.core.serializers.json import (\n Deserializer as JsonDeserializer, DjangoJSONEncoder,\n Serializer as JsonSerializer)\nfrom prices import Money\n\n\nclass Serializer(JsonSerializer):\n def _init_options(self):\n super()._init_options()\n self.json_kwargs['cls'] = CustomJsonEncoder\n\n\nclass CustomJsonEncoder(DjangoJSONEncoder):\n def default(self, obj):\n if isinstance(obj, Money):\n return obj.amount\n return super().default(obj)\n\n\nDeserializer = JsonDeserializer\n", "path": "saleor/core/utils/json_serializer.py"}], "after_files": [{"content": "import json\n\nfrom django.core.serializers.base import DeserializationError\nfrom django.core.serializers.json import (\n DjangoJSONEncoder, PythonDeserializer, Serializer as JsonSerializer)\nfrom prices import Money\n\nMONEY_TYPE = 'Money'\n\n\nclass Serializer(JsonSerializer):\n def _init_options(self):\n super()._init_options()\n self.json_kwargs['cls'] = CustomJsonEncoder\n\n\nclass CustomJsonEncoder(DjangoJSONEncoder):\n def default(self, obj):\n if isinstance(obj, Money):\n return {\n '_type': MONEY_TYPE, 'amount': obj.amount,\n 'currency': obj.currency}\n return super().default(obj)\n\n\ndef object_hook(obj):\n if '_type' in obj and obj['_type'] == MONEY_TYPE:\n return Money(obj['amount'], obj['currency'])\n return obj\n\n\ndef Deserializer(stream_or_string, **options):\n \"\"\"Deserialize a stream or 
string of JSON data. This is a slightly modified\n copy of Django implementation with additional argument <object_hook> in\n json.loads\"\"\"\n if not isinstance(stream_or_string, (bytes, str)):\n stream_or_string = stream_or_string.read()\n if isinstance(stream_or_string, bytes):\n stream_or_string = stream_or_string.decode()\n try:\n objects = json.loads(stream_or_string, object_hook=object_hook)\n yield from PythonDeserializer(objects, **options)\n except Exception as exc:\n # ugly construction to overcome pylint's warning\n # \"The except handler raises immediately\"\n if isinstance(exc, (GeneratorExit, DeserializationError)):\n raise\n raise DeserializationError() from exc\n", "path": "saleor/core/utils/json_serializer.py"}]}
| 457 | 444 |
gh_patches_debug_13540
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-7939
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Using on_event with string names in Python2.7
As discovered in #7893, trying to set `on_event` with a string (i.e. `"tap"` instead of `events.Tap`) currently fails with:
``` python
=================================== FAILURES ===================================
Test_standalone_docs_json_and_render_items.test_log_warning_if_python_event_callback
self = <bokeh.embed.tests.test_util.Test_standalone_docs_json_and_render_items object at 0x7f6f7a9a5950>
caplog = <_pytest.logging.LogCaptureFixture object at 0x7f6f7a9a57d0>
def test_log_warning_if_python_event_callback(self, caplog):
d = Document()
m1 = EmbedTestUtilModel()
c1 = _GoodEventCallback()
d.add_root(m1)
> m1.on_event('tap', c1)
bokeh/embed/tests/test_util.py:155:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = EmbedTestUtilModel(id='b0037dfa-87fa-48d7-82ea-df2b14d2aff4', ...)
event = 'tap'
callbacks = (<bokeh.embed.tests.test_util._GoodEventCallback object at 0x7f6f7a9a5d90>,)
def on_event(self, event, *callbacks):
> if not isinstance(event, str) and issubclass(event, Event):
E TypeError: issubclass() arg 1 must be a class
bokeh/util/callback_manager.py:31: TypeError
```
We should:
- [ ] Add tests that check setting `on_event` with a string
- [ ] Fix `bokeh/util/callback_manager.py` to accept strings in Python2.7
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/util/callback_manager.py`
Content:
```
1 ''' Provides ``PropertyCallbackManager`` and ``EventCallbackManager``
2 mixin classes for adding ``on_change`` and ``on_event`` callback
3 interfaces to classes.
4 '''
5 from __future__ import absolute_import
6
7 from ..events import Event
8 from ..util.future import get_param_info, format_signature, signature
9
10 def _check_callback(callback, fargs, what="Callback functions"):
11 '''Bokeh-internal function to check callback signature'''
12 sig = signature(callback)
13 formatted_args = format_signature(sig)
14 error_msg = what + " must have signature func(%s), got func%s"
15
16 all_names, default_values = get_param_info(sig)
17
18 if len(all_names) - len(default_values) != len(fargs):
19 raise ValueError(error_msg % (", ".join(fargs), formatted_args))
20
21 class EventCallbackManager(object):
22 ''' A mixin class to provide an interface for registering and
23 triggering event callbacks on the Python side.
24
25 '''
26 def __init__(self, *args, **kw):
27 super(EventCallbackManager, self).__init__(*args, **kw)
28 self._event_callbacks = dict()
29
30 def on_event(self, event, *callbacks):
31 if not isinstance(event, str) and issubclass(event, Event):
32 event = event.event_name
33
34 for callback in callbacks:
35 _check_callback(callback, ('event',), what='Event callback')
36
37 if event not in self._event_callbacks:
38 self._event_callbacks[event] = [cb for cb in callbacks]
39 else:
40 self._event_callbacks[event].extend(callbacks)
41
42 if event not in self.subscribed_events:
43 self.subscribed_events.append(event)
44
45 def _trigger_event(self, event):
46 for callback in self._event_callbacks.get(event.event_name,[]):
47 if event._model_id is not None and self._id == event._model_id:
48 callback(event)
49
50 def _update_event_callbacks(self):
51 if self.document is None:
52 return
53
54 for key in self._event_callbacks:
55 self.document._subscribed_models[key].add(self)
56
57
58 class PropertyCallbackManager(object):
59 ''' A mixin class to provide an interface for registering and
60 triggering callbacks.
61
62 '''
63
64 def __init__(self, *args, **kw):
65 super(PropertyCallbackManager, self).__init__(*args, **kw)
66 self._callbacks = dict()
67
68 def on_change(self, attr, *callbacks):
69 ''' Add a callback on this object to trigger when ``attr`` changes.
70
71 Args:
72 attr (str) : an attribute name on this object
73 callback (callable) : a callback function to register
74
75 Returns:
76 None
77
78 '''
79 if len(callbacks) == 0:
80 raise ValueError("on_change takes an attribute name and one or more callbacks, got only one parameter")
81
82 _callbacks = self._callbacks.setdefault(attr, [])
83 for callback in callbacks:
84
85 if callback in _callbacks:
86 continue
87
88 _check_callback(callback, ('attr', 'old', 'new'))
89
90 _callbacks.append(callback)
91
92 def remove_on_change(self, attr, *callbacks):
93 ''' Remove a callback from this object '''
94 if len(callbacks) == 0:
95 raise ValueError("remove_on_change takes an attribute name and one or more callbacks, got only one parameter")
96 _callbacks = self._callbacks.setdefault(attr, [])
97 for callback in callbacks:
98 _callbacks.remove(callback)
99
100 def trigger(self, attr, old, new, hint=None, setter=None):
101 ''' Trigger callbacks for ``attr`` on this object.
102
103 Args:
104 attr (str) :
105 old (object) :
106 new (object) :
107
108 Returns:
109 None
110
111 '''
112 def invoke():
113 callbacks = self._callbacks.get(attr)
114 if callbacks:
115 for callback in callbacks:
116 callback(attr, old, new)
117 if hasattr(self, '_document') and self._document is not None:
118 self._document._notify_change(self, attr, old, new, hint, setter, invoke)
119 else:
120 invoke()
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bokeh/util/callback_manager.py b/bokeh/util/callback_manager.py
--- a/bokeh/util/callback_manager.py
+++ b/bokeh/util/callback_manager.py
@@ -4,6 +4,8 @@
'''
from __future__ import absolute_import
+from six import string_types
+
from ..events import Event
from ..util.future import get_param_info, format_signature, signature
@@ -28,7 +30,7 @@
self._event_callbacks = dict()
def on_event(self, event, *callbacks):
- if not isinstance(event, str) and issubclass(event, Event):
+ if not isinstance(event, string_types) and issubclass(event, Event):
event = event.event_name
for callback in callbacks:
|
{"golden_diff": "diff --git a/bokeh/util/callback_manager.py b/bokeh/util/callback_manager.py\n--- a/bokeh/util/callback_manager.py\n+++ b/bokeh/util/callback_manager.py\n@@ -4,6 +4,8 @@\n '''\n from __future__ import absolute_import\n \n+from six import string_types\n+\n from ..events import Event\n from ..util.future import get_param_info, format_signature, signature\n \n@@ -28,7 +30,7 @@\n self._event_callbacks = dict()\n \n def on_event(self, event, *callbacks):\n- if not isinstance(event, str) and issubclass(event, Event):\n+ if not isinstance(event, string_types) and issubclass(event, Event):\n event = event.event_name\n \n for callback in callbacks:\n", "issue": "Using on_event with string names in Python2.7\nAs discovered in #7893, trying to set `on_event` with a string (i.e. `\"tap\"` instead of `events.Tap`) currently fails with:\r\n\r\n``` python\r\n=================================== FAILURES ===================================\r\n Test_standalone_docs_json_and_render_items.test_log_warning_if_python_event_callback \r\nself = <bokeh.embed.tests.test_util.Test_standalone_docs_json_and_render_items object at 0x7f6f7a9a5950>\r\ncaplog = <_pytest.logging.LogCaptureFixture object at 0x7f6f7a9a57d0>\r\n def test_log_warning_if_python_event_callback(self, caplog):\r\n d = Document()\r\n m1 = EmbedTestUtilModel()\r\n c1 = _GoodEventCallback()\r\n d.add_root(m1)\r\n \r\n> m1.on_event('tap', c1)\r\nbokeh/embed/tests/test_util.py:155: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nself = EmbedTestUtilModel(id='b0037dfa-87fa-48d7-82ea-df2b14d2aff4', ...)\r\nevent = 'tap'\r\ncallbacks = (<bokeh.embed.tests.test_util._GoodEventCallback object at 0x7f6f7a9a5d90>,)\r\n def on_event(self, event, *callbacks):\r\n> if not isinstance(event, str) and issubclass(event, Event):\r\nE TypeError: issubclass() arg 1 must be a class\r\nbokeh/util/callback_manager.py:31: TypeError\r\n```\r\n\r\nWe should:\r\n- [ ] Add tests that check setting `on_event` with a string\r\n- [ ] Fix `bokeh/util/callback_manager.py` to accept strings in Python2.7\n", "before_files": [{"content": "''' Provides ``PropertyCallbackManager`` and ``EventCallbackManager``\nmixin classes for adding ``on_change`` and ``on_event`` callback\ninterfaces to classes.\n'''\nfrom __future__ import absolute_import\n\nfrom ..events import Event\nfrom ..util.future import get_param_info, format_signature, signature\n\ndef _check_callback(callback, fargs, what=\"Callback functions\"):\n '''Bokeh-internal function to check callback signature'''\n sig = signature(callback)\n formatted_args = format_signature(sig)\n error_msg = what + \" must have signature func(%s), got func%s\"\n\n all_names, default_values = get_param_info(sig)\n\n if len(all_names) - len(default_values) != len(fargs):\n raise ValueError(error_msg % (\", \".join(fargs), formatted_args))\n\nclass EventCallbackManager(object):\n ''' A mixin class to provide an interface for registering and\n triggering event callbacks on the Python side.\n\n '''\n def __init__(self, *args, **kw):\n super(EventCallbackManager, self).__init__(*args, **kw)\n self._event_callbacks = dict()\n\n def on_event(self, event, *callbacks):\n if not isinstance(event, str) and issubclass(event, Event):\n event = event.event_name\n\n for callback in callbacks:\n _check_callback(callback, ('event',), what='Event callback')\n\n if event not in self._event_callbacks:\n self._event_callbacks[event] = [cb for cb in callbacks]\n else:\n 
self._event_callbacks[event].extend(callbacks)\n\n if event not in self.subscribed_events:\n self.subscribed_events.append(event)\n\n def _trigger_event(self, event):\n for callback in self._event_callbacks.get(event.event_name,[]):\n if event._model_id is not None and self._id == event._model_id:\n callback(event)\n\n def _update_event_callbacks(self):\n if self.document is None:\n return\n\n for key in self._event_callbacks:\n self.document._subscribed_models[key].add(self)\n\n\nclass PropertyCallbackManager(object):\n ''' A mixin class to provide an interface for registering and\n triggering callbacks.\n\n '''\n\n def __init__(self, *args, **kw):\n super(PropertyCallbackManager, self).__init__(*args, **kw)\n self._callbacks = dict()\n\n def on_change(self, attr, *callbacks):\n ''' Add a callback on this object to trigger when ``attr`` changes.\n\n Args:\n attr (str) : an attribute name on this object\n callback (callable) : a callback function to register\n\n Returns:\n None\n\n '''\n if len(callbacks) == 0:\n raise ValueError(\"on_change takes an attribute name and one or more callbacks, got only one parameter\")\n\n _callbacks = self._callbacks.setdefault(attr, [])\n for callback in callbacks:\n\n if callback in _callbacks:\n continue\n\n _check_callback(callback, ('attr', 'old', 'new'))\n\n _callbacks.append(callback)\n\n def remove_on_change(self, attr, *callbacks):\n ''' Remove a callback from this object '''\n if len(callbacks) == 0:\n raise ValueError(\"remove_on_change takes an attribute name and one or more callbacks, got only one parameter\")\n _callbacks = self._callbacks.setdefault(attr, [])\n for callback in callbacks:\n _callbacks.remove(callback)\n\n def trigger(self, attr, old, new, hint=None, setter=None):\n ''' Trigger callbacks for ``attr`` on this object.\n\n Args:\n attr (str) :\n old (object) :\n new (object) :\n\n Returns:\n None\n\n '''\n def invoke():\n callbacks = self._callbacks.get(attr)\n if callbacks:\n for callback in callbacks:\n callback(attr, old, new)\n if hasattr(self, '_document') and self._document is not None:\n self._document._notify_change(self, attr, old, new, hint, setter, invoke)\n else:\n invoke()\n", "path": "bokeh/util/callback_manager.py"}], "after_files": [{"content": "''' Provides ``PropertyCallbackManager`` and ``EventCallbackManager``\nmixin classes for adding ``on_change`` and ``on_event`` callback\ninterfaces to classes.\n'''\nfrom __future__ import absolute_import\n\nfrom six import string_types\n\nfrom ..events import Event\nfrom ..util.future import get_param_info, format_signature, signature\n\ndef _check_callback(callback, fargs, what=\"Callback functions\"):\n '''Bokeh-internal function to check callback signature'''\n sig = signature(callback)\n formatted_args = format_signature(sig)\n error_msg = what + \" must have signature func(%s), got func%s\"\n\n all_names, default_values = get_param_info(sig)\n\n if len(all_names) - len(default_values) != len(fargs):\n raise ValueError(error_msg % (\", \".join(fargs), formatted_args))\n\nclass EventCallbackManager(object):\n ''' A mixin class to provide an interface for registering and\n triggering event callbacks on the Python side.\n\n '''\n def __init__(self, *args, **kw):\n super(EventCallbackManager, self).__init__(*args, **kw)\n self._event_callbacks = dict()\n\n def on_event(self, event, *callbacks):\n if not isinstance(event, string_types) and issubclass(event, Event):\n event = event.event_name\n\n for callback in callbacks:\n _check_callback(callback, ('event',), what='Event 
callback')\n\n if event not in self._event_callbacks:\n self._event_callbacks[event] = [cb for cb in callbacks]\n else:\n self._event_callbacks[event].extend(callbacks)\n\n if event not in self.subscribed_events:\n self.subscribed_events.append(event)\n\n def _trigger_event(self, event):\n for callback in self._event_callbacks.get(event.event_name,[]):\n if event._model_id is not None and self._id == event._model_id:\n callback(event)\n\n def _update_event_callbacks(self):\n if self.document is None:\n return\n\n for key in self._event_callbacks:\n self.document._subscribed_models[key].add(self)\n\n\nclass PropertyCallbackManager(object):\n ''' A mixin class to provide an interface for registering and\n triggering callbacks.\n\n '''\n\n def __init__(self, *args, **kw):\n super(PropertyCallbackManager, self).__init__(*args, **kw)\n self._callbacks = dict()\n\n def on_change(self, attr, *callbacks):\n ''' Add a callback on this object to trigger when ``attr`` changes.\n\n Args:\n attr (str) : an attribute name on this object\n callback (callable) : a callback function to register\n\n Returns:\n None\n\n '''\n if len(callbacks) == 0:\n raise ValueError(\"on_change takes an attribute name and one or more callbacks, got only one parameter\")\n\n _callbacks = self._callbacks.setdefault(attr, [])\n for callback in callbacks:\n\n if callback in _callbacks:\n continue\n\n _check_callback(callback, ('attr', 'old', 'new'))\n\n _callbacks.append(callback)\n\n def remove_on_change(self, attr, *callbacks):\n ''' Remove a callback from this object '''\n if len(callbacks) == 0:\n raise ValueError(\"remove_on_change takes an attribute name and one or more callbacks, got only one parameter\")\n _callbacks = self._callbacks.setdefault(attr, [])\n for callback in callbacks:\n _callbacks.remove(callback)\n\n def trigger(self, attr, old, new, hint=None, setter=None):\n ''' Trigger callbacks for ``attr`` on this object.\n\n Args:\n attr (str) :\n old (object) :\n new (object) :\n\n Returns:\n None\n\n '''\n def invoke():\n callbacks = self._callbacks.get(attr)\n if callbacks:\n for callback in callbacks:\n callback(attr, old, new)\n if hasattr(self, '_document') and self._document is not None:\n self._document._notify_change(self, attr, old, new, hint, setter, invoke)\n else:\n invoke()\n", "path": "bokeh/util/callback_manager.py"}]}
| 1,803 | 165 |
gh_patches_debug_6987
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-750
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
-f does not work as expected in rio-warp
Running on a preexisting file
```
» rio warp tests/data/RGB.byte.tif /tmp/hi.tif
Error: Could not open file : file exists and won't be overwritten without use of the `-f` or `-o` options.
```
Then
```
» rio warp tests/data/RGB.byte.tif /tmp/hi.tif -f
Error: -f option requires an argument
```
But
```
rio warp tests/data/RGB.byte.tif /tmp/hi.tif --force-overwrite
```
Works.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/helpers.py`
Content:
```
1 """
2 Helper objects used by multiple CLI commands.
3 """
4
5 import json
6 import os
7
8 from rasterio.errors import FileOverwriteError
9
10
11 def coords(obj):
12 """Yield all coordinate coordinate tuples from a geometry or feature.
13 From python-geojson package."""
14 if isinstance(obj, (tuple, list)):
15 coordinates = obj
16 elif 'geometry' in obj:
17 coordinates = obj['geometry']['coordinates']
18 else:
19 coordinates = obj.get('coordinates', obj)
20 for e in coordinates:
21 if isinstance(e, (float, int)):
22 yield tuple(coordinates)
23 break
24 else:
25 for f in coords(e):
26 yield f
27
28
29 def write_features(
30 fobj, collection, sequence=False, geojson_type='feature', use_rs=False,
31 **dump_kwds):
32 """Read an iterator of (feat, bbox) pairs and write to file using
33 the selected modes."""
34 # Sequence of features expressed as bbox, feature, or collection.
35 if sequence:
36 for feat in collection():
37 xs, ys = zip(*coords(feat))
38 bbox = (min(xs), min(ys), max(xs), max(ys))
39 if use_rs:
40 fobj.write(u'\u001e')
41 if geojson_type == 'feature':
42 fobj.write(json.dumps(feat, **dump_kwds))
43 elif geojson_type == 'bbox':
44 fobj.write(json.dumps(bbox, **dump_kwds))
45 else:
46 fobj.write(
47 json.dumps({
48 'type': 'FeatureCollection',
49 'bbox': bbox,
50 'features': [feat]}, **dump_kwds))
51 fobj.write('\n')
52 # Aggregate all features into a single object expressed as
53 # bbox or collection.
54 else:
55 features = list(collection())
56 if geojson_type == 'bbox':
57 fobj.write(json.dumps(collection.bbox, **dump_kwds))
58 elif geojson_type == 'feature':
59 fobj.write(json.dumps(features[0], **dump_kwds))
60 else:
61 fobj.write(json.dumps({
62 'bbox': collection.bbox,
63 'type': 'FeatureCollection',
64 'features': features},
65 **dump_kwds))
66 fobj.write('\n')
67
68
69 def resolve_inout(input=None, output=None, files=None, force_overwrite=False):
70 """Resolves inputs and outputs from standard args and options.
71
72 :param input: a single input filename, optional.
73 :param output: a single output filename, optional.
74 :param files: a sequence of filenames in which the last is the
75 output filename.
76 :param force_overwrite: whether to force overwriting the output
77 file, bool.
78 :return: the resolved output filename and input filenames as a
79 tuple of length 2.
80
81 If provided, the :param:`output` file may be overwritten. An output
82 file extracted from :param:`files` will not be overwritten unless
83 :param:`force_overwrite` is `True`.
84 """
85 resolved_output = output or (files[-1] if files else None)
86 force_overwrite = output is not None or force_overwrite
87 if not force_overwrite and resolved_output and os.path.exists(
88 resolved_output):
89 raise FileOverwriteError(
90 "file exists and won't be overwritten without use of the "
91 "`-f` or `-o` options.")
92 resolved_inputs = (
93 [input] if input else [] +
94 list(files[:-1 if not output else None]) if files else [])
95 return resolved_output, resolved_inputs
96
97
98 def to_lower(ctx, param, value):
99 """Click callback, converts values to lowercase."""
100 return value.lower()
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rasterio/rio/helpers.py b/rasterio/rio/helpers.py
--- a/rasterio/rio/helpers.py
+++ b/rasterio/rio/helpers.py
@@ -88,7 +88,7 @@
resolved_output):
raise FileOverwriteError(
"file exists and won't be overwritten without use of the "
- "`-f` or `-o` options.")
+ "`--force-overwrite` or `--output` options.")
resolved_inputs = (
[input] if input else [] +
list(files[:-1 if not output else None]) if files else [])
|
{"golden_diff": "diff --git a/rasterio/rio/helpers.py b/rasterio/rio/helpers.py\n--- a/rasterio/rio/helpers.py\n+++ b/rasterio/rio/helpers.py\n@@ -88,7 +88,7 @@\n resolved_output):\n raise FileOverwriteError(\n \"file exists and won't be overwritten without use of the \"\n- \"`-f` or `-o` options.\")\n+ \"`--force-overwrite` or `--output` options.\")\n resolved_inputs = (\n [input] if input else [] +\n list(files[:-1 if not output else None]) if files else [])\n", "issue": "-f does not work as expected in rio-warp\nRunning on a preexisting file\n\n```\n\u00bb rio warp tests/data/RGB.byte.tif /tmp/hi.tif\nError: Could not open file : file exists and won't be overwritten without use of the `-f` or `-o` options.\n```\n\nThen\n\n```\n\u00bb rio warp tests/data/RGB.byte.tif /tmp/hi.tif -f\nError: -f option requires an argument\n```\n\nBut\n\n```\nrio warp tests/data/RGB.byte.tif /tmp/hi.tif --force-overwrite\n```\n\nWorks.\n\n", "before_files": [{"content": "\"\"\"\nHelper objects used by multiple CLI commands.\n\"\"\"\n\nimport json\nimport os\n\nfrom rasterio.errors import FileOverwriteError\n\n\ndef coords(obj):\n \"\"\"Yield all coordinate coordinate tuples from a geometry or feature.\n From python-geojson package.\"\"\"\n if isinstance(obj, (tuple, list)):\n coordinates = obj\n elif 'geometry' in obj:\n coordinates = obj['geometry']['coordinates']\n else:\n coordinates = obj.get('coordinates', obj)\n for e in coordinates:\n if isinstance(e, (float, int)):\n yield tuple(coordinates)\n break\n else:\n for f in coords(e):\n yield f\n\n\ndef write_features(\n fobj, collection, sequence=False, geojson_type='feature', use_rs=False,\n **dump_kwds):\n \"\"\"Read an iterator of (feat, bbox) pairs and write to file using\n the selected modes.\"\"\"\n # Sequence of features expressed as bbox, feature, or collection.\n if sequence:\n for feat in collection():\n xs, ys = zip(*coords(feat))\n bbox = (min(xs), min(ys), max(xs), max(ys))\n if use_rs:\n fobj.write(u'\\u001e')\n if geojson_type == 'feature':\n fobj.write(json.dumps(feat, **dump_kwds))\n elif geojson_type == 'bbox':\n fobj.write(json.dumps(bbox, **dump_kwds))\n else:\n fobj.write(\n json.dumps({\n 'type': 'FeatureCollection',\n 'bbox': bbox,\n 'features': [feat]}, **dump_kwds))\n fobj.write('\\n')\n # Aggregate all features into a single object expressed as\n # bbox or collection.\n else:\n features = list(collection())\n if geojson_type == 'bbox':\n fobj.write(json.dumps(collection.bbox, **dump_kwds))\n elif geojson_type == 'feature':\n fobj.write(json.dumps(features[0], **dump_kwds))\n else:\n fobj.write(json.dumps({\n 'bbox': collection.bbox,\n 'type': 'FeatureCollection',\n 'features': features},\n **dump_kwds))\n fobj.write('\\n')\n\n\ndef resolve_inout(input=None, output=None, files=None, force_overwrite=False):\n \"\"\"Resolves inputs and outputs from standard args and options.\n\n :param input: a single input filename, optional.\n :param output: a single output filename, optional.\n :param files: a sequence of filenames in which the last is the\n output filename.\n :param force_overwrite: whether to force overwriting the output\n file, bool.\n :return: the resolved output filename and input filenames as a\n tuple of length 2.\n\n If provided, the :param:`output` file may be overwritten. 
An output\n file extracted from :param:`files` will not be overwritten unless\n :param:`force_overwrite` is `True`.\n \"\"\"\n resolved_output = output or (files[-1] if files else None)\n force_overwrite = output is not None or force_overwrite\n if not force_overwrite and resolved_output and os.path.exists(\n resolved_output):\n raise FileOverwriteError(\n \"file exists and won't be overwritten without use of the \"\n \"`-f` or `-o` options.\")\n resolved_inputs = (\n [input] if input else [] +\n list(files[:-1 if not output else None]) if files else [])\n return resolved_output, resolved_inputs\n\n\ndef to_lower(ctx, param, value):\n \"\"\"Click callback, converts values to lowercase.\"\"\"\n return value.lower()\n", "path": "rasterio/rio/helpers.py"}], "after_files": [{"content": "\"\"\"\nHelper objects used by multiple CLI commands.\n\"\"\"\n\nimport json\nimport os\n\nfrom rasterio.errors import FileOverwriteError\n\n\ndef coords(obj):\n \"\"\"Yield all coordinate coordinate tuples from a geometry or feature.\n From python-geojson package.\"\"\"\n if isinstance(obj, (tuple, list)):\n coordinates = obj\n elif 'geometry' in obj:\n coordinates = obj['geometry']['coordinates']\n else:\n coordinates = obj.get('coordinates', obj)\n for e in coordinates:\n if isinstance(e, (float, int)):\n yield tuple(coordinates)\n break\n else:\n for f in coords(e):\n yield f\n\n\ndef write_features(\n fobj, collection, sequence=False, geojson_type='feature', use_rs=False,\n **dump_kwds):\n \"\"\"Read an iterator of (feat, bbox) pairs and write to file using\n the selected modes.\"\"\"\n # Sequence of features expressed as bbox, feature, or collection.\n if sequence:\n for feat in collection():\n xs, ys = zip(*coords(feat))\n bbox = (min(xs), min(ys), max(xs), max(ys))\n if use_rs:\n fobj.write(u'\\u001e')\n if geojson_type == 'feature':\n fobj.write(json.dumps(feat, **dump_kwds))\n elif geojson_type == 'bbox':\n fobj.write(json.dumps(bbox, **dump_kwds))\n else:\n fobj.write(\n json.dumps({\n 'type': 'FeatureCollection',\n 'bbox': bbox,\n 'features': [feat]}, **dump_kwds))\n fobj.write('\\n')\n # Aggregate all features into a single object expressed as\n # bbox or collection.\n else:\n features = list(collection())\n if geojson_type == 'bbox':\n fobj.write(json.dumps(collection.bbox, **dump_kwds))\n elif geojson_type == 'feature':\n fobj.write(json.dumps(features[0], **dump_kwds))\n else:\n fobj.write(json.dumps({\n 'bbox': collection.bbox,\n 'type': 'FeatureCollection',\n 'features': features},\n **dump_kwds))\n fobj.write('\\n')\n\n\ndef resolve_inout(input=None, output=None, files=None, force_overwrite=False):\n \"\"\"Resolves inputs and outputs from standard args and options.\n\n :param input: a single input filename, optional.\n :param output: a single output filename, optional.\n :param files: a sequence of filenames in which the last is the\n output filename.\n :param force_overwrite: whether to force overwriting the output\n file, bool.\n :return: the resolved output filename and input filenames as a\n tuple of length 2.\n\n If provided, the :param:`output` file may be overwritten. 
An output\n file extracted from :param:`files` will not be overwritten unless\n :param:`force_overwrite` is `True`.\n \"\"\"\n resolved_output = output or (files[-1] if files else None)\n force_overwrite = output is not None or force_overwrite\n if not force_overwrite and resolved_output and os.path.exists(\n resolved_output):\n raise FileOverwriteError(\n \"file exists and won't be overwritten without use of the \"\n \"`--force-overwrite` or `--output` options.\")\n resolved_inputs = (\n [input] if input else [] +\n list(files[:-1 if not output else None]) if files else [])\n return resolved_output, resolved_inputs\n\n\ndef to_lower(ctx, param, value):\n \"\"\"Click callback, converts values to lowercase.\"\"\"\n return value.lower()\n", "path": "rasterio/rio/helpers.py"}]}
| 1,361 | 133 |
gh_patches_debug_8115
|
rasdani/github-patches
|
git_diff
|
ocadotechnology__aimmo-51
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Submitting empty code leads to server error
When sending empty program, we are getting:
Traceback:
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response
1. response = wrapped_callback(request, _callback_args, *_callback_kwargs)
File "/Library/Python/2.7/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
2. return view_func(request, _args, *_kwargs)
File "/Users/paulina.koch/Documents/C4L/forks/aimmo/ui/players/views.py" in code
3. world.player_changed_code(request.user.id, request.user.player.code)
File "/Users/paulina.koch/Documents/C4L/forks/aimmo/simulation/game_state.py" in player_changed_code
4. avatar.set_code(code)
File "/Users/paulina.koch/Documents/C4L/forks/aimmo/simulation/avatar/avatar_wrapper.py" in set_code
5. self.avatar = Avatar()
Exception Type: TypeError at /api/code/
Exception Value: 'NoneType' object is not callable
Simple check should be enough, but we could use it as a chance to implement an extension of validation or error handling/printing when submitting code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `simulation/avatar/avatar_wrapper.py`
Content:
```
1 import traceback
2 import sys
3
4 from simulation.action import WaitAction
5
6
7 # This class will be implemented by the player
8 Avatar = None
9
10
11 class UserCodeException(Exception):
12 def __init__(self, *args, **kwargs):
13 super(Exception, self).__init__(*args, **kwargs)
14 self.exc_type, self.exc_value, self.exc_traceback = sys.exc_info()
15
16 def to_user_string(self):
17 lines = traceback.format_exception(self.exc_type, self.exc_value, self.exc_traceback)
18 return '<br/>'.join(lines)
19
20
21 class AvatarWrapper(object):
22 """
23 The application's view of a character, not to be confused with "Avatar", the player-supplied code.
24 """
25
26 def __init__(self, initial_location, initial_code, player_id, avatar_appearance):
27 self.location = initial_location
28 self.health = 5
29 self.score = 0
30 self.events = []
31 self.player_id = player_id
32 self.avatar_appearance = avatar_appearance
33 self.avatar = None
34
35 self.set_code(initial_code)
36
37 def handle_turn(self, state):
38 try:
39 next_action = self.avatar.handle_turn(state, self.events)
40 except Exception as e:
41 # TODO: tell user their program threw an exception during execution somehow...
42 print('avatar threw exception during handle_turn:', e)
43 traceback.print_exc()
44 next_action = WaitAction()
45 # Reset event log
46 self.events = []
47
48 return next_action
49
50 def die(self, respawn_location):
51 # TODO: extract settings for health and score loss on death
52 self.health = 5
53 self.score = max(0, self.score - 2)
54 self.location = respawn_location
55
56 def add_event(self, event):
57 self.events.append(event)
58
59 def set_code(self, code):
60 self.code = code
61 try:
62 exec(code)
63 except Exception as ex:
64 raise UserCodeException("Exception in user code", ex)
65 self.avatar = Avatar()
66
67 def __repr__(self):
68 return 'Avatar(id={}, location={}, health={}, score={})'.format(self.player_id, self.location,
69 self.health, self.score)
70
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/simulation/avatar/avatar_wrapper.py b/simulation/avatar/avatar_wrapper.py
--- a/simulation/avatar/avatar_wrapper.py
+++ b/simulation/avatar/avatar_wrapper.py
@@ -60,9 +60,9 @@
self.code = code
try:
exec(code)
+ self.avatar = Avatar()
except Exception as ex:
raise UserCodeException("Exception in user code", ex)
- self.avatar = Avatar()
def __repr__(self):
return 'Avatar(id={}, location={}, health={}, score={})'.format(self.player_id, self.location,
|
{"golden_diff": "diff --git a/simulation/avatar/avatar_wrapper.py b/simulation/avatar/avatar_wrapper.py\n--- a/simulation/avatar/avatar_wrapper.py\n+++ b/simulation/avatar/avatar_wrapper.py\n@@ -60,9 +60,9 @@\n self.code = code\n try:\n exec(code)\n+ self.avatar = Avatar()\n except Exception as ex:\n raise UserCodeException(\"Exception in user code\", ex)\n- self.avatar = Avatar()\n \n def __repr__(self):\n return 'Avatar(id={}, location={}, health={}, score={})'.format(self.player_id, self.location,\n", "issue": "Submitting empty code leads to server error\nWhen sending empty program, we are getting:\n\nTraceback:\nFile \"/Library/Python/2.7/site-packages/django/core/handlers/base.py\" in get_response\n1. response = wrapped_callback(request, _callback_args, *_callback_kwargs)\n File \"/Library/Python/2.7/site-packages/django/contrib/auth/decorators.py\" in _wrapped_view\n2. return view_func(request, _args, *_kwargs)\n File \"/Users/paulina.koch/Documents/C4L/forks/aimmo/ui/players/views.py\" in code\n3. world.player_changed_code(request.user.id, request.user.player.code)\n File \"/Users/paulina.koch/Documents/C4L/forks/aimmo/simulation/game_state.py\" in player_changed_code\n4. avatar.set_code(code)\n File \"/Users/paulina.koch/Documents/C4L/forks/aimmo/simulation/avatar/avatar_wrapper.py\" in set_code\n5. self.avatar = Avatar()\n\nException Type: TypeError at /api/code/\nException Value: 'NoneType' object is not callable\n\nSimple check should be enough, but we could use it as a chance to implement an extension of validation or error handling/printing when submitting code.\n\n", "before_files": [{"content": "import traceback\nimport sys\n\nfrom simulation.action import WaitAction\n\n\n# This class will be implemented by the player\nAvatar = None\n\n\nclass UserCodeException(Exception):\n def __init__(self, *args, **kwargs):\n super(Exception, self).__init__(*args, **kwargs)\n self.exc_type, self.exc_value, self.exc_traceback = sys.exc_info()\n\n def to_user_string(self):\n lines = traceback.format_exception(self.exc_type, self.exc_value, self.exc_traceback)\n return '<br/>'.join(lines)\n\n\nclass AvatarWrapper(object):\n \"\"\"\n The application's view of a character, not to be confused with \"Avatar\", the player-supplied code.\n \"\"\"\n\n def __init__(self, initial_location, initial_code, player_id, avatar_appearance):\n self.location = initial_location\n self.health = 5\n self.score = 0\n self.events = []\n self.player_id = player_id\n self.avatar_appearance = avatar_appearance\n self.avatar = None\n\n self.set_code(initial_code)\n\n def handle_turn(self, state):\n try:\n next_action = self.avatar.handle_turn(state, self.events)\n except Exception as e:\n # TODO: tell user their program threw an exception during execution somehow...\n print('avatar threw exception during handle_turn:', e)\n traceback.print_exc()\n next_action = WaitAction()\n # Reset event log\n self.events = []\n\n return next_action\n\n def die(self, respawn_location):\n # TODO: extract settings for health and score loss on death\n self.health = 5\n self.score = max(0, self.score - 2)\n self.location = respawn_location\n\n def add_event(self, event):\n self.events.append(event)\n\n def set_code(self, code):\n self.code = code\n try:\n exec(code)\n except Exception as ex:\n raise UserCodeException(\"Exception in user code\", ex)\n self.avatar = Avatar()\n\n def __repr__(self):\n return 'Avatar(id={}, location={}, health={}, score={})'.format(self.player_id, self.location,\n self.health, self.score)\n\n", "path": 
"simulation/avatar/avatar_wrapper.py"}], "after_files": [{"content": "import traceback\nimport sys\n\nfrom simulation.action import WaitAction\n\n\n# This class will be implemented by the player\nAvatar = None\n\n\nclass UserCodeException(Exception):\n def __init__(self, *args, **kwargs):\n super(Exception, self).__init__(*args, **kwargs)\n self.exc_type, self.exc_value, self.exc_traceback = sys.exc_info()\n\n def to_user_string(self):\n lines = traceback.format_exception(self.exc_type, self.exc_value, self.exc_traceback)\n return '<br/>'.join(lines)\n\n\nclass AvatarWrapper(object):\n \"\"\"\n The application's view of a character, not to be confused with \"Avatar\", the player-supplied code.\n \"\"\"\n\n def __init__(self, initial_location, initial_code, player_id, avatar_appearance):\n self.location = initial_location\n self.health = 5\n self.score = 0\n self.events = []\n self.player_id = player_id\n self.avatar_appearance = avatar_appearance\n self.avatar = None\n\n self.set_code(initial_code)\n\n def handle_turn(self, state):\n try:\n next_action = self.avatar.handle_turn(state, self.events)\n except Exception as e:\n # TODO: tell user their program threw an exception during execution somehow...\n print('avatar threw exception during handle_turn:', e)\n traceback.print_exc()\n next_action = WaitAction()\n # Reset event log\n self.events = []\n\n return next_action\n\n def die(self, respawn_location):\n # TODO: extract settings for health and score loss on death\n self.health = 5\n self.score = max(0, self.score - 2)\n self.location = respawn_location\n\n def add_event(self, event):\n self.events.append(event)\n\n def set_code(self, code):\n self.code = code\n try:\n exec(code)\n self.avatar = Avatar()\n except Exception as ex:\n raise UserCodeException(\"Exception in user code\", ex)\n\n def __repr__(self):\n return 'Avatar(id={}, location={}, health={}, score={})'.format(self.player_id, self.location,\n self.health, self.score)\n\n", "path": "simulation/avatar/avatar_wrapper.py"}]}
| 1,130 | 124 |
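The entry above patches `simulation/avatar/avatar_wrapper.py` so that `self.avatar = Avatar()` is constructed inside the `try` block, letting an empty or broken submission surface as `UserCodeException` instead of an unhandled `TypeError`. The sketch below only illustrates that pattern; it uses a local namespace for `exec` and a standalone `set_code` function, which are simplifications and not the repository's actual code.

```python
class UserCodeException(Exception):
    """Raised when executing player-submitted code fails."""


def set_code(code):
    """Run player code and return an instance of the Avatar class it defines.

    Raises UserCodeException when the code cannot be executed or does not
    define a callable Avatar (for example, an empty submission).
    """
    namespace = {}
    try:
        exec(code, namespace)
        avatar_cls = namespace.get("Avatar")
        if not callable(avatar_cls):
            raise TypeError("submitted code did not define an Avatar class")
        return avatar_cls()  # constructed inside the try, as in the patch
    except Exception as ex:
        raise UserCodeException("Exception in user code") from ex


try:
    set_code("")  # empty submission, like the one in the issue traceback
except UserCodeException as err:
    print("rejected:", err)
```

The last three lines show the behaviour the issue asks for: an empty program is rejected with a catchable exception rather than a server error.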
gh_patches_debug_1611 | rasdani/github-patches | git_diff | google__mobly-311 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exceptions in `setup_test` should leave the test in `ERROR` status
Regardless of the type of the exception, `setup_test` error should cause `ERROR` status.
This is different from a test method.
In a test method, an exception based on signals.TestFailure should cause the test to exit with `FAILED` status.
This is to be consistent with pyunit's behavior.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mobly/signals.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """This module is where all the test signal classes and related utilities live.
15 """
16
17 import functools
18 import json
19
20
21 class TestSignalError(Exception):
22 """Raised when an error occurs inside a test signal."""
23
24
25 class TestSignal(Exception):
26 """Base class for all test result control signals. This is used to signal
27 the result of a test.
28
29 Attribute:
30 details: A string that describes the reason for raising this signal.
31 extras: A json-serializable data type to convey extra information about
32 a test result.
33 """
34
35 def __init__(self, details, extras=None):
36 super(TestSignal, self).__init__(details)
37 self.details = str(details)
38 try:
39 json.dumps(extras)
40 self.extras = extras
41 except TypeError:
42 raise TestSignalError('Extras must be json serializable. %s '
43 'is not.' % extras)
44
45 def __str__(self):
46 return 'Details=%s, Extras=%s' % (self.details, self.extras)
47
48
49 class TestFailure(TestSignal):
50 """Raised when a test has failed."""
51
52
53 class TestPass(TestSignal):
54 """Raised when a test has passed."""
55
56
57 class TestSkip(TestSignal):
58 """Raised when a test has been skipped."""
59
60
61 class TestAbortClass(TestSignal):
62 """Raised when all subsequent tests within the same test class should
63 be aborted.
64 """
65
66
67 class TestAbortAll(TestSignal):
68 """Raised when all subsequent tests should be aborted."""
69
70
71 class ControllerError(Exception):
72 """Raised when an error occured in controller classes."""
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mobly/signals.py b/mobly/signals.py
--- a/mobly/signals.py
+++ b/mobly/signals.py
@@ -46,6 +46,10 @@
return 'Details=%s, Extras=%s' % (self.details, self.extras)
+class TestError(TestSignal):
+ """Raised when a test has an unexpected error."""
+
+
class TestFailure(TestSignal):
"""Raised when a test has failed."""
|
{"golden_diff": "diff --git a/mobly/signals.py b/mobly/signals.py\n--- a/mobly/signals.py\n+++ b/mobly/signals.py\n@@ -46,6 +46,10 @@\n return 'Details=%s, Extras=%s' % (self.details, self.extras)\n \n \n+class TestError(TestSignal):\n+ \"\"\"Raised when a test has an unexpected error.\"\"\"\n+\n+\n class TestFailure(TestSignal):\n \"\"\"Raised when a test has failed.\"\"\"\n", "issue": "Exceptions in `setup_test` should leave the test in `ERROR` status\nRegardless of the type of the exception, `setup_test` error should cause `ERROR` status.\r\n\r\nThis is different from a test method.\r\nIn a test method, an exception based on signals.TestFailure should cause the test to exit with `FAILED` status.\r\n\r\nThis is to be consistent with pyunit's behavior.\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This module is where all the test signal classes and related utilities live.\n\"\"\"\n\nimport functools\nimport json\n\n\nclass TestSignalError(Exception):\n \"\"\"Raised when an error occurs inside a test signal.\"\"\"\n\n\nclass TestSignal(Exception):\n \"\"\"Base class for all test result control signals. This is used to signal\n the result of a test.\n\n Attribute:\n details: A string that describes the reason for raising this signal.\n extras: A json-serializable data type to convey extra information about\n a test result.\n \"\"\"\n\n def __init__(self, details, extras=None):\n super(TestSignal, self).__init__(details)\n self.details = str(details)\n try:\n json.dumps(extras)\n self.extras = extras\n except TypeError:\n raise TestSignalError('Extras must be json serializable. %s '\n 'is not.' 
% extras)\n\n def __str__(self):\n return 'Details=%s, Extras=%s' % (self.details, self.extras)\n\n\nclass TestFailure(TestSignal):\n \"\"\"Raised when a test has failed.\"\"\"\n\n\nclass TestPass(TestSignal):\n \"\"\"Raised when a test has passed.\"\"\"\n\n\nclass TestSkip(TestSignal):\n \"\"\"Raised when a test has been skipped.\"\"\"\n\n\nclass TestAbortClass(TestSignal):\n \"\"\"Raised when all subsequent tests within the same test class should\n be aborted.\n \"\"\"\n\n\nclass TestAbortAll(TestSignal):\n \"\"\"Raised when all subsequent tests should be aborted.\"\"\"\n\n\nclass ControllerError(Exception):\n \"\"\"Raised when an error occured in controller classes.\"\"\"\n", "path": "mobly/signals.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This module is where all the test signal classes and related utilities live.\n\"\"\"\n\nimport functools\nimport json\n\n\nclass TestSignalError(Exception):\n \"\"\"Raised when an error occurs inside a test signal.\"\"\"\n\n\nclass TestSignal(Exception):\n \"\"\"Base class for all test result control signals. This is used to signal\n the result of a test.\n\n Attribute:\n details: A string that describes the reason for raising this signal.\n extras: A json-serializable data type to convey extra information about\n a test result.\n \"\"\"\n\n def __init__(self, details, extras=None):\n super(TestSignal, self).__init__(details)\n self.details = str(details)\n try:\n json.dumps(extras)\n self.extras = extras\n except TypeError:\n raise TestSignalError('Extras must be json serializable. %s '\n 'is not.' % extras)\n\n def __str__(self):\n return 'Details=%s, Extras=%s' % (self.details, self.extras)\n\n\nclass TestError(TestSignal):\n \"\"\"Raised when a test has an unexpected error.\"\"\"\n\n\nclass TestFailure(TestSignal):\n \"\"\"Raised when a test has failed.\"\"\"\n\n\nclass TestPass(TestSignal):\n \"\"\"Raised when a test has passed.\"\"\"\n\n\nclass TestSkip(TestSignal):\n \"\"\"Raised when a test has been skipped.\"\"\"\n\n\nclass TestAbortClass(TestSignal):\n \"\"\"Raised when all subsequent tests within the same test class should\n be aborted.\n \"\"\"\n\n\nclass TestAbortAll(TestSignal):\n \"\"\"Raised when all subsequent tests should be aborted.\"\"\"\n\n\nclass ControllerError(Exception):\n \"\"\"Raised when an error occured in controller classes.\"\"\"\n", "path": "mobly/signals.py"}]}
| 942 | 106 |
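The golden diff above only introduces the `TestError` signal class. The sketch below shows one way a runner could use such a signal so that any `setup_test` exception ends in `ERROR` while an in-test `TestFailure` ends in `FAILED`; the dispatch logic here is an illustrative assumption, not Mobly's actual base test class.

```python
class TestSignal(Exception):
    """Base class for test result control signals."""


class TestError(TestSignal):
    """Raised when a test has an unexpected error."""


class TestFailure(TestSignal):
    """Raised when a test has failed."""


def run_one_test(setup_test, test_method):
    """Return 'PASSED', 'FAILED' or 'ERROR' for a single test."""
    try:
        try:
            setup_test()
        except Exception as e:
            # Any exception raised during setup is treated as an unexpected error.
            raise TestError("setup_test failed: %s" % e)
        try:
            test_method()
        except TestFailure:
            return "FAILED"  # an explicit failure signal inside the test body
        return "PASSED"
    except TestError:
        return "ERROR"  # setup problems never count as FAILED


def bad_setup():
    raise TestFailure("flaky fixture")  # even a TestFailure in setup maps to ERROR


print(run_one_test(bad_setup, lambda: None))     # ERROR
print(run_one_test(lambda: None, lambda: None))  # PASSED
```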
gh_patches_debug_37219 | rasdani/github-patches | git_diff | streamlink__streamlink-3290 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bfmtv - No playable streams found on this URL
<!--
Thanks for reporting a plugin issue!
USE THE TEMPLATE. Otherwise your plugin issue may be rejected.
First, see the contribution guidelines:
https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink
Also check the list of open and closed plugin issues:
https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22
Please see the text preview to avoid unnecessary formatting errors.
-->
## Plugin Issue
<!-- Replace [ ] with [x] in order to check the box -->
- [x] This is a plugin issue and I have read the contribution guidelines.
### Description
<!-- Explain the plugin issue as thoroughly as you can. -->
There is no playable streams for bfmtv
### Reproduction steps / Explicit stream URLs to test
<!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. -->
`streamlink https://www.bfmtv.com/en-direct/ best`
or `streamlink https://www.bfmtv.com/paris/en-direct/ best`
or any other channels supported by this plugin
### Log output
<!--
TEXT LOG OUTPUT IS REQUIRED for a plugin issue!
Use the `--loglevel debug` parameter and avoid using parameters which suppress log output.
https://streamlink.github.io/cli.html#cmdoption-l
Make sure to **remove usernames and passwords**
You can copy the output to https://gist.github.com/ or paste it below.
Don't post screenshots of the log output and instead copy the text from your terminal application.
-->
```
streamlink --loglevel debug https://www.bfmtv.com/en-direct/ best
[cli][debug] OS: Linux-5.8.15-201.fc32.x86_64-x86_64-with-glibc2.2.5
[cli][debug] Python: 3.8.6
[cli][debug] Streamlink: 1.7.0
[cli][debug] Requests(2.24.0), Socks(1.7.1), Websocket(0.57.0)
[cli][info] Found matching plugin bfmtv for URL https://www.bfmtv.com/en-direct/
error: No playable streams found on this URL: https://www.bfmtv.com/en-direct/
```
### Additional comments, etc.
As a workaround you can use their dailymotion stream: `streamlink https://www.dailymotion.com/embed/video/xgz4t1 best`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/bfmtv.py`
Content:
```
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugins.brightcove import BrightcovePlayer
5 from streamlink.stream import HLSStream
6
7
8 class BFMTV(Plugin):
9 _url_re = re.compile(r'https://.+\.(?:bfmtv|01net)\.com')
10 _brightcove_video_re = re.compile(
11 r'data-holder="video(?P<video_id>[0-9]+)" data-account="(?P<account_id>[0-9]+)"'
12 )
13 _brightcove_video_alt_re = re.compile(
14 r'data-account="(?P<account_id>[0-9]+).*?data-video-id="(?P<video_id>[0-9]+)"',
15 re.DOTALL
16 )
17 _embed_video_url_re = re.compile(
18 r"\$YOPLAYER\('liveStitching', {.+?file: '(?P<video_url>[^\"]+?)'.+?}\);",
19 re.DOTALL
20 )
21
22 @classmethod
23 def can_handle_url(cls, url):
24 return BFMTV._url_re.match(url)
25
26 def _get_streams(self):
27 # Retrieve URL page and search for Brightcove video data
28 res = self.session.http.get(self.url)
29 match = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)
30 if match is not None:
31 account_id = match.group('account_id')
32 video_id = match.group('video_id')
33 player = BrightcovePlayer(self.session, account_id)
34 for stream in player.get_streams(video_id):
35 yield stream
36 else:
37 # Try to get the stream URL in the page
38 match = self._embed_video_url_re.search(res.text)
39 if match is not None:
40 video_url = match.group('video_url')
41 if '.m3u8' in video_url:
42 yield from HLSStream.parse_variant_playlist(self.session, video_url).items()
43
44
45 __plugin__ = BFMTV
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/bfmtv.py b/src/streamlink/plugins/bfmtv.py
--- a/src/streamlink/plugins/bfmtv.py
+++ b/src/streamlink/plugins/bfmtv.py
@@ -1,45 +1,46 @@
+import logging
import re
from streamlink.plugin import Plugin
from streamlink.plugins.brightcove import BrightcovePlayer
-from streamlink.stream import HLSStream
+
+log = logging.getLogger(__name__)
class BFMTV(Plugin):
_url_re = re.compile(r'https://.+\.(?:bfmtv|01net)\.com')
+ _dailymotion_url = 'https://www.dailymotion.com/embed/video/{}'
_brightcove_video_re = re.compile(
- r'data-holder="video(?P<video_id>[0-9]+)" data-account="(?P<account_id>[0-9]+)"'
- )
- _brightcove_video_alt_re = re.compile(
- r'data-account="(?P<account_id>[0-9]+).*?data-video-id="(?P<video_id>[0-9]+)"',
+ r'accountid="(?P<account_id>[0-9]+).*?videoid="(?P<video_id>[0-9]+)"',
re.DOTALL
)
- _embed_video_url_re = re.compile(
- r"\$YOPLAYER\('liveStitching', {.+?file: '(?P<video_url>[^\"]+?)'.+?}\);",
+ _embed_video_id_re = re.compile(
+ r'<iframe.*?src=".*?/(?P<video_id>\w+)"',
re.DOTALL
)
@classmethod
def can_handle_url(cls, url):
- return BFMTV._url_re.match(url)
+ return cls._url_re.match(url) is not None
def _get_streams(self):
# Retrieve URL page and search for Brightcove video data
res = self.session.http.get(self.url)
- match = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)
+ match = self._brightcove_video_re.search(res.text)
if match is not None:
account_id = match.group('account_id')
+ log.debug(f'Account ID: {account_id}')
video_id = match.group('video_id')
+ log.debug(f'Video ID: {video_id}')
player = BrightcovePlayer(self.session, account_id)
- for stream in player.get_streams(video_id):
- yield stream
+ yield from player.get_streams(video_id)
else:
- # Try to get the stream URL in the page
- match = self._embed_video_url_re.search(res.text)
+ # Try to find the Dailymotion video ID
+ match = self._embed_video_id_re.search(res.text)
if match is not None:
- video_url = match.group('video_url')
- if '.m3u8' in video_url:
- yield from HLSStream.parse_variant_playlist(self.session, video_url).items()
+ video_id = match.group('video_id')
+ log.debug(f'Video ID: {video_id}')
+ yield from self.session.streams(self._dailymotion_url.format(video_id)).items()
__plugin__ = BFMTV
|
{"golden_diff": "diff --git a/src/streamlink/plugins/bfmtv.py b/src/streamlink/plugins/bfmtv.py\n--- a/src/streamlink/plugins/bfmtv.py\n+++ b/src/streamlink/plugins/bfmtv.py\n@@ -1,45 +1,46 @@\n+import logging\n import re\n \n from streamlink.plugin import Plugin\n from streamlink.plugins.brightcove import BrightcovePlayer\n-from streamlink.stream import HLSStream\n+\n+log = logging.getLogger(__name__)\n \n \n class BFMTV(Plugin):\n _url_re = re.compile(r'https://.+\\.(?:bfmtv|01net)\\.com')\n+ _dailymotion_url = 'https://www.dailymotion.com/embed/video/{}'\n _brightcove_video_re = re.compile(\n- r'data-holder=\"video(?P<video_id>[0-9]+)\" data-account=\"(?P<account_id>[0-9]+)\"'\n- )\n- _brightcove_video_alt_re = re.compile(\n- r'data-account=\"(?P<account_id>[0-9]+).*?data-video-id=\"(?P<video_id>[0-9]+)\"',\n+ r'accountid=\"(?P<account_id>[0-9]+).*?videoid=\"(?P<video_id>[0-9]+)\"',\n re.DOTALL\n )\n- _embed_video_url_re = re.compile(\n- r\"\\$YOPLAYER\\('liveStitching', {.+?file: '(?P<video_url>[^\\\"]+?)'.+?}\\);\",\n+ _embed_video_id_re = re.compile(\n+ r'<iframe.*?src=\".*?/(?P<video_id>\\w+)\"',\n re.DOTALL\n )\n \n @classmethod\n def can_handle_url(cls, url):\n- return BFMTV._url_re.match(url)\n+ return cls._url_re.match(url) is not None\n \n def _get_streams(self):\n # Retrieve URL page and search for Brightcove video data\n res = self.session.http.get(self.url)\n- match = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)\n+ match = self._brightcove_video_re.search(res.text)\n if match is not None:\n account_id = match.group('account_id')\n+ log.debug(f'Account ID: {account_id}')\n video_id = match.group('video_id')\n+ log.debug(f'Video ID: {video_id}')\n player = BrightcovePlayer(self.session, account_id)\n- for stream in player.get_streams(video_id):\n- yield stream\n+ yield from player.get_streams(video_id)\n else:\n- # Try to get the stream URL in the page\n- match = self._embed_video_url_re.search(res.text)\n+ # Try to find the Dailymotion video ID\n+ match = self._embed_video_id_re.search(res.text)\n if match is not None:\n- video_url = match.group('video_url')\n- if '.m3u8' in video_url:\n- yield from HLSStream.parse_variant_playlist(self.session, video_url).items()\n+ video_id = match.group('video_id')\n+ log.debug(f'Video ID: {video_id}')\n+ yield from self.session.streams(self._dailymotion_url.format(video_id)).items()\n \n \n __plugin__ = BFMTV\n", "issue": "bfmtv - No playable streams found on this URL\n<!--\r\nThanks for reporting a plugin issue!\r\nUSE THE TEMPLATE. Otherwise your plugin issue may be rejected.\r\n\r\nFirst, see the contribution guidelines:\r\nhttps://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink\r\n\r\nAlso check the list of open and closed plugin issues:\r\nhttps://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22\r\n\r\nPlease see the text preview to avoid unnecessary formatting errors.\r\n-->\r\n\r\n\r\n## Plugin Issue\r\n\r\n<!-- Replace [ ] with [x] in order to check the box -->\r\n- [x] This is a plugin issue and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\n\r\n<!-- Explain the plugin issue as thoroughly as you can. -->\r\nThere is no playable streams for bfmtv\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\n<!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. 
-->\r\n\r\n`streamlink https://www.bfmtv.com/en-direct/ best`\r\n\r\nor `streamlink https://www.bfmtv.com/paris/en-direct/ best`\r\nor any other channels supported by this plugin\r\n\r\n\r\n### Log output\r\n\r\n<!--\r\nTEXT LOG OUTPUT IS REQUIRED for a plugin issue!\r\nUse the `--loglevel debug` parameter and avoid using parameters which suppress log output.\r\nhttps://streamlink.github.io/cli.html#cmdoption-l\r\n\r\nMake sure to **remove usernames and passwords**\r\nYou can copy the output to https://gist.github.com/ or paste it below.\r\n\r\nDon't post screenshots of the log output and instead copy the text from your terminal application.\r\n-->\r\n\r\n```\r\nstreamlink --loglevel debug https://www.bfmtv.com/en-direct/ best\r\n[cli][debug] OS: Linux-5.8.15-201.fc32.x86_64-x86_64-with-glibc2.2.5\r\n[cli][debug] Python: 3.8.6\r\n[cli][debug] Streamlink: 1.7.0\r\n[cli][debug] Requests(2.24.0), Socks(1.7.1), Websocket(0.57.0)\r\n[cli][info] Found matching plugin bfmtv for URL https://www.bfmtv.com/en-direct/\r\nerror: No playable streams found on this URL: https://www.bfmtv.com/en-direct/\r\n```\r\n\r\n\r\n### Additional comments, etc.\r\n\r\nAs a workaround you can use their dailymotion stream: `streamlink https://www.dailymotion.com/embed/video/xgz4t1 best`\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugins.brightcove import BrightcovePlayer\nfrom streamlink.stream import HLSStream\n\n\nclass BFMTV(Plugin):\n _url_re = re.compile(r'https://.+\\.(?:bfmtv|01net)\\.com')\n _brightcove_video_re = re.compile(\n r'data-holder=\"video(?P<video_id>[0-9]+)\" data-account=\"(?P<account_id>[0-9]+)\"'\n )\n _brightcove_video_alt_re = re.compile(\n r'data-account=\"(?P<account_id>[0-9]+).*?data-video-id=\"(?P<video_id>[0-9]+)\"',\n re.DOTALL\n )\n _embed_video_url_re = re.compile(\n r\"\\$YOPLAYER\\('liveStitching', {.+?file: '(?P<video_url>[^\\\"]+?)'.+?}\\);\",\n re.DOTALL\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return BFMTV._url_re.match(url)\n\n def _get_streams(self):\n # Retrieve URL page and search for Brightcove video data\n res = self.session.http.get(self.url)\n match = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)\n if match is not None:\n account_id = match.group('account_id')\n video_id = match.group('video_id')\n player = BrightcovePlayer(self.session, account_id)\n for stream in player.get_streams(video_id):\n yield stream\n else:\n # Try to get the stream URL in the page\n match = self._embed_video_url_re.search(res.text)\n if match is not None:\n video_url = match.group('video_url')\n if '.m3u8' in video_url:\n yield from HLSStream.parse_variant_playlist(self.session, video_url).items()\n\n\n__plugin__ = BFMTV\n", "path": "src/streamlink/plugins/bfmtv.py"}], "after_files": [{"content": "import logging\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugins.brightcove import BrightcovePlayer\n\nlog = logging.getLogger(__name__)\n\n\nclass BFMTV(Plugin):\n _url_re = re.compile(r'https://.+\\.(?:bfmtv|01net)\\.com')\n _dailymotion_url = 'https://www.dailymotion.com/embed/video/{}'\n _brightcove_video_re = re.compile(\n r'accountid=\"(?P<account_id>[0-9]+).*?videoid=\"(?P<video_id>[0-9]+)\"',\n re.DOTALL\n )\n _embed_video_id_re = re.compile(\n r'<iframe.*?src=\".*?/(?P<video_id>\\w+)\"',\n re.DOTALL\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls._url_re.match(url) is not None\n\n def _get_streams(self):\n # Retrieve URL page and 
search for Brightcove video data\n res = self.session.http.get(self.url)\n match = self._brightcove_video_re.search(res.text)\n if match is not None:\n account_id = match.group('account_id')\n log.debug(f'Account ID: {account_id}')\n video_id = match.group('video_id')\n log.debug(f'Video ID: {video_id}')\n player = BrightcovePlayer(self.session, account_id)\n yield from player.get_streams(video_id)\n else:\n # Try to find the Dailymotion video ID\n match = self._embed_video_id_re.search(res.text)\n if match is not None:\n video_id = match.group('video_id')\n log.debug(f'Video ID: {video_id}')\n yield from self.session.streams(self._dailymotion_url.format(video_id)).items()\n\n\n__plugin__ = BFMTV\n", "path": "src/streamlink/plugins/bfmtv.py"}]}
| 1,353 | 748 |
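In the rewritten plugin above, coordinates are recovered by parsing the `center` query parameter of a stadiamaps static-map image URL. The helper below isolates just that step; the function name and the example URL are invented for illustration, and only the `parse_qs`/`urlparse` usage mirrors the patch.

```python
from urllib.parse import parse_qs, urlparse


def lat_lon_from_tile_url(img_src):
    """Extract (lat, lon) strings from a static-map URL with a center=LAT,LON query."""
    query = parse_qs(urlparse(img_src).query)
    if "center" in query:
        lat, lon = query["center"][0].split(",", 1)
        return lat, lon
    return None


# Hypothetical image source, shaped like the tiles.stadiamaps.com URLs the plugin looks for.
src = "https://tiles.stadiamaps.com/static/osm_bright.png?center=48.8566,2.3522&zoom=12"
print(lat_lon_from_tile_url(src))  # ('48.8566', '2.3522')
```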
gh_patches_debug_29603 | rasdani/github-patches | git_diff | bridgecrewio__checkov-3976 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Suppression comment in Dockerfile skips more checks than it should
**Describe the issue**
I tried to skip CKV_DOCKER_9 by adding a suppression comment into my Dockerfile, but it turned out that CKV_DOCKER_5 was also skipped.
**Examples**
Here's my Dockerfile with comment to skip CKV_DOCKER_9:
```dockerfile
#checkov:skip=CKV_DOCKER_9
FROM alpine:3.16
RUN ls
```
As you can see, CKV_DOCKER_5 is also marked as SKIPPED in the results:
```
$ docker run -it --rm -v "$(pwd):$(pwd)" -w "$(pwd)" bridgecrew/checkov:2.2.104 --framework dockerfile --skip-download --file Dockerfile
_ _
___| |__ ___ ___| | _______ __
/ __| '_ \ / _ \/ __| |/ / _ \ \ / /
| (__| | | | __/ (__| < (_) \ V /
\___|_| |_|\___|\___|_|\_\___/ \_/
By bridgecrew.io | version: 2.2.104
dockerfile scan results:
Passed checks: 2, Failed checks: 2, Skipped checks: 2
Check: CKV_DOCKER_11: "Ensure From Alias are unique for multistage builds."
PASSED for resource: Dockerfile.
File: Dockerfile:1-4
Check: CKV_DOCKER_7: "Ensure the base image uses a non latest version tag"
PASSED for resource: Dockerfile.
File: Dockerfile:1-4
Check: CKV_DOCKER_2: "Ensure that HEALTHCHECK instructions have been added to container images"
FAILED for resource: Dockerfile.
File: Dockerfile:1-4
1 | #checkov:skip=CKV_DOCKER_9
2 | FROM alpine:3.16
3 | RUN ls
4 |
Check: CKV_DOCKER_3: "Ensure that a user for the container has been created"
FAILED for resource: Dockerfile.
File: Dockerfile:1-4
1 | #checkov:skip=CKV_DOCKER_9
2 | FROM alpine:3.16
3 | RUN ls
4 |
Check: CKV_DOCKER_9: "Ensure that APT isn't used"
SKIPPED for resource: Dockerfile.
Suppress comment: No comment provided
File: Dockerfile:1-4
Check: CKV_DOCKER_5: "Ensure update instructions are not use alone in the Dockerfile"
SKIPPED for resource: Dockerfile.
Suppress comment: No comment provided
File: Dockerfile:1-4
```
**Version (please complete the following information):**
- Checkov Version: 2.2.104
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/dockerfile/base_registry.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Any, cast
4
5 from checkov.common.bridgecrew.check_type import CheckType
6 from checkov.common.checks.base_check_registry import BaseCheckRegistry
7 from checkov.common.models.enums import CheckResult
8
9 if TYPE_CHECKING:
10 from dockerfile_parse.parser import _Instruction # only in extra_stubs
11 from checkov.common.checks.base_check import BaseCheck
12 from checkov.common.typing import _SkippedCheck, _CheckResult
13 from checkov.runner_filter import RunnerFilter
14
15
16 class Registry(BaseCheckRegistry):
17 def __init__(self) -> None:
18 super().__init__(CheckType.DOCKERFILE)
19
20 def extract_entity_details(self, entity: dict[str, Any]) -> tuple[str, str, dict[str, Any]]:
21 # not needed
22 return "", "", {}
23
24 def scan(
25 self,
26 scanned_file: str,
27 entity: dict[str, list[_Instruction]],
28 skipped_checks: list[_SkippedCheck],
29 runner_filter: RunnerFilter,
30 report_type: str | None = None,
31 ) -> dict[BaseCheck, _CheckResult]:
32
33 results: "dict[BaseCheck, _CheckResult]" = {}
34 if not entity:
35 return results
36 for instruction, checks in self.checks.items():
37 skip_info: _SkippedCheck = {}
38 if instruction in entity:
39
40 for check in checks:
41 if check.id in [x['id'] for x in skipped_checks]:
42 skip_info = [x for x in skipped_checks if x['id'] == check.id][0]
43
44 if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):
45 self.update_result(
46 check=check,
47 entity_configuration=entity[instruction],
48 entity_name=instruction,
49 entity_type=instruction,
50 results=results,
51 scanned_file=scanned_file,
52 skip_info=skip_info,
53 )
54
55 for check in self.wildcard_checks["*"]:
56 skip_info = {}
57 if skipped_checks:
58 if check.id in [x['id'] for x in skipped_checks]:
59 skip_info = [x for x in skipped_checks if x['id'] == check.id][0]
60
61 if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):
62 self.update_result(
63 check=check,
64 entity_configuration=entity,
65 entity_name=scanned_file,
66 entity_type="*",
67 results=results,
68 scanned_file=scanned_file,
69 skip_info=skip_info,
70 )
71 return results
72
73 def update_result(
74 self,
75 check: BaseCheck,
76 entity_configuration: list[_Instruction] | dict[str, list[_Instruction]],
77 entity_name: str,
78 entity_type: str,
79 results: dict[BaseCheck, _CheckResult],
80 scanned_file: str,
81 skip_info: _SkippedCheck
82 ) -> None:
83 result = self.run_check(
84 check=check,
85 entity_configuration=entity_configuration, # type:ignore[arg-type] # special Dockerfile runner behaviour
86 entity_name=entity_name,
87 entity_type=entity_type,
88 scanned_file=scanned_file,
89 skip_info=skip_info,
90 )
91 results[check] = {}
92 if result['result'] == CheckResult.SKIPPED:
93 results[check]['result'] = result['result']
94 results[check]['suppress_comment'] = result['suppress_comment']
95 results[check]['results_configuration'] = None
96 else:
97 results[check]['result'] = cast("CheckResult", result['result'][0])
98 results[check]['results_configuration'] = cast("dict[str, Any]", result['result'][1])
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/dockerfile/base_registry.py b/checkov/dockerfile/base_registry.py
--- a/checkov/dockerfile/base_registry.py
+++ b/checkov/dockerfile/base_registry.py
@@ -33,13 +33,16 @@
results: "dict[BaseCheck, _CheckResult]" = {}
if not entity:
return results
+
+ skipped_check_ids = {skipped_check["id"]: skipped_check for skipped_check in skipped_checks}
+
for instruction, checks in self.checks.items():
- skip_info: _SkippedCheck = {}
if instruction in entity:
-
for check in checks:
- if check.id in [x['id'] for x in skipped_checks]:
- skip_info = [x for x in skipped_checks if x['id'] == check.id][0]
+ skip_info: "_SkippedCheck" = {}
+ if skipped_check_ids:
+ if check.id in skipped_check_ids:
+ skip_info = skipped_check_ids[check.id]
if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):
self.update_result(
@@ -54,9 +57,9 @@
for check in self.wildcard_checks["*"]:
skip_info = {}
- if skipped_checks:
- if check.id in [x['id'] for x in skipped_checks]:
- skip_info = [x for x in skipped_checks if x['id'] == check.id][0]
+ if skipped_check_ids:
+ if check.id in skipped_check_ids:
+ skip_info = skipped_check_ids[check.id]
if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):
self.update_result(
|
{"golden_diff": "diff --git a/checkov/dockerfile/base_registry.py b/checkov/dockerfile/base_registry.py\n--- a/checkov/dockerfile/base_registry.py\n+++ b/checkov/dockerfile/base_registry.py\n@@ -33,13 +33,16 @@\n results: \"dict[BaseCheck, _CheckResult]\" = {}\n if not entity:\n return results\n+\n+ skipped_check_ids = {skipped_check[\"id\"]: skipped_check for skipped_check in skipped_checks}\n+\n for instruction, checks in self.checks.items():\n- skip_info: _SkippedCheck = {}\n if instruction in entity:\n-\n for check in checks:\n- if check.id in [x['id'] for x in skipped_checks]:\n- skip_info = [x for x in skipped_checks if x['id'] == check.id][0]\n+ skip_info: \"_SkippedCheck\" = {}\n+ if skipped_check_ids:\n+ if check.id in skipped_check_ids:\n+ skip_info = skipped_check_ids[check.id]\n \n if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):\n self.update_result(\n@@ -54,9 +57,9 @@\n \n for check in self.wildcard_checks[\"*\"]:\n skip_info = {}\n- if skipped_checks:\n- if check.id in [x['id'] for x in skipped_checks]:\n- skip_info = [x for x in skipped_checks if x['id'] == check.id][0]\n+ if skipped_check_ids:\n+ if check.id in skipped_check_ids:\n+ skip_info = skipped_check_ids[check.id]\n \n if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):\n self.update_result(\n", "issue": "Suppression comment in Dockerfile skips more checks than it should\n**Describe the issue**\r\nI tried to skip CKV_DOCKER_9 by adding a suppression comment into my Dockerfile, but it turned out that CKV_DOCKER_5 was also skipped.\r\n\r\n**Examples**\r\nHere's my Dockerfile with comment to skip CKV_DOCKER_9:\r\n```dockerfile\r\n#checkov:skip=CKV_DOCKER_9\r\nFROM alpine:3.16\r\nRUN ls\r\n```\r\nAs you can see, CKV_DOCKER_5 is also marked as SKIPPED in the results:\r\n```\r\n$ docker run -it --rm -v \"$(pwd):$(pwd)\" -w \"$(pwd)\" bridgecrew/checkov:2.2.104 --framework dockerfile --skip-download --file Dockerfile\r\n\r\n _ _ \r\n ___| |__ ___ ___| | _______ __\r\n / __| '_ \\ / _ \\/ __| |/ / _ \\ \\ / /\r\n | (__| | | | __/ (__| < (_) \\ V / \r\n \\___|_| |_|\\___|\\___|_|\\_\\___/ \\_/ \r\n \r\nBy bridgecrew.io | version: 2.2.104 \r\n\r\ndockerfile scan results:\r\n\r\nPassed checks: 2, Failed checks: 2, Skipped checks: 2\r\n\r\nCheck: CKV_DOCKER_11: \"Ensure From Alias are unique for multistage builds.\"\r\n\tPASSED for resource: Dockerfile.\r\n\tFile: Dockerfile:1-4\r\nCheck: CKV_DOCKER_7: \"Ensure the base image uses a non latest version tag\"\r\n\tPASSED for resource: Dockerfile.\r\n\tFile: Dockerfile:1-4\r\nCheck: CKV_DOCKER_2: \"Ensure that HEALTHCHECK instructions have been added to container images\"\r\n\tFAILED for resource: Dockerfile.\r\n\tFile: Dockerfile:1-4\r\n\r\n\t\t1 | #checkov:skip=CKV_DOCKER_9\r\n\t\t2 | FROM alpine:3.16\r\n\t\t3 | RUN ls\r\n\t\t4 | \r\n\r\nCheck: CKV_DOCKER_3: \"Ensure that a user for the container has been created\"\r\n\tFAILED for resource: Dockerfile.\r\n\tFile: Dockerfile:1-4\r\n\r\n\t\t1 | #checkov:skip=CKV_DOCKER_9\r\n\t\t2 | FROM alpine:3.16\r\n\t\t3 | RUN ls\r\n\t\t4 | \r\n\r\nCheck: CKV_DOCKER_9: \"Ensure that APT isn't used\"\r\n\tSKIPPED for resource: Dockerfile.\r\n\tSuppress comment: No comment provided\r\n\tFile: Dockerfile:1-4\r\nCheck: CKV_DOCKER_5: \"Ensure update instructions are not use alone in the Dockerfile\"\r\n\tSKIPPED for resource: Dockerfile.\r\n\tSuppress comment: No comment provided\r\n\tFile: Dockerfile:1-4\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version: 
2.2.104\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, cast\n\nfrom checkov.common.bridgecrew.check_type import CheckType\nfrom checkov.common.checks.base_check_registry import BaseCheckRegistry\nfrom checkov.common.models.enums import CheckResult\n\nif TYPE_CHECKING:\n from dockerfile_parse.parser import _Instruction # only in extra_stubs\n from checkov.common.checks.base_check import BaseCheck\n from checkov.common.typing import _SkippedCheck, _CheckResult\n from checkov.runner_filter import RunnerFilter\n\n\nclass Registry(BaseCheckRegistry):\n def __init__(self) -> None:\n super().__init__(CheckType.DOCKERFILE)\n\n def extract_entity_details(self, entity: dict[str, Any]) -> tuple[str, str, dict[str, Any]]:\n # not needed\n return \"\", \"\", {}\n\n def scan(\n self,\n scanned_file: str,\n entity: dict[str, list[_Instruction]],\n skipped_checks: list[_SkippedCheck],\n runner_filter: RunnerFilter,\n report_type: str | None = None,\n ) -> dict[BaseCheck, _CheckResult]:\n\n results: \"dict[BaseCheck, _CheckResult]\" = {}\n if not entity:\n return results\n for instruction, checks in self.checks.items():\n skip_info: _SkippedCheck = {}\n if instruction in entity:\n\n for check in checks:\n if check.id in [x['id'] for x in skipped_checks]:\n skip_info = [x for x in skipped_checks if x['id'] == check.id][0]\n\n if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):\n self.update_result(\n check=check,\n entity_configuration=entity[instruction],\n entity_name=instruction,\n entity_type=instruction,\n results=results,\n scanned_file=scanned_file,\n skip_info=skip_info,\n )\n\n for check in self.wildcard_checks[\"*\"]:\n skip_info = {}\n if skipped_checks:\n if check.id in [x['id'] for x in skipped_checks]:\n skip_info = [x for x in skipped_checks if x['id'] == check.id][0]\n\n if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):\n self.update_result(\n check=check,\n entity_configuration=entity,\n entity_name=scanned_file,\n entity_type=\"*\",\n results=results,\n scanned_file=scanned_file,\n skip_info=skip_info,\n )\n return results\n\n def update_result(\n self,\n check: BaseCheck,\n entity_configuration: list[_Instruction] | dict[str, list[_Instruction]],\n entity_name: str,\n entity_type: str,\n results: dict[BaseCheck, _CheckResult],\n scanned_file: str,\n skip_info: _SkippedCheck\n ) -> None:\n result = self.run_check(\n check=check,\n entity_configuration=entity_configuration, # type:ignore[arg-type] # special Dockerfile runner behaviour\n entity_name=entity_name,\n entity_type=entity_type,\n scanned_file=scanned_file,\n skip_info=skip_info,\n )\n results[check] = {}\n if result['result'] == CheckResult.SKIPPED:\n results[check]['result'] = result['result']\n results[check]['suppress_comment'] = result['suppress_comment']\n results[check]['results_configuration'] = None\n else:\n results[check]['result'] = cast(\"CheckResult\", result['result'][0])\n results[check]['results_configuration'] = cast(\"dict[str, Any]\", result['result'][1])\n", "path": "checkov/dockerfile/base_registry.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, cast\n\nfrom checkov.common.bridgecrew.check_type import CheckType\nfrom checkov.common.checks.base_check_registry import BaseCheckRegistry\nfrom checkov.common.models.enums import CheckResult\n\nif TYPE_CHECKING:\n from dockerfile_parse.parser import _Instruction # only in extra_stubs\n 
from checkov.common.checks.base_check import BaseCheck\n from checkov.common.typing import _SkippedCheck, _CheckResult\n from checkov.runner_filter import RunnerFilter\n\n\nclass Registry(BaseCheckRegistry):\n def __init__(self) -> None:\n super().__init__(CheckType.DOCKERFILE)\n\n def extract_entity_details(self, entity: dict[str, Any]) -> tuple[str, str, dict[str, Any]]:\n # not needed\n return \"\", \"\", {}\n\n def scan(\n self,\n scanned_file: str,\n entity: dict[str, list[_Instruction]],\n skipped_checks: list[_SkippedCheck],\n runner_filter: RunnerFilter,\n report_type: str | None = None,\n ) -> dict[BaseCheck, _CheckResult]:\n\n results: \"dict[BaseCheck, _CheckResult]\" = {}\n if not entity:\n return results\n\n skipped_check_ids = {skipped_check[\"id\"]: skipped_check for skipped_check in skipped_checks}\n\n for instruction, checks in self.checks.items():\n if instruction in entity:\n for check in checks:\n skip_info: \"_SkippedCheck\" = {}\n if skipped_check_ids:\n if check.id in skipped_check_ids:\n skip_info = skipped_check_ids[check.id]\n\n if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):\n self.update_result(\n check=check,\n entity_configuration=entity[instruction],\n entity_name=instruction,\n entity_type=instruction,\n results=results,\n scanned_file=scanned_file,\n skip_info=skip_info,\n )\n\n for check in self.wildcard_checks[\"*\"]:\n skip_info = {}\n if skipped_check_ids:\n if check.id in skipped_check_ids:\n skip_info = skipped_check_ids[check.id]\n\n if runner_filter.should_run_check(check, report_type=CheckType.DOCKERFILE):\n self.update_result(\n check=check,\n entity_configuration=entity,\n entity_name=scanned_file,\n entity_type=\"*\",\n results=results,\n scanned_file=scanned_file,\n skip_info=skip_info,\n )\n return results\n\n def update_result(\n self,\n check: BaseCheck,\n entity_configuration: list[_Instruction] | dict[str, list[_Instruction]],\n entity_name: str,\n entity_type: str,\n results: dict[BaseCheck, _CheckResult],\n scanned_file: str,\n skip_info: _SkippedCheck\n ) -> None:\n result = self.run_check(\n check=check,\n entity_configuration=entity_configuration, # type:ignore[arg-type] # special Dockerfile runner behaviour\n entity_name=entity_name,\n entity_type=entity_type,\n scanned_file=scanned_file,\n skip_info=skip_info,\n )\n results[check] = {}\n if result['result'] == CheckResult.SKIPPED:\n results[check]['result'] = result['result']\n results[check]['suppress_comment'] = result['suppress_comment']\n results[check]['results_configuration'] = None\n else:\n results[check]['result'] = cast(\"CheckResult\", result['result'][0])\n results[check]['results_configuration'] = cast(\"dict[str, Any]\", result['result'][1])\n", "path": "checkov/dockerfile/base_registry.py"}]}
| 1,908 | 373 |
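The core of the fix above is that `skip_info` was only reset once per instruction, so a suppression match for one check (CKV_DOCKER_9) leaked into the next check under the same instruction (CKV_DOCKER_5). The reduced sketch below shows the corrected lookup pattern outside of Checkov's registry classes; the check IDs come from the issue, everything else is illustrative.

```python
skipped_checks = [{"id": "CKV_DOCKER_9", "suppress_comment": "No comment provided"}]

# Build the lookup once per scan: check id -> full skip record.
skipped_check_ids = {s["id"]: s for s in skipped_checks}

for check_id in ["CKV_DOCKER_5", "CKV_DOCKER_9"]:
    # Reset skip_info for every check so a match for one id cannot leak into the next.
    skip_info = skipped_check_ids.get(check_id, {})
    print(check_id, "SKIPPED" if skip_info else "EVALUATED")
# CKV_DOCKER_5 EVALUATED
# CKV_DOCKER_9 SKIPPED
```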
gh_patches_debug_27707 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-8345 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
puregym_gb spider broken
It looks like the PureGym spider (puregym_gb.py) is broken. For the last few weeks it was consistently returning 303 results, but then on 2024-05-04 the number dropped to 1. Looking at the Spider stats, I think it's found 401 branches, but then dropped all but one of them for having duplicate refs.
If I've understood what's going on correctly, the spider code is obtaining a ref from a "gymId" parameter on each individual branch page. However, I don't think this parameter exists any more on those pages. Hence each branch gets the same null ref value, and then all the duplicates are dropped.
Hopefully removing the custom ref code will fix the spider. I think there should be around 400 branches in total. (There's currently 399 unique https://www.puregym.com/gyms/*/ URLs in the sitemap.) So probably even the 303 results being returned consistently before was as a result of incorrectly dropping ~100 gyms.
I've just checked a sample of 15 of the URLs from the sitemap. I found one "temporarily closed" https://www.puregym.com/gyms/tonbridge/ and one "coming soon" https://www.puregym.com/gyms/washington/ . I don't know if it's worth trying to catch these cases or not.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/puregym_gb.py`
Content:
```
1 from scrapy.spiders import SitemapSpider
2
3 from locations.google_url import extract_google_position
4 from locations.structured_data_spider import StructuredDataSpider
5
6
7 class PureGymGBSpider(SitemapSpider, StructuredDataSpider):
8 name = "puregym_gb"
9 item_attributes = {
10 "brand": "PureGym",
11 "brand_wikidata": "Q18345898",
12 "country": "GB",
13 }
14 allowed_domains = ["www.puregym.com"]
15 sitemap_urls = ["https://www.puregym.com/sitemap.xml"]
16 sitemap_rules = [
17 (
18 r"https:\/\/www\.puregym\.com\/gyms\/([\w-]+)\/$",
19 "parse_sd",
20 ),
21 ]
22 wanted_types = ["HealthClub"]
23
24 def inspect_item(self, item, response):
25 item["ref"] = response.xpath('//meta[@itemprop="gymId"]/@content').get()
26 extract_google_position(item, response)
27
28 yield item
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/puregym_gb.py b/locations/spiders/puregym_gb.py
--- a/locations/spiders/puregym_gb.py
+++ b/locations/spiders/puregym_gb.py
@@ -1,28 +1,28 @@
+from urllib.parse import parse_qs, urlparse
+
from scrapy.spiders import SitemapSpider
-from locations.google_url import extract_google_position
from locations.structured_data_spider import StructuredDataSpider
class PureGymGBSpider(SitemapSpider, StructuredDataSpider):
name = "puregym_gb"
- item_attributes = {
- "brand": "PureGym",
- "brand_wikidata": "Q18345898",
- "country": "GB",
- }
+ item_attributes = {"brand": "PureGym", "brand_wikidata": "Q18345898", "country": "GB"}
allowed_domains = ["www.puregym.com"]
sitemap_urls = ["https://www.puregym.com/sitemap.xml"]
- sitemap_rules = [
- (
- r"https:\/\/www\.puregym\.com\/gyms\/([\w-]+)\/$",
- "parse_sd",
- ),
- ]
+ sitemap_rules = [(r"/gyms/([^/]+)/$", "parse_sd")]
wanted_types = ["HealthClub"]
- def inspect_item(self, item, response):
- item["ref"] = response.xpath('//meta[@itemprop="gymId"]/@content').get()
- extract_google_position(item, response)
+ def pre_process_data(self, ld_data, **kwargs):
+ ld_data["address"] = ld_data.get("location", {}).get("address")
+
+ def post_process_item(self, item, response, ld_data, **kwargs):
+ item["branch"] = item.pop("name")
+ item["image"] = None
+
+ if img := response.xpath('//img[contains(@src, "tiles.stadiamaps.com")]/@src').get():
+ q = parse_qs(urlparse(img)[4])
+ if "center" in q:
+ item["lat"], item["lon"] = q["center"][0].split(",", 1)
yield item
|
{"golden_diff": "diff --git a/locations/spiders/puregym_gb.py b/locations/spiders/puregym_gb.py\n--- a/locations/spiders/puregym_gb.py\n+++ b/locations/spiders/puregym_gb.py\n@@ -1,28 +1,28 @@\n+from urllib.parse import parse_qs, urlparse\n+\n from scrapy.spiders import SitemapSpider\n \n-from locations.google_url import extract_google_position\n from locations.structured_data_spider import StructuredDataSpider\n \n \n class PureGymGBSpider(SitemapSpider, StructuredDataSpider):\n name = \"puregym_gb\"\n- item_attributes = {\n- \"brand\": \"PureGym\",\n- \"brand_wikidata\": \"Q18345898\",\n- \"country\": \"GB\",\n- }\n+ item_attributes = {\"brand\": \"PureGym\", \"brand_wikidata\": \"Q18345898\", \"country\": \"GB\"}\n allowed_domains = [\"www.puregym.com\"]\n sitemap_urls = [\"https://www.puregym.com/sitemap.xml\"]\n- sitemap_rules = [\n- (\n- r\"https:\\/\\/www\\.puregym\\.com\\/gyms\\/([\\w-]+)\\/$\",\n- \"parse_sd\",\n- ),\n- ]\n+ sitemap_rules = [(r\"/gyms/([^/]+)/$\", \"parse_sd\")]\n wanted_types = [\"HealthClub\"]\n \n- def inspect_item(self, item, response):\n- item[\"ref\"] = response.xpath('//meta[@itemprop=\"gymId\"]/@content').get()\n- extract_google_position(item, response)\n+ def pre_process_data(self, ld_data, **kwargs):\n+ ld_data[\"address\"] = ld_data.get(\"location\", {}).get(\"address\")\n+\n+ def post_process_item(self, item, response, ld_data, **kwargs):\n+ item[\"branch\"] = item.pop(\"name\")\n+ item[\"image\"] = None\n+\n+ if img := response.xpath('//img[contains(@src, \"tiles.stadiamaps.com\")]/@src').get():\n+ q = parse_qs(urlparse(img)[4])\n+ if \"center\" in q:\n+ item[\"lat\"], item[\"lon\"] = q[\"center\"][0].split(\",\", 1)\n \n yield item\n", "issue": "puregym_gb spider broken\nIt looks like the PureGym spider (puregym_gb.py) is broken. For the last few weeks it was consistently returning 303 results, but then on 2024-05-04 the number dropped to 1. Looking at the Spider stats, I think it's found 401 branches, but then dropped all but one of them for having duplicate refs.\r\n\r\nIf I've understood what's going on correctly, the spider code is obtaining a ref from a \"gymId\" parameter on each individual branch page. However, I don't think this parameter exists any more on those pages. Hence each branch gets the same null ref value, and then all the duplicates are dropped.\r\n\r\nHopefully removing the custom ref code will fix the spider. I think there should be around 400 branches in total. (There's currently 399 unique https://www.puregym.com/gyms/*/ URLs in the sitemap.) So probably even the 303 results being returned consistently before was as a result of incorrectly dropping ~100 gyms.\r\n\r\nI've just checked a sample of 15 of the URLs from the sitemap. I found one \"temporarily closed\" https://www.puregym.com/gyms/tonbridge/ and one \"coming soon\" https://www.puregym.com/gyms/washington/ . 
I don't know if it's worth trying to catch these cases or not.\n", "before_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.google_url import extract_google_position\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass PureGymGBSpider(SitemapSpider, StructuredDataSpider):\n name = \"puregym_gb\"\n item_attributes = {\n \"brand\": \"PureGym\",\n \"brand_wikidata\": \"Q18345898\",\n \"country\": \"GB\",\n }\n allowed_domains = [\"www.puregym.com\"]\n sitemap_urls = [\"https://www.puregym.com/sitemap.xml\"]\n sitemap_rules = [\n (\n r\"https:\\/\\/www\\.puregym\\.com\\/gyms\\/([\\w-]+)\\/$\",\n \"parse_sd\",\n ),\n ]\n wanted_types = [\"HealthClub\"]\n\n def inspect_item(self, item, response):\n item[\"ref\"] = response.xpath('//meta[@itemprop=\"gymId\"]/@content').get()\n extract_google_position(item, response)\n\n yield item\n", "path": "locations/spiders/puregym_gb.py"}], "after_files": [{"content": "from urllib.parse import parse_qs, urlparse\n\nfrom scrapy.spiders import SitemapSpider\n\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass PureGymGBSpider(SitemapSpider, StructuredDataSpider):\n name = \"puregym_gb\"\n item_attributes = {\"brand\": \"PureGym\", \"brand_wikidata\": \"Q18345898\", \"country\": \"GB\"}\n allowed_domains = [\"www.puregym.com\"]\n sitemap_urls = [\"https://www.puregym.com/sitemap.xml\"]\n sitemap_rules = [(r\"/gyms/([^/]+)/$\", \"parse_sd\")]\n wanted_types = [\"HealthClub\"]\n\n def pre_process_data(self, ld_data, **kwargs):\n ld_data[\"address\"] = ld_data.get(\"location\", {}).get(\"address\")\n\n def post_process_item(self, item, response, ld_data, **kwargs):\n item[\"branch\"] = item.pop(\"name\")\n item[\"image\"] = None\n\n if img := response.xpath('//img[contains(@src, \"tiles.stadiamaps.com\")]/@src').get():\n q = parse_qs(urlparse(img)[4])\n if \"center\" in q:\n item[\"lat\"], item[\"lon\"] = q[\"center\"][0].split(\",\", 1)\n\n yield item\n", "path": "locations/spiders/puregym_gb.py"}]}
| 848 | 511 |
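The patched spider above drops the `gymId` meta lookup and relies on the simplified sitemap rule. The snippet below only demonstrates that rule against URLs mentioned in the issue; printing the slug is illustrative and does not reflect how `StructuredDataSpider` ultimately assigns the item ref.

```python
import re

# Same pattern as the simplified sitemap rule in the patch above.
SITEMAP_RULE = re.compile(r"/gyms/([^/]+)/$")

urls = [
    "https://www.puregym.com/gyms/tonbridge/",
    "https://www.puregym.com/gyms/washington/",
    "https://www.puregym.com/about-us/",  # should not match the gym rule
]

for url in urls:
    match = SITEMAP_RULE.search(url)
    if match:
        print(url, "->", match.group(1))
```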
gh_patches_debug_15301 | rasdani/github-patches | git_diff | conda__conda-5404 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`cli.python_api` captures `stdout` from `cli.common.confirm_yn`
`cli.python_api.run_command` captures any output to `stdout`/`stderr` via `common.io.captured`. This causes the user confirmation messages from `cli.common.confirm_yn`, i.e.,
```bash
Proceed ([y]/n)?
```
to also be captured, i.e., not being output/accessible until the command finishes (which might be never if the user does not interact). The examples in the doc string are
```python
Examples:
>> run_command(Commands.CREATE, "-n newenv python=3 flask", use_exception_handler=True)
>> run_command(Commands.CREATE, "-n newenv", "python=3", "flask")
>> run_command(Commands.CREATE, ["-n newenv", "python=3", "flask"], search_path=())
```
and show exactly such use cases.
Due to this, `run_command` is only generally usable if any of `--json`, `--yes` or `--dry-run` is supplied.
There could be two solutions to this:
1. Force the user to include at least one of those arguments.
2. Instead of only capturing the output, it could be recorded but also forwarded to `stdout`/`stderr`. This could/should be made optional via some `kwarg` for `run_command`.
Possibility 1. is of course much simpler and could at least serve as a temporary solution.
`cli.python_api` captures `stdout` from `cli.common.confirm_yn`
`cli.python_api.run_command` captures any output to `stdout`/`stderr` via `common.io.captured`. This causes the user confirmation messages from `cli.common.confirm_yn`, i.e.,
```bash
Proceed ([y]/n)?
```
to also be captured, i.e., not being output/accessible until the command finishes (which might be never if the user does not interact). The examples in the doc string are
```python
Examples:
>> run_command(Commands.CREATE, "-n newenv python=3 flask", use_exception_handler=True)
>> run_command(Commands.CREATE, "-n newenv", "python=3", "flask")
>> run_command(Commands.CREATE, ["-n newenv", "python=3", "flask"], search_path=())
```
and show exactly such use cases.
Due to this, `run_command` is only generally usable if any of `--json`, `--yes` or `--dry-run` is supplied.
There could be two solutions to this:
1. Force the user to include at least one of those arguments.
2. Instead of only capturing the output, it could be recorded but also forwarded to `stdout`/`stderr`. This could/should be made optional via some `kwarg` for `run_command`.
Possibility 1. is of course much simpler and could at least serve as a temporary solution.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda/cli/python_api.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from importlib import import_module
5 from logging import getLogger
6 from shlex import split
7
8 from ..base.constants import APP_NAME, SEARCH_PATH
9 from ..base.context import context
10 from ..cli.main import generate_parser
11 from ..common.io import captured, replace_log_streams
12 from ..common.path import win_path_double_escape
13 from ..exceptions import conda_exception_handler
14 from ..gateways import initialize_logging
15
16 initialize_logging()
17 log = getLogger(__name__)
18
19
20 class Commands:
21 CONFIG = "config"
22 CLEAN = "clean"
23 CREATE = "create"
24 INFO = "info"
25 INSTALL = "install"
26 LIST = "list"
27 REMOVE = "remove"
28 SEARCH = "search"
29 UPDATE = "update"
30
31
32 def get_configure_parser_function(command):
33 module = 'conda.cli.main_' + command
34 return import_module(module).configure_parser
35
36
37 def run_command(command, *arguments, **kwargs):
38 """
39
40 Args:
41 command: one of the Commands.X
42 *arguments: instructions you would normally pass to the conda comamnd on the command line
43 see below for examples
44 **kwargs: special instructions for programmatic overrides
45 use_exception_handler: defaults to False. False will let the code calling
46 `run_command` handle all exceptions. True won't raise when an exception
47 has occured, and instead give a non-zero return code
48 search_path: an optional non-standard search path for configuration information
49 that overrides the default SEARCH_PATH
50
51 Returns: a tuple of stdout, stderr, and return_code
52
53 Examples:
54 >> run_command(Commands.CREATE, "-n newenv python=3 flask", use_exception_handler=True)
55 >> run_command(Commands.CREATE, "-n newenv", "python=3", "flask")
56 >> run_command(Commands.CREATE, ["-n newenv", "python=3", "flask"], search_path=())
57
58
59 """
60 use_exception_handler = kwargs.get('use_exception_handler', False)
61 configuration_search_path = kwargs.get('search_path', SEARCH_PATH)
62 p, sub_parsers = generate_parser()
63 get_configure_parser_function(command)(sub_parsers)
64
65 arguments = map(win_path_double_escape, arguments)
66 command_line = "%s %s" % (command, " ".join(arguments))
67 split_command_line = split(command_line)
68
69 args = p.parse_args(split_command_line)
70 context.__init__(
71 search_path=configuration_search_path,
72 app_name=APP_NAME,
73 argparse_args=args,
74 )
75 log.debug("executing command >>> conda %s", command_line)
76 try:
77 with captured() as c, replace_log_streams():
78 if use_exception_handler:
79 return_code = conda_exception_handler(args.func, args, p)
80 else:
81 return_code = args.func(args, p)
82 except Exception as e:
83 log.debug("\n stdout: %s\n stderr: %s", c.stdout, c.stderr)
84 e.stdout, e.stderr = c.stdout, c.stderr
85 raise e
86 log.debug("\n stdout: %s\n stderr: %s\n return_code: %s", c.stdout, c.stderr, return_code)
87 return c.stdout, c.stderr, return_code
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conda/cli/python_api.py b/conda/cli/python_api.py
--- a/conda/cli/python_api.py
+++ b/conda/cli/python_api.py
@@ -35,7 +35,10 @@
def run_command(command, *arguments, **kwargs):
- """
+ """Runs a conda command in-process with a given set of command-line interface arguments.
+
+ Differences from the command-line interface:
+ Always uses --yes flag, thus does not ask for confirmation.
Args:
command: one of the Commands.X
@@ -67,6 +70,7 @@
split_command_line = split(command_line)
args = p.parse_args(split_command_line)
+ args.yes = True # always skip user confirmation, force setting context.always_yes
context.__init__(
search_path=configuration_search_path,
app_name=APP_NAME,
|
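As a quick illustration of the behavioural change in the patch above (a hedged usage sketch, not part of the record itself): with `args.yes` forced, `run_command` no longer blocks on the hidden `Proceed ([y]/n)?` prompt. The snippet assumes a conda checkout with this patch applied is importable; the environment name and package are made up.

```python
# Usage sketch only; assumes the patched conda/cli/python_api.py is importable.
# The environment name "newenv" and the package "flask" are illustrative.
from conda.cli.python_api import run_command, Commands

# With args.yes forced inside run_command, no confirmation prompt is swallowed
# by captured(), so the call returns instead of waiting for user input.
stdout, stderr, return_code = run_command(
    Commands.INSTALL, "-n", "newenv", "flask",
    use_exception_handler=True,
)
print(return_code)
print(stdout)
```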
{"golden_diff": "diff --git a/conda/cli/python_api.py b/conda/cli/python_api.py\n--- a/conda/cli/python_api.py\n+++ b/conda/cli/python_api.py\n@@ -35,7 +35,10 @@\n \n \n def run_command(command, *arguments, **kwargs):\n- \"\"\"\n+ \"\"\"Runs a conda command in-process with a given set of command-line interface arguments.\n+\n+ Differences from the command-line interface:\n+ Always uses --yes flag, thus does not ask for confirmation.\n \n Args:\n command: one of the Commands.X\n@@ -67,6 +70,7 @@\n split_command_line = split(command_line)\n \n args = p.parse_args(split_command_line)\n+ args.yes = True # always skip user confirmation, force setting context.always_yes\n context.__init__(\n search_path=configuration_search_path,\n app_name=APP_NAME,\n", "issue": "`cli.python_api` captures `stdout` from `cli.common.confirm_yn`\n`cli.python_api.run_command` captures any output to `stdout`/`stderr` via `common.io.captured`. This causes the user confirmation messages from `cli.common.confirm_yn`, i.e.,\r\n```bash\r\nProceed ([y]/n)? \r\n```\r\nto also be captured, i.e., not being output/accessible until the command finishes (which might be never if the user does not interact). The examples in the doc string are\r\n```python\r\n Examples:\r\n >> run_command(Commands.CREATE, \"-n newenv python=3 flask\", use_exception_handler=True)\r\n >> run_command(Commands.CREATE, \"-n newenv\", \"python=3\", \"flask\")\r\n >> run_command(Commands.CREATE, [\"-n newenv\", \"python=3\", \"flask\"], search_path=())\r\n```\r\nand show exactly such use cases.\r\n\r\nDue to this, `run_command` is only generally usable if any of `--json`, `--yes` or `--dry-run` is supplied.\r\nThere could be two solutions to this:\r\n 1. Force the user to include at least one of those arguments.\r\n 2. Instead of only capturing the output, it could be recorded but also forwarded to `stdout`/`stderr`. This could/should be made optional via some `kwarg` for `run_command`.\r\n\r\nPossibility 1. is of course much simpler and could at least serve as a temporary solution.\n`cli.python_api` captures `stdout` from `cli.common.confirm_yn`\n`cli.python_api.run_command` captures any output to `stdout`/`stderr` via `common.io.captured`. This causes the user confirmation messages from `cli.common.confirm_yn`, i.e.,\r\n```bash\r\nProceed ([y]/n)? \r\n```\r\nto also be captured, i.e., not being output/accessible until the command finishes (which might be never if the user does not interact). The examples in the doc string are\r\n```python\r\n Examples:\r\n >> run_command(Commands.CREATE, \"-n newenv python=3 flask\", use_exception_handler=True)\r\n >> run_command(Commands.CREATE, \"-n newenv\", \"python=3\", \"flask\")\r\n >> run_command(Commands.CREATE, [\"-n newenv\", \"python=3\", \"flask\"], search_path=())\r\n```\r\nand show exactly such use cases.\r\n\r\nDue to this, `run_command` is only generally usable if any of `--json`, `--yes` or `--dry-run` is supplied.\r\nThere could be two solutions to this:\r\n 1. Force the user to include at least one of those arguments.\r\n 2. Instead of only capturing the output, it could be recorded but also forwarded to `stdout`/`stderr`. This could/should be made optional via some `kwarg` for `run_command`.\r\n\r\nPossibility 1. 
is of course much simpler and could at least serve as a temporary solution.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom importlib import import_module\nfrom logging import getLogger\nfrom shlex import split\n\nfrom ..base.constants import APP_NAME, SEARCH_PATH\nfrom ..base.context import context\nfrom ..cli.main import generate_parser\nfrom ..common.io import captured, replace_log_streams\nfrom ..common.path import win_path_double_escape\nfrom ..exceptions import conda_exception_handler\nfrom ..gateways import initialize_logging\n\ninitialize_logging()\nlog = getLogger(__name__)\n\n\nclass Commands:\n CONFIG = \"config\"\n CLEAN = \"clean\"\n CREATE = \"create\"\n INFO = \"info\"\n INSTALL = \"install\"\n LIST = \"list\"\n REMOVE = \"remove\"\n SEARCH = \"search\"\n UPDATE = \"update\"\n\n\ndef get_configure_parser_function(command):\n module = 'conda.cli.main_' + command\n return import_module(module).configure_parser\n\n\ndef run_command(command, *arguments, **kwargs):\n \"\"\"\n\n Args:\n command: one of the Commands.X\n *arguments: instructions you would normally pass to the conda comamnd on the command line\n see below for examples\n **kwargs: special instructions for programmatic overrides\n use_exception_handler: defaults to False. False will let the code calling\n `run_command` handle all exceptions. True won't raise when an exception\n has occured, and instead give a non-zero return code\n search_path: an optional non-standard search path for configuration information\n that overrides the default SEARCH_PATH\n\n Returns: a tuple of stdout, stderr, and return_code\n\n Examples:\n >> run_command(Commands.CREATE, \"-n newenv python=3 flask\", use_exception_handler=True)\n >> run_command(Commands.CREATE, \"-n newenv\", \"python=3\", \"flask\")\n >> run_command(Commands.CREATE, [\"-n newenv\", \"python=3\", \"flask\"], search_path=())\n\n\n \"\"\"\n use_exception_handler = kwargs.get('use_exception_handler', False)\n configuration_search_path = kwargs.get('search_path', SEARCH_PATH)\n p, sub_parsers = generate_parser()\n get_configure_parser_function(command)(sub_parsers)\n\n arguments = map(win_path_double_escape, arguments)\n command_line = \"%s %s\" % (command, \" \".join(arguments))\n split_command_line = split(command_line)\n\n args = p.parse_args(split_command_line)\n context.__init__(\n search_path=configuration_search_path,\n app_name=APP_NAME,\n argparse_args=args,\n )\n log.debug(\"executing command >>> conda %s\", command_line)\n try:\n with captured() as c, replace_log_streams():\n if use_exception_handler:\n return_code = conda_exception_handler(args.func, args, p)\n else:\n return_code = args.func(args, p)\n except Exception as e:\n log.debug(\"\\n stdout: %s\\n stderr: %s\", c.stdout, c.stderr)\n e.stdout, e.stderr = c.stdout, c.stderr\n raise e\n log.debug(\"\\n stdout: %s\\n stderr: %s\\n return_code: %s\", c.stdout, c.stderr, return_code)\n return c.stdout, c.stderr, return_code\n", "path": "conda/cli/python_api.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom importlib import import_module\nfrom logging import getLogger\nfrom shlex import split\n\nfrom ..base.constants import APP_NAME, SEARCH_PATH\nfrom ..base.context import context\nfrom ..cli.main import generate_parser\nfrom ..common.io import captured, replace_log_streams\nfrom ..common.path import 
win_path_double_escape\nfrom ..exceptions import conda_exception_handler\nfrom ..gateways import initialize_logging\n\ninitialize_logging()\nlog = getLogger(__name__)\n\n\nclass Commands:\n CONFIG = \"config\"\n CLEAN = \"clean\"\n CREATE = \"create\"\n INFO = \"info\"\n INSTALL = \"install\"\n LIST = \"list\"\n REMOVE = \"remove\"\n SEARCH = \"search\"\n UPDATE = \"update\"\n\n\ndef get_configure_parser_function(command):\n module = 'conda.cli.main_' + command\n return import_module(module).configure_parser\n\n\ndef run_command(command, *arguments, **kwargs):\n \"\"\"Runs a conda command in-process with a given set of command-line interface arguments.\n\n Differences from the command-line interface:\n Always uses --yes flag, thus does not ask for confirmation.\n\n Args:\n command: one of the Commands.X\n *arguments: instructions you would normally pass to the conda comamnd on the command line\n see below for examples\n **kwargs: special instructions for programmatic overrides\n use_exception_handler: defaults to False. False will let the code calling\n `run_command` handle all exceptions. True won't raise when an exception\n has occured, and instead give a non-zero return code\n search_path: an optional non-standard search path for configuration information\n that overrides the default SEARCH_PATH\n\n Returns: a tuple of stdout, stderr, and return_code\n\n Examples:\n >> run_command(Commands.CREATE, \"-n newenv python=3 flask\", use_exception_handler=True)\n >> run_command(Commands.CREATE, \"-n newenv\", \"python=3\", \"flask\")\n >> run_command(Commands.CREATE, [\"-n newenv\", \"python=3\", \"flask\"], search_path=())\n\n\n \"\"\"\n use_exception_handler = kwargs.get('use_exception_handler', False)\n configuration_search_path = kwargs.get('search_path', SEARCH_PATH)\n p, sub_parsers = generate_parser()\n get_configure_parser_function(command)(sub_parsers)\n\n arguments = map(win_path_double_escape, arguments)\n command_line = \"%s %s\" % (command, \" \".join(arguments))\n split_command_line = split(command_line)\n\n args = p.parse_args(split_command_line)\n args.yes = True # always skip user confirmation, force setting context.always_yes\n context.__init__(\n search_path=configuration_search_path,\n app_name=APP_NAME,\n argparse_args=args,\n )\n log.debug(\"executing command >>> conda %s\", command_line)\n try:\n with captured() as c, replace_log_streams():\n if use_exception_handler:\n return_code = conda_exception_handler(args.func, args, p)\n else:\n return_code = args.func(args, p)\n except Exception as e:\n log.debug(\"\\n stdout: %s\\n stderr: %s\", c.stdout, c.stderr)\n e.stdout, e.stderr = c.stdout, c.stderr\n raise e\n log.debug(\"\\n stdout: %s\\n stderr: %s\\n return_code: %s\", c.stdout, c.stderr, return_code)\n return c.stdout, c.stderr, return_code\n", "path": "conda/cli/python_api.py"}]}
| 1,784 | 195 |
gh_patches_debug_34298
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-1398
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add price to Turkey
I am not sure which of the ticker [prices](https://www.epias.com.tr/en) to use but section 6.7.3 BLOCK ORDER BOOK in [this guide](https://www.epias.com.tr/wp-content/uploads/2017/06/INTRADAY_MARKET_USER_GUIDE_ENG_18.09.2017.pdf) may help.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/TR.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import requests
4 import re
5 import json
6 import arrow
7
8 SEARCH_DATA = re.compile(r'var gunlukUretimEgrisiData = (?P<data>.*);')
9 TIMEZONE = 'Europe/Istanbul'
10 URL = 'https://ytbs.teias.gov.tr/ytbs/frm_login.jsf'
11 EMPTY_DAY = -1
12
13 MAP_GENERATION = {
14 'akarsu': 'hydro',
15 'barajli': 'hydro',
16 'dogalgaz': 'gas',
17 'lng': 'gas',
18 'lpg': 'gas',
19 'jeotermal': 'geothermal',
20 'taskomur': 'coal',
21 'asfaltitkomur': 'coal',
22 'linyit': 'coal',
23 'ithalkomur': 'coal',
24 'ruzgar': 'wind',
25 'fueloil': 'oil',
26 'biyokutle': 'biomass',
27 'nafta': 'oil',
28 'gunes': 'solar',
29 'nukleer': 'nuclear',
30 'kojenerasyon': 'unknown',
31 'motorin': 'oil',
32 }
33
34
35 def as_float(prod):
36 """Convert json values to float and sum all production for a further use"""
37 prod['total'] = 0.0
38 if isinstance(prod, dict) and 'yuk' not in prod.keys():
39 for prod_type, prod_val in prod.items():
40 prod[prod_type] = float(prod_val)
41 prod['total'] += prod[prod_type]
42 return prod
43
44
45 def get_last_data_idx(productions):
46 """
47 Find index of the last production
48 :param productions: list of 24 production dict objects
49 :return: (int) index of the newest data or -1 if no data (empty day)
50 """
51 for i in range(len(productions)):
52 if productions[i]['total'] < 1000:
53 return i - 1
54 return len(productions) - 1 # full day
55
56
57 def fetch_production(zone_key='TR', session=None, target_datetime=None, logger=None):
58 """
59 Requests the last known production mix (in MW) of a given country
60 Arguments:
61 zone_key (optional) -- used in case a parser is able to fetch multiple countries
62 session (optional) -- request session passed in order to re-use an existing session
63 Return:
64 A list of dictionaries in the form:
65 {
66 'zoneKey': 'FR',
67 'datetime': '2017-01-01T00:00:00Z',
68 'production': {
69 'biomass': 0.0,
70 'coal': 0.0,
71 'gas': 0.0,
72 'hydro': 0.0,
73 'nuclear': null,
74 'oil': 0.0,
75 'solar': 0.0,
76 'wind': 0.0,
77 'geothermal': 0.0,
78 'unknown': 0.0
79 },
80 'storage': {
81 'hydro': -10.0,
82 },
83 'source': 'mysource.com'
84 }
85 """
86 if target_datetime:
87 raise NotImplementedError('This parser is not yet able to parse past dates')
88
89 session = None # Explicitely make a new session to avoid caching from their server...
90 r = session or requests.session()
91 tr_datetime = arrow.now().to('Europe/Istanbul').floor('day')
92 response = r.get(URL, verify=False)
93 str_data = re.search(SEARCH_DATA, response.text)
94
95 production_by_hour = []
96 if str_data:
97 productions = json.loads(str_data.group('data'), object_hook=as_float)
98 last_data_index = get_last_data_idx(productions)
99 valid_production = productions[:last_data_index + 1]
100 if last_data_index != EMPTY_DAY:
101 for datapoint in valid_production:
102 data = {
103 'zoneKey': zone_key,
104 'production': {},
105 'storage': {},
106 'source': 'ytbs.teias.gov.tr',
107 'datetime': None
108 }
109 data['production'] = dict(zip(MAP_GENERATION.values(), [0] * len(MAP_GENERATION)))
110 for prod_type, prod_val in datapoint.items():
111 if prod_type in MAP_GENERATION.keys():
112 data['production'][MAP_GENERATION[prod_type]] += prod_val
113 elif prod_type not in ['total', 'uluslarasi', 'saat']:
114 logger.warning('Warning: %s (%d) is missing in mapping!' % (prod_type, prod_val))
115
116 try:
117 data['datetime'] = tr_datetime.replace(hour=int(datapoint['saat'])).datetime
118 except ValueError:
119 # 24 is not a valid hour!
120 data['datetime'] = tr_datetime.datetime
121
122 production_by_hour.append(data)
123 else:
124 raise Exception('Extracted data was None')
125
126 return production_by_hour
127
128
129 if __name__ == '__main__':
130 """Main method, never used by the Electricity Map backend, but handy for testing."""
131
132 print('fetch_production() ->')
133 print(fetch_production())
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/TR.py b/parsers/TR.py
--- a/parsers/TR.py
+++ b/parsers/TR.py
@@ -4,12 +4,17 @@
import re
import json
import arrow
+from bs4 import BeautifulSoup
+import datetime as dt
SEARCH_DATA = re.compile(r'var gunlukUretimEgrisiData = (?P<data>.*);')
TIMEZONE = 'Europe/Istanbul'
URL = 'https://ytbs.teias.gov.tr/ytbs/frm_login.jsf'
EMPTY_DAY = -1
+PRICE_URL = 'https://seffaflik.epias.com.tr/transparency/piyasalar/' \
+ 'gop/ptf.xhtml'
+
MAP_GENERATION = {
'akarsu': 'hydro',
'barajli': 'hydro',
@@ -54,6 +59,35 @@
return len(productions) - 1 # full day
+def fetch_price():
+ soup = BeautifulSoup(requests.get(PRICE_URL).text, 'html.parser')
+ cells = soup.select('.TexAlCenter')
+
+ # data is in td elements with class "TexAlCenter" and role "gridcell"
+ data = list()
+ for cell in cells:
+ if cell.attrs.get('role', '') != 'gridcell':
+ continue
+ data.append(cell.text)
+
+ dates = [dt.datetime.strptime(val, '%d/%m/%Y').date()
+ for i, val in enumerate(data) if i % 3 == 0]
+ times = [dt.datetime.strptime(val, '%H:%M').time()
+ for i, val in enumerate(data) if i % 3 == 1]
+ prices = [float(val.replace(',', '.'))
+ for i, val in enumerate(data) if i % 3 == 2]
+
+ datapoints = [{
+ 'zoneKey': 'TR',
+ 'currency': 'TRY',
+ 'datetime': arrow.get(
+ dt.datetime.combine(date, time)).to('Europe/Istanbul').datetime,
+ 'price': price,
+ 'source': 'epias.com.tr'
+ } for date, time, price in zip(dates, times, prices)]
+ return datapoints
+
+
def fetch_production(zone_key='TR', session=None, target_datetime=None, logger=None):
"""
Requests the last known production mix (in MW) of a given country
@@ -127,7 +161,10 @@
if __name__ == '__main__':
- """Main method, never used by the Electricity Map backend, but handy for testing."""
+ """Main method, never used by the Electricity Map backend, but handy for
+ testing."""
print('fetch_production() ->')
print(fetch_production())
+ print('fetch_price() ->')
+ print(fetch_price())
|
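To make the cell-grouping logic in the new `fetch_price()` above easier to follow, here is a self-contained sketch that replays it on hard-coded sample cells; the date/time/price values are invented, whereas a live run would scrape them from the ePİAŞ transparency page as the patch does.

```python
# Illustrative sketch of the i % 3 grouping used in fetch_price(); the sample
# cell values below are invented, not real market data.
import datetime as dt

import arrow

cells = ["01/06/2018", "00:00", "154,43",
         "01/06/2018", "01:00", "141,02"]

dates = [dt.datetime.strptime(v, "%d/%m/%Y").date()
         for i, v in enumerate(cells) if i % 3 == 0]
times = [dt.datetime.strptime(v, "%H:%M").time()
         for i, v in enumerate(cells) if i % 3 == 1]
prices = [float(v.replace(",", "."))
          for i, v in enumerate(cells) if i % 3 == 2]

for date, time, price in zip(dates, times, prices):
    stamp = arrow.get(dt.datetime.combine(date, time)).to("Europe/Istanbul").datetime
    print({"zoneKey": "TR", "currency": "TRY", "datetime": stamp, "price": price})
```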
{"golden_diff": "diff --git a/parsers/TR.py b/parsers/TR.py\n--- a/parsers/TR.py\n+++ b/parsers/TR.py\n@@ -4,12 +4,17 @@\n import re\n import json\n import arrow\n+from bs4 import BeautifulSoup\n+import datetime as dt\n \n SEARCH_DATA = re.compile(r'var gunlukUretimEgrisiData = (?P<data>.*);')\n TIMEZONE = 'Europe/Istanbul'\n URL = 'https://ytbs.teias.gov.tr/ytbs/frm_login.jsf'\n EMPTY_DAY = -1\n \n+PRICE_URL = 'https://seffaflik.epias.com.tr/transparency/piyasalar/' \\\n+ 'gop/ptf.xhtml'\n+\n MAP_GENERATION = {\n 'akarsu': 'hydro',\n 'barajli': 'hydro',\n@@ -54,6 +59,35 @@\n return len(productions) - 1 # full day\n \n \n+def fetch_price():\n+ soup = BeautifulSoup(requests.get(PRICE_URL).text, 'html.parser')\n+ cells = soup.select('.TexAlCenter')\n+\n+ # data is in td elements with class \"TexAlCenter\" and role \"gridcell\"\n+ data = list()\n+ for cell in cells:\n+ if cell.attrs.get('role', '') != 'gridcell':\n+ continue\n+ data.append(cell.text)\n+\n+ dates = [dt.datetime.strptime(val, '%d/%m/%Y').date()\n+ for i, val in enumerate(data) if i % 3 == 0]\n+ times = [dt.datetime.strptime(val, '%H:%M').time()\n+ for i, val in enumerate(data) if i % 3 == 1]\n+ prices = [float(val.replace(',', '.'))\n+ for i, val in enumerate(data) if i % 3 == 2]\n+\n+ datapoints = [{\n+ 'zoneKey': 'TR',\n+ 'currency': 'TRY',\n+ 'datetime': arrow.get(\n+ dt.datetime.combine(date, time)).to('Europe/Istanbul').datetime,\n+ 'price': price,\n+ 'source': 'epias.com.tr'\n+ } for date, time, price in zip(dates, times, prices)]\n+ return datapoints\n+\n+\n def fetch_production(zone_key='TR', session=None, target_datetime=None, logger=None):\n \"\"\"\n Requests the last known production mix (in MW) of a given country\n@@ -127,7 +161,10 @@\n \n \n if __name__ == '__main__':\n- \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n+ \"\"\"Main method, never used by the Electricity Map backend, but handy for\n+ testing.\"\"\"\n \n print('fetch_production() ->')\n print(fetch_production())\n+ print('fetch_price() ->')\n+ print(fetch_price())\n", "issue": "Add price to Turkey\nI am not sure which of the ticker [prices](https://www.epias.com.tr/en) to use but section 6.7.3 BLOCK ORDER BOOK in [this guide](https://www.epias.com.tr/wp-content/uploads/2017/06/INTRADAY_MARKET_USER_GUIDE_ENG_18.09.2017.pdf) may help.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport requests\nimport re\nimport json\nimport arrow\n\nSEARCH_DATA = re.compile(r'var gunlukUretimEgrisiData = (?P<data>.*);')\nTIMEZONE = 'Europe/Istanbul'\nURL = 'https://ytbs.teias.gov.tr/ytbs/frm_login.jsf'\nEMPTY_DAY = -1\n\nMAP_GENERATION = {\n 'akarsu': 'hydro',\n 'barajli': 'hydro',\n 'dogalgaz': 'gas',\n 'lng': 'gas',\n 'lpg': 'gas',\n 'jeotermal': 'geothermal',\n 'taskomur': 'coal',\n 'asfaltitkomur': 'coal',\n 'linyit': 'coal',\n 'ithalkomur': 'coal',\n 'ruzgar': 'wind',\n 'fueloil': 'oil',\n 'biyokutle': 'biomass',\n 'nafta': 'oil',\n 'gunes': 'solar',\n 'nukleer': 'nuclear',\n 'kojenerasyon': 'unknown',\n 'motorin': 'oil',\n}\n\n\ndef as_float(prod):\n \"\"\"Convert json values to float and sum all production for a further use\"\"\"\n prod['total'] = 0.0\n if isinstance(prod, dict) and 'yuk' not in prod.keys():\n for prod_type, prod_val in prod.items():\n prod[prod_type] = float(prod_val)\n prod['total'] += prod[prod_type]\n return prod\n\n\ndef get_last_data_idx(productions):\n \"\"\"\n Find index of the last production\n :param productions: list of 24 production dict objects\n :return: (int) index of the 
newest data or -1 if no data (empty day)\n \"\"\"\n for i in range(len(productions)):\n if productions[i]['total'] < 1000:\n return i - 1\n return len(productions) - 1 # full day\n\n\ndef fetch_production(zone_key='TR', session=None, target_datetime=None, logger=None):\n \"\"\"\n Requests the last known production mix (in MW) of a given country\n Arguments:\n zone_key (optional) -- used in case a parser is able to fetch multiple countries\n session (optional) -- request session passed in order to re-use an existing session\n Return:\n A list of dictionaries in the form:\n {\n 'zoneKey': 'FR',\n 'datetime': '2017-01-01T00:00:00Z',\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': null,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {\n 'hydro': -10.0,\n },\n 'source': 'mysource.com'\n }\n \"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n session = None # Explicitely make a new session to avoid caching from their server...\n r = session or requests.session()\n tr_datetime = arrow.now().to('Europe/Istanbul').floor('day')\n response = r.get(URL, verify=False)\n str_data = re.search(SEARCH_DATA, response.text)\n\n production_by_hour = []\n if str_data:\n productions = json.loads(str_data.group('data'), object_hook=as_float)\n last_data_index = get_last_data_idx(productions)\n valid_production = productions[:last_data_index + 1]\n if last_data_index != EMPTY_DAY:\n for datapoint in valid_production:\n data = {\n 'zoneKey': zone_key,\n 'production': {},\n 'storage': {},\n 'source': 'ytbs.teias.gov.tr',\n 'datetime': None\n }\n data['production'] = dict(zip(MAP_GENERATION.values(), [0] * len(MAP_GENERATION)))\n for prod_type, prod_val in datapoint.items():\n if prod_type in MAP_GENERATION.keys():\n data['production'][MAP_GENERATION[prod_type]] += prod_val\n elif prod_type not in ['total', 'uluslarasi', 'saat']:\n logger.warning('Warning: %s (%d) is missing in mapping!' 
% (prod_type, prod_val))\n\n try:\n data['datetime'] = tr_datetime.replace(hour=int(datapoint['saat'])).datetime\n except ValueError:\n # 24 is not a valid hour!\n data['datetime'] = tr_datetime.datetime\n\n production_by_hour.append(data)\n else:\n raise Exception('Extracted data was None')\n\n return production_by_hour\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n print('fetch_production() ->')\n print(fetch_production())\n", "path": "parsers/TR.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport requests\nimport re\nimport json\nimport arrow\nfrom bs4 import BeautifulSoup\nimport datetime as dt\n\nSEARCH_DATA = re.compile(r'var gunlukUretimEgrisiData = (?P<data>.*);')\nTIMEZONE = 'Europe/Istanbul'\nURL = 'https://ytbs.teias.gov.tr/ytbs/frm_login.jsf'\nEMPTY_DAY = -1\n\nPRICE_URL = 'https://seffaflik.epias.com.tr/transparency/piyasalar/' \\\n 'gop/ptf.xhtml'\n\nMAP_GENERATION = {\n 'akarsu': 'hydro',\n 'barajli': 'hydro',\n 'dogalgaz': 'gas',\n 'lng': 'gas',\n 'lpg': 'gas',\n 'jeotermal': 'geothermal',\n 'taskomur': 'coal',\n 'asfaltitkomur': 'coal',\n 'linyit': 'coal',\n 'ithalkomur': 'coal',\n 'ruzgar': 'wind',\n 'fueloil': 'oil',\n 'biyokutle': 'biomass',\n 'nafta': 'oil',\n 'gunes': 'solar',\n 'nukleer': 'nuclear',\n 'kojenerasyon': 'unknown',\n 'motorin': 'oil',\n}\n\n\ndef as_float(prod):\n \"\"\"Convert json values to float and sum all production for a further use\"\"\"\n prod['total'] = 0.0\n if isinstance(prod, dict) and 'yuk' not in prod.keys():\n for prod_type, prod_val in prod.items():\n prod[prod_type] = float(prod_val)\n prod['total'] += prod[prod_type]\n return prod\n\n\ndef get_last_data_idx(productions):\n \"\"\"\n Find index of the last production\n :param productions: list of 24 production dict objects\n :return: (int) index of the newest data or -1 if no data (empty day)\n \"\"\"\n for i in range(len(productions)):\n if productions[i]['total'] < 1000:\n return i - 1\n return len(productions) - 1 # full day\n\n\ndef fetch_price():\n soup = BeautifulSoup(requests.get(PRICE_URL).text, 'html.parser')\n cells = soup.select('.TexAlCenter')\n\n # data is in td elements with class \"TexAlCenter\" and role \"gridcell\"\n data = list()\n for cell in cells:\n if cell.attrs.get('role', '') != 'gridcell':\n continue\n data.append(cell.text)\n\n dates = [dt.datetime.strptime(val, '%d/%m/%Y').date()\n for i, val in enumerate(data) if i % 3 == 0]\n times = [dt.datetime.strptime(val, '%H:%M').time()\n for i, val in enumerate(data) if i % 3 == 1]\n prices = [float(val.replace(',', '.'))\n for i, val in enumerate(data) if i % 3 == 2]\n\n datapoints = [{\n 'zoneKey': 'TR',\n 'currency': 'TRY',\n 'datetime': arrow.get(\n dt.datetime.combine(date, time)).to('Europe/Istanbul').datetime,\n 'price': price,\n 'source': 'epias.com.tr'\n } for date, time, price in zip(dates, times, prices)]\n return datapoints\n\n\ndef fetch_production(zone_key='TR', session=None, target_datetime=None, logger=None):\n \"\"\"\n Requests the last known production mix (in MW) of a given country\n Arguments:\n zone_key (optional) -- used in case a parser is able to fetch multiple countries\n session (optional) -- request session passed in order to re-use an existing session\n Return:\n A list of dictionaries in the form:\n {\n 'zoneKey': 'FR',\n 'datetime': '2017-01-01T00:00:00Z',\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': null,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 
0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {\n 'hydro': -10.0,\n },\n 'source': 'mysource.com'\n }\n \"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n session = None # Explicitely make a new session to avoid caching from their server...\n r = session or requests.session()\n tr_datetime = arrow.now().to('Europe/Istanbul').floor('day')\n response = r.get(URL, verify=False)\n str_data = re.search(SEARCH_DATA, response.text)\n\n production_by_hour = []\n if str_data:\n productions = json.loads(str_data.group('data'), object_hook=as_float)\n last_data_index = get_last_data_idx(productions)\n valid_production = productions[:last_data_index + 1]\n if last_data_index != EMPTY_DAY:\n for datapoint in valid_production:\n data = {\n 'zoneKey': zone_key,\n 'production': {},\n 'storage': {},\n 'source': 'ytbs.teias.gov.tr',\n 'datetime': None\n }\n data['production'] = dict(zip(MAP_GENERATION.values(), [0] * len(MAP_GENERATION)))\n for prod_type, prod_val in datapoint.items():\n if prod_type in MAP_GENERATION.keys():\n data['production'][MAP_GENERATION[prod_type]] += prod_val\n elif prod_type not in ['total', 'uluslarasi', 'saat']:\n logger.warning('Warning: %s (%d) is missing in mapping!' % (prod_type, prod_val))\n\n try:\n data['datetime'] = tr_datetime.replace(hour=int(datapoint['saat'])).datetime\n except ValueError:\n # 24 is not a valid hour!\n data['datetime'] = tr_datetime.datetime\n\n production_by_hour.append(data)\n else:\n raise Exception('Extracted data was None')\n\n return production_by_hour\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for\n testing.\"\"\"\n\n print('fetch_production() ->')\n print(fetch_production())\n print('fetch_price() ->')\n print(fetch_price())\n", "path": "parsers/TR.py"}]}
| 1,772 | 630 |
gh_patches_debug_13560
|
rasdani/github-patches
|
git_diff
|
ydataai__ydata-profiling-1109
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cramer correlation matrix is not computed
/usr/local/lib/python3.7/dist-packages/pandas_profiling/model/correlations.py:61: UserWarning:There was an attempt to calculate the cramers correlation, but this failed.
To hide this warning, disable the calculation
(using `df.profile_report(correlations={"cramers": {"calculate": False}})`
If this is problematic for your use case, please report this as an issue:
https://github.com/ydataai/pandas-profiling/issues
(include the error message: 'No data; `observed` has size 0.')
pls help. My dataset is https://busan302.mycourses.work/data/house_price_train.csv
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pandas_profiling/model/pandas/correlations_pandas.py`
Content:
```
1 """Correlations between variables."""
2 import itertools
3 import warnings
4 from typing import Optional
5
6 import numpy as np
7 import pandas as pd
8 from scipy import stats
9
10 from pandas_profiling.config import Settings
11 from pandas_profiling.model.correlations import (
12 Cramers,
13 Kendall,
14 Pearson,
15 PhiK,
16 Spearman,
17 )
18
19
20 @Spearman.compute.register(Settings, pd.DataFrame, dict)
21 def pandas_spearman_compute(
22 config: Settings, df: pd.DataFrame, summary: dict
23 ) -> Optional[pd.DataFrame]:
24 return df.corr(method="spearman")
25
26
27 @Pearson.compute.register(Settings, pd.DataFrame, dict)
28 def pandas_pearson_compute(
29 config: Settings, df: pd.DataFrame, summary: dict
30 ) -> Optional[pd.DataFrame]:
31 return df.corr(method="pearson")
32
33
34 @Kendall.compute.register(Settings, pd.DataFrame, dict)
35 def pandas_kendall_compute(
36 config: Settings, df: pd.DataFrame, summary: dict
37 ) -> Optional[pd.DataFrame]:
38 return df.corr(method="kendall")
39
40
41 def _cramers_corrected_stat(confusion_matrix: pd.DataFrame, correction: bool) -> float:
42 """Calculate the Cramer's V corrected stat for two variables.
43
44 Args:
45 confusion_matrix: Crosstab between two variables.
46 correction: Should the correction be applied?
47
48 Returns:
49 The Cramer's V corrected stat for the two variables.
50 """
51 chi2 = stats.chi2_contingency(confusion_matrix, correction=correction)[0]
52 n = confusion_matrix.sum().sum()
53 phi2 = chi2 / n
54 r = confusion_matrix.shape[0]
55 k = confusion_matrix.shape[1] if len(confusion_matrix.shape) > 1 else 1
56
57 # Deal with NaNs later on
58 with np.errstate(divide="ignore", invalid="ignore"):
59 phi2corr = max(0.0, phi2 - ((k - 1.0) * (r - 1.0)) / (n - 1.0))
60 rcorr = r - ((r - 1.0) ** 2.0) / (n - 1.0)
61 kcorr = k - ((k - 1.0) ** 2.0) / (n - 1.0)
62 rkcorr = min((kcorr - 1.0), (rcorr - 1.0))
63 if rkcorr == 0.0:
64 corr = 1.0
65 else:
66 corr = np.sqrt(phi2corr / rkcorr)
67 return corr
68
69
70 @Cramers.compute.register(Settings, pd.DataFrame, dict)
71 def pandas_cramers_compute(
72 config: Settings, df: pd.DataFrame, summary: dict
73 ) -> Optional[pd.DataFrame]:
74 threshold = config.categorical_maximum_correlation_distinct
75
76 # `index` and `columns` must not be a set since Pandas 1.5,
77 # so convert it to a list. The order of the list is arbitrary.
78 categoricals = list(
79 {
80 key
81 for key, value in summary.items()
82 if value["type"] in {"Categorical", "Boolean"}
83 and value["n_distinct"] <= threshold
84 }
85 )
86
87 if len(categoricals) <= 1:
88 return None
89
90 matrix = np.zeros((len(categoricals), len(categoricals)))
91 np.fill_diagonal(matrix, 1.0)
92 correlation_matrix = pd.DataFrame(
93 matrix,
94 index=categoricals,
95 columns=categoricals,
96 )
97
98 for name1, name2 in itertools.combinations(categoricals, 2):
99 confusion_matrix = pd.crosstab(df[name1], df[name2])
100 correlation_matrix.loc[name2, name1] = _cramers_corrected_stat(
101 confusion_matrix, correction=True
102 )
103 correlation_matrix.loc[name1, name2] = correlation_matrix.loc[name2, name1]
104 return correlation_matrix
105
106
107 @PhiK.compute.register(Settings, pd.DataFrame, dict)
108 def pandas_phik_compute(
109 config: Settings, df: pd.DataFrame, summary: dict
110 ) -> Optional[pd.DataFrame]:
111 df_cols_dict = {i: list(df.columns).index(i) for i in df.columns}
112
113 intcols = {
114 key
115 for key, value in summary.items()
116 # DateTime currently excluded
117 # In some use cases, it makes sense to convert it to interval
118 # See https://github.com/KaveIO/PhiK/issues/7
119 if value["type"] == "Numeric" and 1 < value["n_distinct"]
120 }
121
122 selcols = {
123 key
124 for key, value in summary.items()
125 if value["type"] != "Unsupported"
126 and 1 < value["n_distinct"] <= config.categorical_maximum_correlation_distinct
127 }
128 selcols = selcols.union(intcols)
129 selected_cols = sorted(selcols, key=lambda i: df_cols_dict[i])
130
131 if len(selected_cols) <= 1:
132 return None
133
134 with warnings.catch_warnings():
135 warnings.simplefilter("ignore")
136 from phik import phik_matrix
137
138 correlation = phik_matrix(df[selected_cols], interval_cols=list(intcols))
139
140 return correlation
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pandas_profiling/model/pandas/correlations_pandas.py b/src/pandas_profiling/model/pandas/correlations_pandas.py
--- a/src/pandas_profiling/model/pandas/correlations_pandas.py
+++ b/src/pandas_profiling/model/pandas/correlations_pandas.py
@@ -97,9 +97,12 @@
for name1, name2 in itertools.combinations(categoricals, 2):
confusion_matrix = pd.crosstab(df[name1], df[name2])
- correlation_matrix.loc[name2, name1] = _cramers_corrected_stat(
- confusion_matrix, correction=True
- )
+ if confusion_matrix.empty:
+ correlation_matrix.loc[name2, name1] = np.nan
+ else:
+ correlation_matrix.loc[name2, name1] = _cramers_corrected_stat(
+ confusion_matrix, correction=True
+ )
correlation_matrix.loc[name1, name2] = correlation_matrix.loc[name2, name1]
return correlation_matrix
|
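For context on why the guard in the patch above is needed (a minimal reproduction sketch; the toy data is not taken from the reporter's dataset): when two categorical columns never share a row where both are non-null, `pd.crosstab` returns an empty frame and `scipy.stats.chi2_contingency` raises exactly "No data; `observed` has size 0.", so the patch records NaN for that pair instead.

```python
# Minimal reproduction of the failure mode; the toy DataFrame is invented.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.DataFrame({"a": ["x", "y", None, None],
                   "b": [None, None, "u", "v"]})

confusion_matrix = pd.crosstab(df["a"], df["b"])
print(confusion_matrix.empty)  # True: no row has both columns populated

if confusion_matrix.empty:
    corr = np.nan  # what the patched computation stores for this pair
else:
    corr = stats.chi2_contingency(confusion_matrix, correction=True)[0]
print(corr)
```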
{"golden_diff": "diff --git a/src/pandas_profiling/model/pandas/correlations_pandas.py b/src/pandas_profiling/model/pandas/correlations_pandas.py\n--- a/src/pandas_profiling/model/pandas/correlations_pandas.py\n+++ b/src/pandas_profiling/model/pandas/correlations_pandas.py\n@@ -97,9 +97,12 @@\n \n for name1, name2 in itertools.combinations(categoricals, 2):\n confusion_matrix = pd.crosstab(df[name1], df[name2])\n- correlation_matrix.loc[name2, name1] = _cramers_corrected_stat(\n- confusion_matrix, correction=True\n- )\n+ if confusion_matrix.empty:\n+ correlation_matrix.loc[name2, name1] = np.nan\n+ else:\n+ correlation_matrix.loc[name2, name1] = _cramers_corrected_stat(\n+ confusion_matrix, correction=True\n+ )\n correlation_matrix.loc[name1, name2] = correlation_matrix.loc[name2, name1]\n return correlation_matrix\n", "issue": "Cramer correlation matrix is not computed\n/usr/local/lib/python3.7/dist-packages/pandas_profiling/model/correlations.py:61: UserWarning:There was an attempt to calculate the cramers correlation, but this failed.\r\nTo hide this warning, disable the calculation\r\n(using `df.profile_report(correlations={\"cramers\": {\"calculate\": False}})`\r\nIf this is problematic for your use case, please report this as an issue:\r\nhttps://github.com/ydataai/pandas-profiling/issues\r\n(include the error message: 'No data; `observed` has size 0.')\r\n pls help. My dataset is https://busan302.mycourses.work/data/house_price_train.csv\n", "before_files": [{"content": "\"\"\"Correlations between variables.\"\"\"\nimport itertools\nimport warnings\nfrom typing import Optional\n\nimport numpy as np\nimport pandas as pd\nfrom scipy import stats\n\nfrom pandas_profiling.config import Settings\nfrom pandas_profiling.model.correlations import (\n Cramers,\n Kendall,\n Pearson,\n PhiK,\n Spearman,\n)\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_spearman_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n return df.corr(method=\"spearman\")\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_pearson_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n return df.corr(method=\"pearson\")\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_kendall_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n return df.corr(method=\"kendall\")\n\n\ndef _cramers_corrected_stat(confusion_matrix: pd.DataFrame, correction: bool) -> float:\n \"\"\"Calculate the Cramer's V corrected stat for two variables.\n\n Args:\n confusion_matrix: Crosstab between two variables.\n correction: Should the correction be applied?\n\n Returns:\n The Cramer's V corrected stat for the two variables.\n \"\"\"\n chi2 = stats.chi2_contingency(confusion_matrix, correction=correction)[0]\n n = confusion_matrix.sum().sum()\n phi2 = chi2 / n\n r = confusion_matrix.shape[0]\n k = confusion_matrix.shape[1] if len(confusion_matrix.shape) > 1 else 1\n\n # Deal with NaNs later on\n with np.errstate(divide=\"ignore\", invalid=\"ignore\"):\n phi2corr = max(0.0, phi2 - ((k - 1.0) * (r - 1.0)) / (n - 1.0))\n rcorr = r - ((r - 1.0) ** 2.0) / (n - 1.0)\n kcorr = k - ((k - 1.0) ** 2.0) / (n - 1.0)\n rkcorr = min((kcorr - 1.0), (rcorr - 1.0))\n if rkcorr == 0.0:\n corr = 1.0\n else:\n corr = np.sqrt(phi2corr / rkcorr)\n return corr\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_cramers_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> 
Optional[pd.DataFrame]:\n threshold = config.categorical_maximum_correlation_distinct\n\n # `index` and `columns` must not be a set since Pandas 1.5,\n # so convert it to a list. The order of the list is arbitrary.\n categoricals = list(\n {\n key\n for key, value in summary.items()\n if value[\"type\"] in {\"Categorical\", \"Boolean\"}\n and value[\"n_distinct\"] <= threshold\n }\n )\n\n if len(categoricals) <= 1:\n return None\n\n matrix = np.zeros((len(categoricals), len(categoricals)))\n np.fill_diagonal(matrix, 1.0)\n correlation_matrix = pd.DataFrame(\n matrix,\n index=categoricals,\n columns=categoricals,\n )\n\n for name1, name2 in itertools.combinations(categoricals, 2):\n confusion_matrix = pd.crosstab(df[name1], df[name2])\n correlation_matrix.loc[name2, name1] = _cramers_corrected_stat(\n confusion_matrix, correction=True\n )\n correlation_matrix.loc[name1, name2] = correlation_matrix.loc[name2, name1]\n return correlation_matrix\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_phik_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n df_cols_dict = {i: list(df.columns).index(i) for i in df.columns}\n\n intcols = {\n key\n for key, value in summary.items()\n # DateTime currently excluded\n # In some use cases, it makes sense to convert it to interval\n # See https://github.com/KaveIO/PhiK/issues/7\n if value[\"type\"] == \"Numeric\" and 1 < value[\"n_distinct\"]\n }\n\n selcols = {\n key\n for key, value in summary.items()\n if value[\"type\"] != \"Unsupported\"\n and 1 < value[\"n_distinct\"] <= config.categorical_maximum_correlation_distinct\n }\n selcols = selcols.union(intcols)\n selected_cols = sorted(selcols, key=lambda i: df_cols_dict[i])\n\n if len(selected_cols) <= 1:\n return None\n\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n from phik import phik_matrix\n\n correlation = phik_matrix(df[selected_cols], interval_cols=list(intcols))\n\n return correlation\n", "path": "src/pandas_profiling/model/pandas/correlations_pandas.py"}], "after_files": [{"content": "\"\"\"Correlations between variables.\"\"\"\nimport itertools\nimport warnings\nfrom typing import Optional\n\nimport numpy as np\nimport pandas as pd\nfrom scipy import stats\n\nfrom pandas_profiling.config import Settings\nfrom pandas_profiling.model.correlations import (\n Cramers,\n Kendall,\n Pearson,\n PhiK,\n Spearman,\n)\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_spearman_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n return df.corr(method=\"spearman\")\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_pearson_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n return df.corr(method=\"pearson\")\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_kendall_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n return df.corr(method=\"kendall\")\n\n\ndef _cramers_corrected_stat(confusion_matrix: pd.DataFrame, correction: bool) -> float:\n \"\"\"Calculate the Cramer's V corrected stat for two variables.\n\n Args:\n confusion_matrix: Crosstab between two variables.\n correction: Should the correction be applied?\n\n Returns:\n The Cramer's V corrected stat for the two variables.\n \"\"\"\n chi2 = stats.chi2_contingency(confusion_matrix, correction=correction)[0]\n n = confusion_matrix.sum().sum()\n phi2 = chi2 / n\n r = confusion_matrix.shape[0]\n k = 
confusion_matrix.shape[1] if len(confusion_matrix.shape) > 1 else 1\n\n # Deal with NaNs later on\n with np.errstate(divide=\"ignore\", invalid=\"ignore\"):\n phi2corr = max(0.0, phi2 - ((k - 1.0) * (r - 1.0)) / (n - 1.0))\n rcorr = r - ((r - 1.0) ** 2.0) / (n - 1.0)\n kcorr = k - ((k - 1.0) ** 2.0) / (n - 1.0)\n rkcorr = min((kcorr - 1.0), (rcorr - 1.0))\n if rkcorr == 0.0:\n corr = 1.0\n else:\n corr = np.sqrt(phi2corr / rkcorr)\n return corr\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_cramers_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n threshold = config.categorical_maximum_correlation_distinct\n\n # `index` and `columns` must not be a set since Pandas 1.5,\n # so convert it to a list. The order of the list is arbitrary.\n categoricals = list(\n {\n key\n for key, value in summary.items()\n if value[\"type\"] in {\"Categorical\", \"Boolean\"}\n and value[\"n_distinct\"] <= threshold\n }\n )\n\n if len(categoricals) <= 1:\n return None\n\n matrix = np.zeros((len(categoricals), len(categoricals)))\n np.fill_diagonal(matrix, 1.0)\n correlation_matrix = pd.DataFrame(\n matrix,\n index=categoricals,\n columns=categoricals,\n )\n\n for name1, name2 in itertools.combinations(categoricals, 2):\n confusion_matrix = pd.crosstab(df[name1], df[name2])\n if confusion_matrix.empty:\n correlation_matrix.loc[name2, name1] = np.nan\n else:\n correlation_matrix.loc[name2, name1] = _cramers_corrected_stat(\n confusion_matrix, correction=True\n )\n correlation_matrix.loc[name1, name2] = correlation_matrix.loc[name2, name1]\n return correlation_matrix\n\n\[email protected](Settings, pd.DataFrame, dict)\ndef pandas_phik_compute(\n config: Settings, df: pd.DataFrame, summary: dict\n) -> Optional[pd.DataFrame]:\n df_cols_dict = {i: list(df.columns).index(i) for i in df.columns}\n\n intcols = {\n key\n for key, value in summary.items()\n # DateTime currently excluded\n # In some use cases, it makes sense to convert it to interval\n # See https://github.com/KaveIO/PhiK/issues/7\n if value[\"type\"] == \"Numeric\" and 1 < value[\"n_distinct\"]\n }\n\n selcols = {\n key\n for key, value in summary.items()\n if value[\"type\"] != \"Unsupported\"\n and 1 < value[\"n_distinct\"] <= config.categorical_maximum_correlation_distinct\n }\n selcols = selcols.union(intcols)\n selected_cols = sorted(selcols, key=lambda i: df_cols_dict[i])\n\n if len(selected_cols) <= 1:\n return None\n\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n from phik import phik_matrix\n\n correlation = phik_matrix(df[selected_cols], interval_cols=list(intcols))\n\n return correlation\n", "path": "src/pandas_profiling/model/pandas/correlations_pandas.py"}]}
| 1,876 | 229 |
gh_patches_debug_477
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-5786
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pypy3-pinned OpenSSL error
Yet another CI issue, visible in https://github.com/scrapy/scrapy/actions/runs/3849823417/jobs/6559259481
> /home/runner/work/scrapy/scrapy/.tox/pypy3-pinned/site-packages/cryptography/hazmat/bindings/_openssl.pypy37-pp73-x86_64-linux-gnu.so: undefined symbol: FIPS_mode
This may be a problem with that specific binary distribution (it's `cryptography==3.3`), not sure why it worked before, maybe something was rebuilt recently.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from pathlib import Path
2 from pkg_resources import parse_version
3 from setuptools import setup, find_packages, __version__ as setuptools_version
4
5
6 version = (Path(__file__).parent / 'scrapy/VERSION').read_text('ascii').strip()
7
8
9 def has_environment_marker_platform_impl_support():
10 """Code extracted from 'pytest/setup.py'
11 https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31
12
13 The first known release to support environment marker with range operators
14 it is 18.5, see:
15 https://setuptools.readthedocs.io/en/latest/history.html#id235
16 """
17 return parse_version(setuptools_version) >= parse_version('18.5')
18
19
20 install_requires = [
21 'Twisted>=18.9.0',
22 'cryptography>=3.3',
23 'cssselect>=0.9.1',
24 'itemloaders>=1.0.1',
25 'parsel>=1.5.0',
26 'pyOpenSSL>=21.0.0',
27 'queuelib>=1.4.2',
28 'service_identity>=18.1.0',
29 'w3lib>=1.17.0',
30 'zope.interface>=5.1.0',
31 'protego>=0.1.15',
32 'itemadapter>=0.1.0',
33 'setuptools',
34 'packaging',
35 'tldextract',
36 'lxml>=4.3.0',
37 ]
38 extras_require = {}
39 cpython_dependencies = [
40 'PyDispatcher>=2.0.5',
41 ]
42 if has_environment_marker_platform_impl_support():
43 extras_require[':platform_python_implementation == "CPython"'] = cpython_dependencies
44 extras_require[':platform_python_implementation == "PyPy"'] = [
45 'PyPyDispatcher>=2.1.0',
46 ]
47 else:
48 install_requires.extend(cpython_dependencies)
49
50
51 setup(
52 name='Scrapy',
53 version=version,
54 url='https://scrapy.org',
55 project_urls={
56 'Documentation': 'https://docs.scrapy.org/',
57 'Source': 'https://github.com/scrapy/scrapy',
58 'Tracker': 'https://github.com/scrapy/scrapy/issues',
59 },
60 description='A high-level Web Crawling and Web Scraping framework',
61 long_description=open('README.rst', encoding="utf-8").read(),
62 author='Scrapy developers',
63 author_email='[email protected]',
64 maintainer='Pablo Hoffman',
65 maintainer_email='[email protected]',
66 license='BSD',
67 packages=find_packages(exclude=('tests', 'tests.*')),
68 include_package_data=True,
69 zip_safe=False,
70 entry_points={
71 'console_scripts': ['scrapy = scrapy.cmdline:execute']
72 },
73 classifiers=[
74 'Framework :: Scrapy',
75 'Development Status :: 5 - Production/Stable',
76 'Environment :: Console',
77 'Intended Audience :: Developers',
78 'License :: OSI Approved :: BSD License',
79 'Operating System :: OS Independent',
80 'Programming Language :: Python',
81 'Programming Language :: Python :: 3',
82 'Programming Language :: Python :: 3.7',
83 'Programming Language :: Python :: 3.8',
84 'Programming Language :: Python :: 3.9',
85 'Programming Language :: Python :: 3.10',
86 'Programming Language :: Python :: 3.11',
87 'Programming Language :: Python :: Implementation :: CPython',
88 'Programming Language :: Python :: Implementation :: PyPy',
89 'Topic :: Internet :: WWW/HTTP',
90 'Topic :: Software Development :: Libraries :: Application Frameworks',
91 'Topic :: Software Development :: Libraries :: Python Modules',
92 ],
93 python_requires='>=3.7',
94 install_requires=install_requires,
95 extras_require=extras_require,
96 )
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -19,7 +19,7 @@
install_requires = [
'Twisted>=18.9.0',
- 'cryptography>=3.3',
+ 'cryptography>=3.4.6',
'cssselect>=0.9.1',
'itemloaders>=1.0.1',
'parsel>=1.5.0',
|
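As a side note (a hypothetical local verification step, not part of the patch): before re-running the pypy3-pinned tox environment, one can sanity-check that the installed wheel already satisfies the raised floor.

```python
# Hypothetical local sanity check; not part of the Scrapy patch itself.
import cryptography
from pkg_resources import parse_version

installed = parse_version(cryptography.__version__)
required = parse_version("3.4.6")
assert installed >= required, f"cryptography {cryptography.__version__} is below 3.4.6"
print("OK:", cryptography.__version__)
```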
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -19,7 +19,7 @@\n \n install_requires = [\n 'Twisted>=18.9.0',\n- 'cryptography>=3.3',\n+ 'cryptography>=3.4.6',\n 'cssselect>=0.9.1',\n 'itemloaders>=1.0.1',\n 'parsel>=1.5.0',\n", "issue": "pypy3-pinned OpenSSL error\nYet another CI issue, visible in https://github.com/scrapy/scrapy/actions/runs/3849823417/jobs/6559259481\r\n\r\n> /home/runner/work/scrapy/scrapy/.tox/pypy3-pinned/site-packages/cryptography/hazmat/bindings/_openssl.pypy37-pp73-x86_64-linux-gnu.so: undefined symbol: FIPS_mode\r\n\r\nThis may be a problem with that specific binary distribution (it's `cryptography==3.3`), not sure why it worked before, maybe something was rebuilt recently. \n", "before_files": [{"content": "from pathlib import Path\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nversion = (Path(__file__).parent / 'scrapy/VERSION').read_text('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\ninstall_requires = [\n 'Twisted>=18.9.0',\n 'cryptography>=3.3',\n 'cssselect>=0.9.1',\n 'itemloaders>=1.0.1',\n 'parsel>=1.5.0',\n 'pyOpenSSL>=21.0.0',\n 'queuelib>=1.4.2',\n 'service_identity>=18.1.0',\n 'w3lib>=1.17.0',\n 'zope.interface>=5.1.0',\n 'protego>=0.1.15',\n 'itemadapter>=0.1.0',\n 'setuptools',\n 'packaging',\n 'tldextract',\n 'lxml>=4.3.0',\n]\nextras_require = {}\ncpython_dependencies = [\n 'PyDispatcher>=2.0.5',\n]\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"CPython\"'] = cpython_dependencies\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n 'PyPyDispatcher>=2.1.0',\n ]\nelse:\n install_requires.extend(cpython_dependencies)\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n project_urls={\n 'Documentation': 'https://docs.scrapy.org/',\n 'Source': 'https://github.com/scrapy/scrapy',\n 'Tracker': 'https://github.com/scrapy/scrapy/issues',\n },\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst', encoding=\"utf-8\").read(),\n author='Scrapy developers',\n author_email='[email protected]',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: 3.11',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: 
Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.7',\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}], "after_files": [{"content": "from pathlib import Path\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nversion = (Path(__file__).parent / 'scrapy/VERSION').read_text('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\ninstall_requires = [\n 'Twisted>=18.9.0',\n 'cryptography>=3.4.6',\n 'cssselect>=0.9.1',\n 'itemloaders>=1.0.1',\n 'parsel>=1.5.0',\n 'pyOpenSSL>=21.0.0',\n 'queuelib>=1.4.2',\n 'service_identity>=18.1.0',\n 'w3lib>=1.17.0',\n 'zope.interface>=5.1.0',\n 'protego>=0.1.15',\n 'itemadapter>=0.1.0',\n 'setuptools',\n 'packaging',\n 'tldextract',\n 'lxml>=4.3.0',\n]\nextras_require = {}\ncpython_dependencies = [\n 'PyDispatcher>=2.0.5',\n]\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"CPython\"'] = cpython_dependencies\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n 'PyPyDispatcher>=2.1.0',\n ]\nelse:\n install_requires.extend(cpython_dependencies)\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n project_urls={\n 'Documentation': 'https://docs.scrapy.org/',\n 'Source': 'https://github.com/scrapy/scrapy',\n 'Tracker': 'https://github.com/scrapy/scrapy/issues',\n },\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst', encoding=\"utf-8\").read(),\n author='Scrapy developers',\n author_email='[email protected]',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: 3.11',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.7',\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}]}
| 1,421 | 105 |
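Editor's note on the record above (the scrapy `setup.py` pin): the fix raises the cryptography floor from `>=3.3` to `>=3.4.6` so pip can no longer select the 3.3 release whose PyPy wheel trips the undefined `FIPS_mode` symbol. A minimal, hypothetical sketch — using the third-party `packaging` library, not anything from the record — showing how the old and new specifiers treat that release:

```python
# Illustrative sketch only: compare the old and new version floors from the diff above.
# Assumes the third-party 'packaging' library is installed.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old_floor = SpecifierSet(">=3.3")     # requirement before the patch
new_floor = SpecifierSet(">=3.4.6")   # requirement after the patch

broken = Version("3.3")               # release whose PyPy wheel hits the FIPS_mode error
print(broken in old_floor)            # True  -> pip could still pick 3.3
print(broken in new_floor)            # False -> 3.3 is excluded after the bump
```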
gh_patches_debug_28254
|
rasdani/github-patches
|
git_diff
|
airctic__icevision-722
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rename EfficientDet Backbones
rename EfficientDet Backbones by deleting the `efficientdet_` part from the names: no need to reference that because the model already suggests that
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/models/ross/efficientdet/backbones.py`
Content:
```
1 __all__ = [
2 "tf_efficientdet_lite0",
3 "efficientdet_d0",
4 "efficientdet_d1",
5 "efficientdet_d2",
6 "efficientdet_d3",
7 "efficientdet_d4",
8 "efficientdet_d5",
9 "efficientdet_d6",
10 "efficientdet_d7",
11 "efficientdet_d7x",
12 ]
13
14 from icevision.models.ross.efficientdet.utils import *
15
16
17 tf_efficientdet_lite0 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite0")
18
19 efficientdet_d0 = EfficientDetBackboneConfig(model_name="efficientdet_d0")
20
21 efficientdet_d1 = EfficientDetBackboneConfig(model_name="efficientdet_d1")
22
23 efficientdet_d2 = EfficientDetBackboneConfig(model_name="efficientdet_d2")
24
25 efficientdet_d3 = EfficientDetBackboneConfig(model_name="efficientdet_d3")
26
27 efficientdet_d4 = EfficientDetBackboneConfig(model_name="efficientdet_d4")
28
29 efficientdet_d5 = EfficientDetBackboneConfig(model_name="efficientdet_d5")
30
31 efficientdet_d6 = EfficientDetBackboneConfig(model_name="efficientdet_d6")
32
33 efficientdet_d7 = EfficientDetBackboneConfig(model_name="efficientdet_d7")
34
35 efficientdet_d7x = EfficientDetBackboneConfig(model_name="efficientdet_d7x")
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/icevision/models/ross/efficientdet/backbones.py b/icevision/models/ross/efficientdet/backbones.py
--- a/icevision/models/ross/efficientdet/backbones.py
+++ b/icevision/models/ross/efficientdet/backbones.py
@@ -1,35 +1,35 @@
__all__ = [
- "tf_efficientdet_lite0",
- "efficientdet_d0",
- "efficientdet_d1",
- "efficientdet_d2",
- "efficientdet_d3",
- "efficientdet_d4",
- "efficientdet_d5",
- "efficientdet_d6",
- "efficientdet_d7",
- "efficientdet_d7x",
+ "tf_lite0",
+ "d0",
+ "d1",
+ "d2",
+ "d3",
+ "d4",
+ "d5",
+ "d6",
+ "d7",
+ "d7x",
]
from icevision.models.ross.efficientdet.utils import *
-tf_efficientdet_lite0 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite0")
+tf_lite0 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite0")
-efficientdet_d0 = EfficientDetBackboneConfig(model_name="efficientdet_d0")
+d0 = EfficientDetBackboneConfig(model_name="efficientdet_d0")
-efficientdet_d1 = EfficientDetBackboneConfig(model_name="efficientdet_d1")
+d1 = EfficientDetBackboneConfig(model_name="efficientdet_d1")
-efficientdet_d2 = EfficientDetBackboneConfig(model_name="efficientdet_d2")
+d2 = EfficientDetBackboneConfig(model_name="efficientdet_d2")
-efficientdet_d3 = EfficientDetBackboneConfig(model_name="efficientdet_d3")
+d3 = EfficientDetBackboneConfig(model_name="efficientdet_d3")
-efficientdet_d4 = EfficientDetBackboneConfig(model_name="efficientdet_d4")
+d4 = EfficientDetBackboneConfig(model_name="efficientdet_d4")
-efficientdet_d5 = EfficientDetBackboneConfig(model_name="efficientdet_d5")
+d5 = EfficientDetBackboneConfig(model_name="efficientdet_d5")
-efficientdet_d6 = EfficientDetBackboneConfig(model_name="efficientdet_d6")
+d6 = EfficientDetBackboneConfig(model_name="efficientdet_d6")
-efficientdet_d7 = EfficientDetBackboneConfig(model_name="efficientdet_d7")
+d7 = EfficientDetBackboneConfig(model_name="efficientdet_d7")
-efficientdet_d7x = EfficientDetBackboneConfig(model_name="efficientdet_d7x")
+d7x = EfficientDetBackboneConfig(model_name="efficientdet_d7x")
|
{"golden_diff": "diff --git a/icevision/models/ross/efficientdet/backbones.py b/icevision/models/ross/efficientdet/backbones.py\n--- a/icevision/models/ross/efficientdet/backbones.py\n+++ b/icevision/models/ross/efficientdet/backbones.py\n@@ -1,35 +1,35 @@\n __all__ = [\n- \"tf_efficientdet_lite0\",\n- \"efficientdet_d0\",\n- \"efficientdet_d1\",\n- \"efficientdet_d2\",\n- \"efficientdet_d3\",\n- \"efficientdet_d4\",\n- \"efficientdet_d5\",\n- \"efficientdet_d6\",\n- \"efficientdet_d7\",\n- \"efficientdet_d7x\",\n+ \"tf_lite0\",\n+ \"d0\",\n+ \"d1\",\n+ \"d2\",\n+ \"d3\",\n+ \"d4\",\n+ \"d5\",\n+ \"d6\",\n+ \"d7\",\n+ \"d7x\",\n ]\n \n from icevision.models.ross.efficientdet.utils import *\n \n \n-tf_efficientdet_lite0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite0\")\n+tf_lite0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite0\")\n \n-efficientdet_d0 = EfficientDetBackboneConfig(model_name=\"efficientdet_d0\")\n+d0 = EfficientDetBackboneConfig(model_name=\"efficientdet_d0\")\n \n-efficientdet_d1 = EfficientDetBackboneConfig(model_name=\"efficientdet_d1\")\n+d1 = EfficientDetBackboneConfig(model_name=\"efficientdet_d1\")\n \n-efficientdet_d2 = EfficientDetBackboneConfig(model_name=\"efficientdet_d2\")\n+d2 = EfficientDetBackboneConfig(model_name=\"efficientdet_d2\")\n \n-efficientdet_d3 = EfficientDetBackboneConfig(model_name=\"efficientdet_d3\")\n+d3 = EfficientDetBackboneConfig(model_name=\"efficientdet_d3\")\n \n-efficientdet_d4 = EfficientDetBackboneConfig(model_name=\"efficientdet_d4\")\n+d4 = EfficientDetBackboneConfig(model_name=\"efficientdet_d4\")\n \n-efficientdet_d5 = EfficientDetBackboneConfig(model_name=\"efficientdet_d5\")\n+d5 = EfficientDetBackboneConfig(model_name=\"efficientdet_d5\")\n \n-efficientdet_d6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\n+d6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\n \n-efficientdet_d7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\n+d7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\n \n-efficientdet_d7x = EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n+d7x = EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n", "issue": "Rename EfficientDet Backbones\nrename EfficientDet Backbones by deleting the `efficientdet_` part from the names: no need to reference that because the model already suggests that\r\n\n", "before_files": [{"content": "__all__ = [\n \"tf_efficientdet_lite0\",\n \"efficientdet_d0\",\n \"efficientdet_d1\",\n \"efficientdet_d2\",\n \"efficientdet_d3\",\n \"efficientdet_d4\",\n \"efficientdet_d5\",\n \"efficientdet_d6\",\n \"efficientdet_d7\",\n \"efficientdet_d7x\",\n]\n\nfrom icevision.models.ross.efficientdet.utils import *\n\n\ntf_efficientdet_lite0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite0\")\n\nefficientdet_d0 = EfficientDetBackboneConfig(model_name=\"efficientdet_d0\")\n\nefficientdet_d1 = EfficientDetBackboneConfig(model_name=\"efficientdet_d1\")\n\nefficientdet_d2 = EfficientDetBackboneConfig(model_name=\"efficientdet_d2\")\n\nefficientdet_d3 = EfficientDetBackboneConfig(model_name=\"efficientdet_d3\")\n\nefficientdet_d4 = EfficientDetBackboneConfig(model_name=\"efficientdet_d4\")\n\nefficientdet_d5 = EfficientDetBackboneConfig(model_name=\"efficientdet_d5\")\n\nefficientdet_d6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\n\nefficientdet_d7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\n\nefficientdet_d7x = 
EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n", "path": "icevision/models/ross/efficientdet/backbones.py"}], "after_files": [{"content": "__all__ = [\n \"tf_lite0\",\n \"d0\",\n \"d1\",\n \"d2\",\n \"d3\",\n \"d4\",\n \"d5\",\n \"d6\",\n \"d7\",\n \"d7x\",\n]\n\nfrom icevision.models.ross.efficientdet.utils import *\n\n\ntf_lite0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite0\")\n\nd0 = EfficientDetBackboneConfig(model_name=\"efficientdet_d0\")\n\nd1 = EfficientDetBackboneConfig(model_name=\"efficientdet_d1\")\n\nd2 = EfficientDetBackboneConfig(model_name=\"efficientdet_d2\")\n\nd3 = EfficientDetBackboneConfig(model_name=\"efficientdet_d3\")\n\nd4 = EfficientDetBackboneConfig(model_name=\"efficientdet_d4\")\n\nd5 = EfficientDetBackboneConfig(model_name=\"efficientdet_d5\")\n\nd6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\n\nd7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\n\nd7x = EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n", "path": "icevision/models/ross/efficientdet/backbones.py"}]}
| 655 | 600 |
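Editor's note on the icevision record above: the golden diff simply renames the module-level backbone configs (`efficientdet_d0` → `d0`, and so on), which silently breaks imports of the old names. A self-contained, hypothetical sketch — not part of the project — of how a PEP 562 module-level `__getattr__` could keep the old names importable with a deprecation warning:

```python
# backbones.py — hypothetical, self-contained sketch (not project code) of keeping the
# removed "efficientdet_*" names importable via a PEP 562 module-level __getattr__.
import warnings

# Stand-ins for the real EfficientDetBackboneConfig instances.
d0 = "efficientdet_d0"
tf_lite0 = "tf_efficientdet_lite0"

_OLD_TO_NEW = {"efficientdet_d0": "d0", "tf_efficientdet_lite0": "tf_lite0"}


def __getattr__(name):
    # Called only for attribute lookups that miss the module globals (PEP 562).
    if name in _OLD_TO_NEW:
        new = _OLD_TO_NEW[name]
        warnings.warn(f"{name} is deprecated; use {new}", DeprecationWarning, stacklevel=2)
        return globals()[new]
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

The record's diff opts for a clean break instead, which is simpler but requires downstream code to update its imports.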
gh_patches_debug_32768
|
rasdani/github-patches
|
git_diff
|
cisagov__manage.get.gov-972
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fixtures loads 8 times on docker-compose up
### Current Behavior
When starting up the application, the console log shows that the fixtures file is executed 8 times in a row. However, current logic on the fixtures file is preventing duplicate entries in the database.
### Expected Behavior
Data should just be loaded into the db once when using fixtures.
### Steps to Reproduce
1. locally run docker-compose up
2. observe the logs
3. see that the logs for loading domain application, users, etc are repeated 8 times each. Also note the warning messages that show an attempt was made to enter data that already exists in the db.
### Environment
local & on sandbox (see the deploy logs)
### Additional Context
see this [slack thread](https://cisa-corp.slack.com/archives/C05BGB4L5NF/p1687988129781299)
### Issue Links
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/management/commands/load.py`
Content:
```
1 import logging
2
3 from django.core.management.base import BaseCommand
4 from auditlog.context import disable_auditlog # type: ignore
5
6 from registrar.fixtures import UserFixture, DomainApplicationFixture, DomainFixture
7
8 logger = logging.getLogger(__name__)
9
10
11 class Command(BaseCommand):
12 def handle(self, *args, **options):
13 # django-auditlog has some bugs with fixtures
14 # https://github.com/jazzband/django-auditlog/issues/17
15 with disable_auditlog():
16 UserFixture.load()
17 DomainApplicationFixture.load()
18 DomainFixture.load()
19 logger.info("All fixtures loaded.")
20
```
Path: `src/registrar/signals.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.core.management import call_command
5 from django.db.models.signals import post_save, post_migrate
6 from django.dispatch import receiver
7
8 from .models import User, Contact
9
10
11 logger = logging.getLogger(__name__)
12
13
14 @receiver(post_save, sender=User)
15 def handle_profile(sender, instance, **kwargs):
16 """Method for when a User is saved.
17
18 A first time registrant may have been invited, so we'll search for a matching
19 Contact record, by email address, and associate them, if possible.
20
21 A first time registrant may not have a matching Contact, so we'll create one,
22 copying the contact values we received from Login.gov in order to initialize it.
23
24 During subsequent login, a User record may be updated with new data from Login.gov,
25 but in no case will we update contact values on an existing Contact record.
26 """
27
28 first_name = getattr(instance, "first_name", "")
29 last_name = getattr(instance, "last_name", "")
30 email = getattr(instance, "email", "")
31 phone = getattr(instance, "phone", "")
32
33 is_new_user = kwargs.get("created", False)
34
35 if is_new_user:
36 contacts = Contact.objects.filter(email=email)
37 else:
38 contacts = Contact.objects.filter(user=instance)
39
40 if len(contacts) == 0: # no matching contact
41 Contact.objects.create(
42 user=instance,
43 first_name=first_name,
44 last_name=last_name,
45 email=email,
46 phone=phone,
47 )
48
49 if len(contacts) >= 1 and is_new_user: # a matching contact
50 contacts[0].user = instance
51 contacts[0].save()
52
53 if len(contacts) > 1: # multiple matches
54 logger.warning(
55 "There are multiple Contacts with the same email address."
56 f" Picking #{contacts[0].id} for User #{instance.id}."
57 )
58
59
60 @receiver(post_migrate)
61 def handle_loaddata(**kwargs):
62 """Attempt to load test fixtures when in DEBUG mode."""
63 if settings.DEBUG:
64 try:
65 call_command("load")
66 except Exception as e:
67 logger.warning(e)
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/registrar/management/commands/load.py b/src/registrar/management/commands/load.py
--- a/src/registrar/management/commands/load.py
+++ b/src/registrar/management/commands/load.py
@@ -2,6 +2,7 @@
from django.core.management.base import BaseCommand
from auditlog.context import disable_auditlog # type: ignore
+from django.conf import settings
from registrar.fixtures import UserFixture, DomainApplicationFixture, DomainFixture
@@ -12,8 +13,11 @@
def handle(self, *args, **options):
# django-auditlog has some bugs with fixtures
# https://github.com/jazzband/django-auditlog/issues/17
- with disable_auditlog():
- UserFixture.load()
- DomainApplicationFixture.load()
- DomainFixture.load()
- logger.info("All fixtures loaded.")
+ if settings.DEBUG:
+ with disable_auditlog():
+ UserFixture.load()
+ DomainApplicationFixture.load()
+ DomainFixture.load()
+ logger.info("All fixtures loaded.")
+ else:
+ logger.warn("Refusing to load fixture data in a non DEBUG env")
diff --git a/src/registrar/signals.py b/src/registrar/signals.py
--- a/src/registrar/signals.py
+++ b/src/registrar/signals.py
@@ -1,8 +1,6 @@
import logging
-from django.conf import settings
-from django.core.management import call_command
-from django.db.models.signals import post_save, post_migrate
+from django.db.models.signals import post_save
from django.dispatch import receiver
from .models import User, Contact
@@ -55,13 +53,3 @@
"There are multiple Contacts with the same email address."
f" Picking #{contacts[0].id} for User #{instance.id}."
)
-
-
-@receiver(post_migrate)
-def handle_loaddata(**kwargs):
- """Attempt to load test fixtures when in DEBUG mode."""
- if settings.DEBUG:
- try:
- call_command("load")
- except Exception as e:
- logger.warning(e)
|
{"golden_diff": "diff --git a/src/registrar/management/commands/load.py b/src/registrar/management/commands/load.py\n--- a/src/registrar/management/commands/load.py\n+++ b/src/registrar/management/commands/load.py\n@@ -2,6 +2,7 @@\n \n from django.core.management.base import BaseCommand\n from auditlog.context import disable_auditlog # type: ignore\n+from django.conf import settings\n \n from registrar.fixtures import UserFixture, DomainApplicationFixture, DomainFixture\n \n@@ -12,8 +13,11 @@\n def handle(self, *args, **options):\n # django-auditlog has some bugs with fixtures\n # https://github.com/jazzband/django-auditlog/issues/17\n- with disable_auditlog():\n- UserFixture.load()\n- DomainApplicationFixture.load()\n- DomainFixture.load()\n- logger.info(\"All fixtures loaded.\")\n+ if settings.DEBUG:\n+ with disable_auditlog():\n+ UserFixture.load()\n+ DomainApplicationFixture.load()\n+ DomainFixture.load()\n+ logger.info(\"All fixtures loaded.\")\n+ else:\n+ logger.warn(\"Refusing to load fixture data in a non DEBUG env\")\ndiff --git a/src/registrar/signals.py b/src/registrar/signals.py\n--- a/src/registrar/signals.py\n+++ b/src/registrar/signals.py\n@@ -1,8 +1,6 @@\n import logging\n \n-from django.conf import settings\n-from django.core.management import call_command\n-from django.db.models.signals import post_save, post_migrate\n+from django.db.models.signals import post_save\n from django.dispatch import receiver\n \n from .models import User, Contact\n@@ -55,13 +53,3 @@\n \"There are multiple Contacts with the same email address.\"\n f\" Picking #{contacts[0].id} for User #{instance.id}.\"\n )\n-\n-\n-@receiver(post_migrate)\n-def handle_loaddata(**kwargs):\n- \"\"\"Attempt to load test fixtures when in DEBUG mode.\"\"\"\n- if settings.DEBUG:\n- try:\n- call_command(\"load\")\n- except Exception as e:\n- logger.warning(e)\n", "issue": "Fixtures loads 8 times on docker-compose up\n### Current Behavior\n\nWhen starting up the application, the console log shows that the fixtures file is executed 8 times in a row. However, current logic on the fixtures file is preventing duplicate entries in the database.\n\n### Expected Behavior\n\nData should just be loaded into the db once when using fixtures.\n\n### Steps to Reproduce\n\n1. locally run docker-compose up\r\n2. observe the logs\r\n3. see that the logs for loading domain application, users, etc are repeated 8 times each. 
Also note the warning messages that show an attempt was made to enter data that already exists in the db.\r\n\n\n### Environment\n\nlocal & on sandbox (see the deploy logs)\n\n### Additional Context\n\nsee this [slack thread](https://cisa-corp.slack.com/archives/C05BGB4L5NF/p1687988129781299)\n\n### Issue Links\n\n_No response_\n", "before_files": [{"content": "import logging\n\nfrom django.core.management.base import BaseCommand\nfrom auditlog.context import disable_auditlog # type: ignore\n\nfrom registrar.fixtures import UserFixture, DomainApplicationFixture, DomainFixture\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **options):\n # django-auditlog has some bugs with fixtures\n # https://github.com/jazzband/django-auditlog/issues/17\n with disable_auditlog():\n UserFixture.load()\n DomainApplicationFixture.load()\n DomainFixture.load()\n logger.info(\"All fixtures loaded.\")\n", "path": "src/registrar/management/commands/load.py"}, {"content": "import logging\n\nfrom django.conf import settings\nfrom django.core.management import call_command\nfrom django.db.models.signals import post_save, post_migrate\nfrom django.dispatch import receiver\n\nfrom .models import User, Contact\n\n\nlogger = logging.getLogger(__name__)\n\n\n@receiver(post_save, sender=User)\ndef handle_profile(sender, instance, **kwargs):\n \"\"\"Method for when a User is saved.\n\n A first time registrant may have been invited, so we'll search for a matching\n Contact record, by email address, and associate them, if possible.\n\n A first time registrant may not have a matching Contact, so we'll create one,\n copying the contact values we received from Login.gov in order to initialize it.\n\n During subsequent login, a User record may be updated with new data from Login.gov,\n but in no case will we update contact values on an existing Contact record.\n \"\"\"\n\n first_name = getattr(instance, \"first_name\", \"\")\n last_name = getattr(instance, \"last_name\", \"\")\n email = getattr(instance, \"email\", \"\")\n phone = getattr(instance, \"phone\", \"\")\n\n is_new_user = kwargs.get(\"created\", False)\n\n if is_new_user:\n contacts = Contact.objects.filter(email=email)\n else:\n contacts = Contact.objects.filter(user=instance)\n\n if len(contacts) == 0: # no matching contact\n Contact.objects.create(\n user=instance,\n first_name=first_name,\n last_name=last_name,\n email=email,\n phone=phone,\n )\n\n if len(contacts) >= 1 and is_new_user: # a matching contact\n contacts[0].user = instance\n contacts[0].save()\n\n if len(contacts) > 1: # multiple matches\n logger.warning(\n \"There are multiple Contacts with the same email address.\"\n f\" Picking #{contacts[0].id} for User #{instance.id}.\"\n )\n\n\n@receiver(post_migrate)\ndef handle_loaddata(**kwargs):\n \"\"\"Attempt to load test fixtures when in DEBUG mode.\"\"\"\n if settings.DEBUG:\n try:\n call_command(\"load\")\n except Exception as e:\n logger.warning(e)\n", "path": "src/registrar/signals.py"}], "after_files": [{"content": "import logging\n\nfrom django.core.management.base import BaseCommand\nfrom auditlog.context import disable_auditlog # type: ignore\nfrom django.conf import settings\n\nfrom registrar.fixtures import UserFixture, DomainApplicationFixture, DomainFixture\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **options):\n # django-auditlog has some bugs with fixtures\n # https://github.com/jazzband/django-auditlog/issues/17\n if 
settings.DEBUG:\n with disable_auditlog():\n UserFixture.load()\n DomainApplicationFixture.load()\n DomainFixture.load()\n logger.info(\"All fixtures loaded.\")\n else:\n logger.warn(\"Refusing to load fixture data in a non DEBUG env\")\n", "path": "src/registrar/management/commands/load.py"}, {"content": "import logging\n\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom .models import User, Contact\n\n\nlogger = logging.getLogger(__name__)\n\n\n@receiver(post_save, sender=User)\ndef handle_profile(sender, instance, **kwargs):\n \"\"\"Method for when a User is saved.\n\n A first time registrant may have been invited, so we'll search for a matching\n Contact record, by email address, and associate them, if possible.\n\n A first time registrant may not have a matching Contact, so we'll create one,\n copying the contact values we received from Login.gov in order to initialize it.\n\n During subsequent login, a User record may be updated with new data from Login.gov,\n but in no case will we update contact values on an existing Contact record.\n \"\"\"\n\n first_name = getattr(instance, \"first_name\", \"\")\n last_name = getattr(instance, \"last_name\", \"\")\n email = getattr(instance, \"email\", \"\")\n phone = getattr(instance, \"phone\", \"\")\n\n is_new_user = kwargs.get(\"created\", False)\n\n if is_new_user:\n contacts = Contact.objects.filter(email=email)\n else:\n contacts = Contact.objects.filter(user=instance)\n\n if len(contacts) == 0: # no matching contact\n Contact.objects.create(\n user=instance,\n first_name=first_name,\n last_name=last_name,\n email=email,\n phone=phone,\n )\n\n if len(contacts) >= 1 and is_new_user: # a matching contact\n contacts[0].user = instance\n contacts[0].save()\n\n if len(contacts) > 1: # multiple matches\n logger.warning(\n \"There are multiple Contacts with the same email address.\"\n f\" Picking #{contacts[0].id} for User #{instance.id}.\"\n )\n", "path": "src/registrar/signals.py"}]}
| 1,245 | 461 |
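Editor's note on the manage.get.gov record above: Django emits `post_migrate` once per installed app, so a receiver that calls the `load` command ends up running it many times — hence the eight repeated fixture loads. The diff removes that receiver and gates the command on `settings.DEBUG`. A minimal sketch of the resulting pattern (names assumed, fixture calls elided):

```python
# Sketch only: fixture loading confined to an explicit management command,
# guarded by DEBUG, with no post_migrate receiver involved.
from django.conf import settings
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Load development fixtures on demand."

    def handle(self, *args, **options):
        if not settings.DEBUG:
            self.stderr.write("Refusing to load fixture data in a non-DEBUG environment")
            return
        # ...call the project's fixture loaders here (elided)...
        self.stdout.write("All fixtures loaded.")
```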
gh_patches_debug_12464
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1700
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rbenv default ruby issue on install
I am trying to install MDL v0.11.0
i get the following error during install
```
pre-commit
[INFO] Installing environment for git://github.com/markdownlint/markdownlint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/usr/local/bin/bash', '/Users/abuxton/.rbenv/shims/gem', 'build', 'mdl.gemspec')
return code: 1
expected return code: 0
stdout: (none)
stderr:
rbenv: version `default' is not installed (set by RBENV_VERSION environment variable)
```
seems to be the same issue as https://github.com/pre-commit/pre-commit/issues/369 on a previous version of pre-commit
pre-commit.log https://gist.github.com/abuxton/d29e6265f3230f838a746d7a5defd5e3
i have tried, ensuring local rbenv, I have tried pre-commit clean/uninstall and reinstall to no success, mdl will `brew install mdl` successfully and works fine.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/ruby.py`
Content:
```
1 import contextlib
2 import functools
3 import os.path
4 import shutil
5 import tarfile
6 from typing import Generator
7 from typing import Sequence
8 from typing import Tuple
9
10 import pre_commit.constants as C
11 from pre_commit.envcontext import envcontext
12 from pre_commit.envcontext import PatchesT
13 from pre_commit.envcontext import UNSET
14 from pre_commit.envcontext import Var
15 from pre_commit.hook import Hook
16 from pre_commit.languages import helpers
17 from pre_commit.prefix import Prefix
18 from pre_commit.util import CalledProcessError
19 from pre_commit.util import clean_path_on_failure
20 from pre_commit.util import resource_bytesio
21
22 ENVIRONMENT_DIR = 'rbenv'
23 healthy = helpers.basic_healthy
24
25
26 @functools.lru_cache(maxsize=1)
27 def get_default_version() -> str:
28 if all(helpers.exe_exists(exe) for exe in ('ruby', 'gem')):
29 return 'system'
30 else:
31 return C.DEFAULT
32
33
34 def get_env_patch(
35 venv: str,
36 language_version: str,
37 ) -> PatchesT:
38 patches: PatchesT = (
39 ('GEM_HOME', os.path.join(venv, 'gems')),
40 ('GEM_PATH', UNSET),
41 ('BUNDLE_IGNORE_CONFIG', '1'),
42 )
43 if language_version == 'system':
44 patches += (
45 (
46 'PATH', (
47 os.path.join(venv, 'gems', 'bin'), os.pathsep,
48 Var('PATH'),
49 ),
50 ),
51 )
52 else: # pragma: win32 no cover
53 patches += (
54 ('RBENV_ROOT', venv),
55 ('RBENV_VERSION', language_version),
56 (
57 'PATH', (
58 os.path.join(venv, 'gems', 'bin'), os.pathsep,
59 os.path.join(venv, 'shims'), os.pathsep,
60 os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),
61 ),
62 ),
63 )
64 return patches
65
66
67 @contextlib.contextmanager
68 def in_env(
69 prefix: Prefix,
70 language_version: str,
71 ) -> Generator[None, None, None]:
72 envdir = prefix.path(
73 helpers.environment_dir(ENVIRONMENT_DIR, language_version),
74 )
75 with envcontext(get_env_patch(envdir, language_version)):
76 yield
77
78
79 def _extract_resource(filename: str, dest: str) -> None:
80 with resource_bytesio(filename) as bio:
81 with tarfile.open(fileobj=bio) as tf:
82 tf.extractall(dest)
83
84
85 def _install_rbenv(
86 prefix: Prefix,
87 version: str,
88 ) -> None: # pragma: win32 no cover
89 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
90
91 _extract_resource('rbenv.tar.gz', prefix.path('.'))
92 shutil.move(prefix.path('rbenv'), prefix.path(directory))
93
94 # Only install ruby-build if the version is specified
95 if version != C.DEFAULT:
96 plugins_dir = prefix.path(directory, 'plugins')
97 _extract_resource('ruby-download.tar.gz', plugins_dir)
98 _extract_resource('ruby-build.tar.gz', plugins_dir)
99
100
101 def _install_ruby(
102 prefix: Prefix,
103 version: str,
104 ) -> None: # pragma: win32 no cover
105 try:
106 helpers.run_setup_cmd(prefix, ('rbenv', 'download', version))
107 except CalledProcessError: # pragma: no cover (usually find with download)
108 # Failed to download from mirror for some reason, build it instead
109 helpers.run_setup_cmd(prefix, ('rbenv', 'install', version))
110
111
112 def install_environment(
113 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
114 ) -> None:
115 additional_dependencies = tuple(additional_dependencies)
116 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
117 with clean_path_on_failure(prefix.path(directory)):
118 if version != 'system': # pragma: win32 no cover
119 _install_rbenv(prefix, version)
120 with in_env(prefix, version):
121 # Need to call this before installing so rbenv's directories
122 # are set up
123 helpers.run_setup_cmd(prefix, ('rbenv', 'init', '-'))
124 if version != C.DEFAULT:
125 _install_ruby(prefix, version)
126 # Need to call this after installing to set up the shims
127 helpers.run_setup_cmd(prefix, ('rbenv', 'rehash'))
128
129 with in_env(prefix, version):
130 helpers.run_setup_cmd(
131 prefix, ('gem', 'build', *prefix.star('.gemspec')),
132 )
133 helpers.run_setup_cmd(
134 prefix,
135 (
136 'gem', 'install',
137 '--no-document', '--no-format-executable',
138 *prefix.star('.gem'), *additional_dependencies,
139 ),
140 )
141
142
143 def run_hook(
144 hook: Hook,
145 file_args: Sequence[str],
146 color: bool,
147 ) -> Tuple[int, bytes]:
148 with in_env(hook.prefix, hook.language_version):
149 return helpers.run_xargs(hook, hook.cmd, file_args, color=color)
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py
--- a/pre_commit/languages/ruby.py
+++ b/pre_commit/languages/ruby.py
@@ -52,7 +52,6 @@
else: # pragma: win32 no cover
patches += (
('RBENV_ROOT', venv),
- ('RBENV_VERSION', language_version),
(
'PATH', (
os.path.join(venv, 'gems', 'bin'), os.pathsep,
@@ -61,6 +60,9 @@
),
),
)
+ if language_version not in {'system', 'default'}: # pragma: win32 no cover
+ patches += (('RBENV_VERSION', language_version),)
+
return patches
|
{"golden_diff": "diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py\n--- a/pre_commit/languages/ruby.py\n+++ b/pre_commit/languages/ruby.py\n@@ -52,7 +52,6 @@\n else: # pragma: win32 no cover\n patches += (\n ('RBENV_ROOT', venv),\n- ('RBENV_VERSION', language_version),\n (\n 'PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n@@ -61,6 +60,9 @@\n ),\n ),\n )\n+ if language_version not in {'system', 'default'}: # pragma: win32 no cover\n+ patches += (('RBENV_VERSION', language_version),)\n+\n return patches\n", "issue": "Rbenv default ruby issue on install\nI am trying to install MDL v0.11.0\r\n\r\ni get the following error during install \r\n```\r\npre-commit\r\n[INFO] Installing environment for git://github.com/markdownlint/markdownlint.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\nAn unexpected error has occurred: CalledProcessError: command: ('/usr/local/bin/bash', '/Users/abuxton/.rbenv/shims/gem', 'build', 'mdl.gemspec')\r\nreturn code: 1\r\nexpected return code: 0\r\nstdout: (none)\r\nstderr:\r\n rbenv: version `default' is not installed (set by RBENV_VERSION environment variable)\r\n```\r\n\r\nseems to be the same issue as https://github.com/pre-commit/pre-commit/issues/369 on a previous version of pre-commit\r\npre-commit.log https://gist.github.com/abuxton/d29e6265f3230f838a746d7a5defd5e3 \r\n\r\ni have tried, ensuring local rbenv, I have tried pre-commit clean/uninstall and reinstall to no success, mdl will `brew install mdl` successfully and works fine. \r\n\n", "before_files": [{"content": "import contextlib\nimport functools\nimport os.path\nimport shutil\nimport tarfile\nfrom typing import Generator\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import resource_bytesio\n\nENVIRONMENT_DIR = 'rbenv'\nhealthy = helpers.basic_healthy\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str:\n if all(helpers.exe_exists(exe) for exe in ('ruby', 'gem')):\n return 'system'\n else:\n return C.DEFAULT\n\n\ndef get_env_patch(\n venv: str,\n language_version: str,\n) -> PatchesT:\n patches: PatchesT = (\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('GEM_PATH', UNSET),\n ('BUNDLE_IGNORE_CONFIG', '1'),\n )\n if language_version == 'system':\n patches += (\n (\n 'PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n Var('PATH'),\n ),\n ),\n )\n else: # pragma: win32 no cover\n patches += (\n ('RBENV_ROOT', venv),\n ('RBENV_VERSION', language_version),\n (\n 'PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n os.path.join(venv, 'shims'), os.pathsep,\n os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),\n ),\n ),\n )\n return patches\n\n\[email protected]\ndef in_env(\n prefix: Prefix,\n language_version: str,\n) -> Generator[None, None, None]:\n envdir = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir, language_version)):\n yield\n\n\ndef _extract_resource(filename: str, dest: str) -> None:\n with resource_bytesio(filename) as bio:\n with tarfile.open(fileobj=bio) 
as tf:\n tf.extractall(dest)\n\n\ndef _install_rbenv(\n prefix: Prefix,\n version: str,\n) -> None: # pragma: win32 no cover\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n _extract_resource('rbenv.tar.gz', prefix.path('.'))\n shutil.move(prefix.path('rbenv'), prefix.path(directory))\n\n # Only install ruby-build if the version is specified\n if version != C.DEFAULT:\n plugins_dir = prefix.path(directory, 'plugins')\n _extract_resource('ruby-download.tar.gz', plugins_dir)\n _extract_resource('ruby-build.tar.gz', plugins_dir)\n\n\ndef _install_ruby(\n prefix: Prefix,\n version: str,\n) -> None: # pragma: win32 no cover\n try:\n helpers.run_setup_cmd(prefix, ('rbenv', 'download', version))\n except CalledProcessError: # pragma: no cover (usually find with download)\n # Failed to download from mirror for some reason, build it instead\n helpers.run_setup_cmd(prefix, ('rbenv', 'install', version))\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None:\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n with clean_path_on_failure(prefix.path(directory)):\n if version != 'system': # pragma: win32 no cover\n _install_rbenv(prefix, version)\n with in_env(prefix, version):\n # Need to call this before installing so rbenv's directories\n # are set up\n helpers.run_setup_cmd(prefix, ('rbenv', 'init', '-'))\n if version != C.DEFAULT:\n _install_ruby(prefix, version)\n # Need to call this after installing to set up the shims\n helpers.run_setup_cmd(prefix, ('rbenv', 'rehash'))\n\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix, ('gem', 'build', *prefix.star('.gemspec')),\n )\n helpers.run_setup_cmd(\n prefix,\n (\n 'gem', 'install',\n '--no-document', '--no-format-executable',\n *prefix.star('.gem'), *additional_dependencies,\n ),\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]:\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n", "path": "pre_commit/languages/ruby.py"}], "after_files": [{"content": "import contextlib\nimport functools\nimport os.path\nimport shutil\nimport tarfile\nfrom typing import Generator\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import resource_bytesio\n\nENVIRONMENT_DIR = 'rbenv'\nhealthy = helpers.basic_healthy\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str:\n if all(helpers.exe_exists(exe) for exe in ('ruby', 'gem')):\n return 'system'\n else:\n return C.DEFAULT\n\n\ndef get_env_patch(\n venv: str,\n language_version: str,\n) -> PatchesT:\n patches: PatchesT = (\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('GEM_PATH', UNSET),\n ('BUNDLE_IGNORE_CONFIG', '1'),\n )\n if language_version == 'system':\n patches += (\n (\n 'PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n Var('PATH'),\n ),\n ),\n )\n else: # pragma: win32 no cover\n patches += (\n ('RBENV_ROOT', venv),\n (\n 'PATH', (\n os.path.join(venv, 
'gems', 'bin'), os.pathsep,\n os.path.join(venv, 'shims'), os.pathsep,\n os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),\n ),\n ),\n )\n if language_version not in {'system', 'default'}: # pragma: win32 no cover\n patches += (('RBENV_VERSION', language_version),)\n\n return patches\n\n\[email protected]\ndef in_env(\n prefix: Prefix,\n language_version: str,\n) -> Generator[None, None, None]:\n envdir = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir, language_version)):\n yield\n\n\ndef _extract_resource(filename: str, dest: str) -> None:\n with resource_bytesio(filename) as bio:\n with tarfile.open(fileobj=bio) as tf:\n tf.extractall(dest)\n\n\ndef _install_rbenv(\n prefix: Prefix,\n version: str,\n) -> None: # pragma: win32 no cover\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n _extract_resource('rbenv.tar.gz', prefix.path('.'))\n shutil.move(prefix.path('rbenv'), prefix.path(directory))\n\n # Only install ruby-build if the version is specified\n if version != C.DEFAULT:\n plugins_dir = prefix.path(directory, 'plugins')\n _extract_resource('ruby-download.tar.gz', plugins_dir)\n _extract_resource('ruby-build.tar.gz', plugins_dir)\n\n\ndef _install_ruby(\n prefix: Prefix,\n version: str,\n) -> None: # pragma: win32 no cover\n try:\n helpers.run_setup_cmd(prefix, ('rbenv', 'download', version))\n except CalledProcessError: # pragma: no cover (usually find with download)\n # Failed to download from mirror for some reason, build it instead\n helpers.run_setup_cmd(prefix, ('rbenv', 'install', version))\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None:\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n with clean_path_on_failure(prefix.path(directory)):\n if version != 'system': # pragma: win32 no cover\n _install_rbenv(prefix, version)\n with in_env(prefix, version):\n # Need to call this before installing so rbenv's directories\n # are set up\n helpers.run_setup_cmd(prefix, ('rbenv', 'init', '-'))\n if version != C.DEFAULT:\n _install_ruby(prefix, version)\n # Need to call this after installing to set up the shims\n helpers.run_setup_cmd(prefix, ('rbenv', 'rehash'))\n\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix, ('gem', 'build', *prefix.star('.gemspec')),\n )\n helpers.run_setup_cmd(\n prefix,\n (\n 'gem', 'install',\n '--no-document', '--no-format-executable',\n *prefix.star('.gem'), *additional_dependencies,\n ),\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]:\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n", "path": "pre_commit/languages/ruby.py"}]}
| 1,948 | 173 |
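Editor's note on the pre-commit record above: the failing case is a `default` language version, for which rbenv has no installed ruby, so exporting `RBENV_VERSION=default` breaks every shim; the diff only sets the variable for concrete versions. A standalone sketch of that guard (a simplified stand-in for `get_env_patch`, not the real helper):

```python
# Standalone sketch of the guard added by the diff: only pin RBENV_VERSION when a
# concrete ruby version was requested; 'system' and 'default' fall through untouched.
import os


def rbenv_env(venv: str, language_version: str) -> dict:
    env = {
        "RBENV_ROOT": venv,
        "PATH": os.pathsep.join([
            os.path.join(venv, "gems", "bin"),
            os.path.join(venv, "shims"),
            os.path.join(venv, "bin"),
            os.environ.get("PATH", ""),
        ]),
    }
    if language_version not in {"system", "default"}:
        env["RBENV_VERSION"] = language_version
    return env


print(rbenv_env("/tmp/rbenv", "default"))   # no RBENV_VERSION key
print(rbenv_env("/tmp/rbenv", "2.7.2"))     # RBENV_VERSION pinned
```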
gh_patches_debug_1646
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-2553
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cant delete mails through REST API endpoints
The endpoint to remove mails are fucked :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/authentication/api/views.py`
Content:
```
1 from django.contrib.auth.models import Group
2 from rest_framework import mixins, status, viewsets
3 from rest_framework.decorators import action
4 from rest_framework.permissions import AllowAny, IsAuthenticated
5 from rest_framework.response import Response
6
7 from apps.authentication.models import Email, GroupMember, GroupRole, OnlineGroup
8 from apps.authentication.models import OnlineUser as User
9 from apps.authentication.models import Position, SpecialPosition
10 from apps.authentication.serializers import (
11 AnonymizeUserSerializer,
12 EmailCreateSerializer,
13 EmailReadOnlySerializer,
14 EmailUpdateSerializer,
15 GroupMemberCreateSerializer,
16 GroupMemberReadOnlySerializer,
17 GroupMemberUpdateSerializer,
18 GroupReadOnlySerializer,
19 GroupRoleReadOnlySerializer,
20 OnlineGroupCreateOrUpdateSerializer,
21 OnlineGroupReadOnlySerializer,
22 PasswordUpdateSerializer,
23 PositionCreateAndUpdateSerializer,
24 PositionReadOnlySerializer,
25 SpecialPositionSerializer,
26 UserCreateSerializer,
27 UserReadOnlySerializer,
28 UserUpdateSerializer,
29 )
30 from apps.common.rest_framework.mixins import MultiSerializerMixin
31 from apps.permissions.drf_permissions import DjangoObjectPermissionOrAnonReadOnly
32
33 from .filters import UserFilter
34 from .permissions import IsSelfOrSuperUser
35 from .serializers.user_data import UserDataSerializer
36
37
38 class UserViewSet(
39 MultiSerializerMixin,
40 viewsets.GenericViewSet,
41 mixins.ListModelMixin,
42 mixins.RetrieveModelMixin,
43 mixins.CreateModelMixin,
44 mixins.UpdateModelMixin,
45 ):
46 """
47 Viewset for User serializer. Supports filtering on 'first_name', 'last_name', 'email'
48 """
49
50 permission_classes = (IsSelfOrSuperUser,)
51 filterset_class = UserFilter
52 queryset = User.objects.all()
53 serializer_classes = {
54 "create": UserCreateSerializer,
55 "update": UserUpdateSerializer,
56 "read": UserReadOnlySerializer,
57 "change_password": PasswordUpdateSerializer,
58 "anonymize_user": AnonymizeUserSerializer,
59 "dump_data": UserDataSerializer,
60 }
61
62 @action(detail=True, methods=["put"])
63 def change_password(self, request, pk=None):
64 user: User = self.get_object()
65 serializer = self.get_serializer(user, data=request.data)
66 serializer.is_valid(raise_exception=True)
67 serializer.save()
68
69 return Response(data=None, status=status.HTTP_204_NO_CONTENT)
70
71 @action(detail=True, methods=["put"])
72 def anonymize_user(self, request, pk=None):
73 user: User = self.get_object()
74 serializer = self.get_serializer(user, data=request.data)
75 serializer.is_valid(raise_exception=True)
76 serializer.save()
77
78 return Response(data=None, status=status.HTTP_204_NO_CONTENT)
79
80 @action(detail=True, methods=["get"], url_path="dump-data")
81 def dump_data(self, request, pk: int):
82 user: User = self.get_object()
83 serializer = self.get_serializer(user)
84 return Response(data=serializer.data, status=status.HTTP_200_OK)
85
86
87 class EmailViewSet(MultiSerializerMixin, viewsets.ModelViewSet):
88 permission_classes = (IsAuthenticated,)
89 serializer_classes = {
90 "create": EmailCreateSerializer,
91 "update": EmailUpdateSerializer,
92 "read": EmailReadOnlySerializer,
93 }
94
95 def get_queryset(self):
96 return Email.objects.filter(user=self.request.user)
97
98 def destroy(self, request, *args, **kwargs):
99 instance: Email = self.get_object()
100 if instance.primary:
101 return Response(
102 {
103 "message": "Du kan ikke slette en primær-epost. Du må først velge en annen epost som "
104 "primær for å kunne slette denne."
105 },
106 status=status.HTTP_400_BAD_REQUEST,
107 )
108
109
110 class PositionViewSet(MultiSerializerMixin, viewsets.ModelViewSet):
111 permission_classes = (IsAuthenticated,)
112 serializer_classes = {
113 "read": PositionReadOnlySerializer,
114 "write": PositionCreateAndUpdateSerializer,
115 }
116
117 def get_queryset(self):
118 user = self.request.user
119 return Position.objects.filter(user=user)
120
121
122 class SpecialPositionViewSet(viewsets.ReadOnlyModelViewSet):
123 serializer_class = SpecialPositionSerializer
124 permission_classes = (IsAuthenticated,)
125
126 def get_queryset(self):
127 user = self.request.user
128 return SpecialPosition.objects.filter(user=user)
129
130
131 class GroupViewSet(viewsets.ReadOnlyModelViewSet):
132 permission_classes = (AllowAny,)
133 queryset = Group.objects.all()
134 serializer_class = GroupReadOnlySerializer
135 ordering = ("name",)
136
137
138 class OnlineGroupViewSet(MultiSerializerMixin, viewsets.ModelViewSet):
139 permission_classes = (DjangoObjectPermissionOrAnonReadOnly,)
140 queryset = OnlineGroup.objects.all()
141 serializer_classes = {
142 "write": OnlineGroupCreateOrUpdateSerializer,
143 "read": OnlineGroupReadOnlySerializer,
144 }
145
146
147 class GroupMemberViewSet(MultiSerializerMixin, viewsets.ModelViewSet):
148 permission_classes = (DjangoObjectPermissionOrAnonReadOnly,)
149 queryset = GroupMember.objects.all()
150 serializer_classes = {
151 "create": GroupMemberCreateSerializer,
152 "update": GroupMemberUpdateSerializer,
153 "read": GroupMemberReadOnlySerializer,
154 }
155
156
157 class GroupRoleViewSet(viewsets.ReadOnlyModelViewSet):
158 permission_classes = (AllowAny,)
159 serializer_class = GroupRoleReadOnlySerializer
160 queryset = GroupRole.objects.all()
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/authentication/api/views.py b/apps/authentication/api/views.py
--- a/apps/authentication/api/views.py
+++ b/apps/authentication/api/views.py
@@ -105,6 +105,7 @@
},
status=status.HTTP_400_BAD_REQUEST,
)
+ super().destroy(request, *args, **kwargs)
class PositionViewSet(MultiSerializerMixin, viewsets.ModelViewSet):
|
{"golden_diff": "diff --git a/apps/authentication/api/views.py b/apps/authentication/api/views.py\n--- a/apps/authentication/api/views.py\n+++ b/apps/authentication/api/views.py\n@@ -105,6 +105,7 @@\n },\n status=status.HTTP_400_BAD_REQUEST,\n )\n+ super().destroy(request, *args, **kwargs)\n \n \n class PositionViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n", "issue": "Cant delete mails through REST API endpoints\nThe endpoint to remove mails are fucked :)\n", "before_files": [{"content": "from django.contrib.auth.models import Group\nfrom rest_framework import mixins, status, viewsets\nfrom rest_framework.decorators import action\nfrom rest_framework.permissions import AllowAny, IsAuthenticated\nfrom rest_framework.response import Response\n\nfrom apps.authentication.models import Email, GroupMember, GroupRole, OnlineGroup\nfrom apps.authentication.models import OnlineUser as User\nfrom apps.authentication.models import Position, SpecialPosition\nfrom apps.authentication.serializers import (\n AnonymizeUserSerializer,\n EmailCreateSerializer,\n EmailReadOnlySerializer,\n EmailUpdateSerializer,\n GroupMemberCreateSerializer,\n GroupMemberReadOnlySerializer,\n GroupMemberUpdateSerializer,\n GroupReadOnlySerializer,\n GroupRoleReadOnlySerializer,\n OnlineGroupCreateOrUpdateSerializer,\n OnlineGroupReadOnlySerializer,\n PasswordUpdateSerializer,\n PositionCreateAndUpdateSerializer,\n PositionReadOnlySerializer,\n SpecialPositionSerializer,\n UserCreateSerializer,\n UserReadOnlySerializer,\n UserUpdateSerializer,\n)\nfrom apps.common.rest_framework.mixins import MultiSerializerMixin\nfrom apps.permissions.drf_permissions import DjangoObjectPermissionOrAnonReadOnly\n\nfrom .filters import UserFilter\nfrom .permissions import IsSelfOrSuperUser\nfrom .serializers.user_data import UserDataSerializer\n\n\nclass UserViewSet(\n MultiSerializerMixin,\n viewsets.GenericViewSet,\n mixins.ListModelMixin,\n mixins.RetrieveModelMixin,\n mixins.CreateModelMixin,\n mixins.UpdateModelMixin,\n):\n \"\"\"\n Viewset for User serializer. 
Supports filtering on 'first_name', 'last_name', 'email'\n \"\"\"\n\n permission_classes = (IsSelfOrSuperUser,)\n filterset_class = UserFilter\n queryset = User.objects.all()\n serializer_classes = {\n \"create\": UserCreateSerializer,\n \"update\": UserUpdateSerializer,\n \"read\": UserReadOnlySerializer,\n \"change_password\": PasswordUpdateSerializer,\n \"anonymize_user\": AnonymizeUserSerializer,\n \"dump_data\": UserDataSerializer,\n }\n\n @action(detail=True, methods=[\"put\"])\n def change_password(self, request, pk=None):\n user: User = self.get_object()\n serializer = self.get_serializer(user, data=request.data)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n\n return Response(data=None, status=status.HTTP_204_NO_CONTENT)\n\n @action(detail=True, methods=[\"put\"])\n def anonymize_user(self, request, pk=None):\n user: User = self.get_object()\n serializer = self.get_serializer(user, data=request.data)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n\n return Response(data=None, status=status.HTTP_204_NO_CONTENT)\n\n @action(detail=True, methods=[\"get\"], url_path=\"dump-data\")\n def dump_data(self, request, pk: int):\n user: User = self.get_object()\n serializer = self.get_serializer(user)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n\nclass EmailViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n permission_classes = (IsAuthenticated,)\n serializer_classes = {\n \"create\": EmailCreateSerializer,\n \"update\": EmailUpdateSerializer,\n \"read\": EmailReadOnlySerializer,\n }\n\n def get_queryset(self):\n return Email.objects.filter(user=self.request.user)\n\n def destroy(self, request, *args, **kwargs):\n instance: Email = self.get_object()\n if instance.primary:\n return Response(\n {\n \"message\": \"Du kan ikke slette en prim\u00e6r-epost. 
Du m\u00e5 f\u00f8rst velge en annen epost som \"\n \"prim\u00e6r for \u00e5 kunne slette denne.\"\n },\n status=status.HTTP_400_BAD_REQUEST,\n )\n\n\nclass PositionViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n permission_classes = (IsAuthenticated,)\n serializer_classes = {\n \"read\": PositionReadOnlySerializer,\n \"write\": PositionCreateAndUpdateSerializer,\n }\n\n def get_queryset(self):\n user = self.request.user\n return Position.objects.filter(user=user)\n\n\nclass SpecialPositionViewSet(viewsets.ReadOnlyModelViewSet):\n serializer_class = SpecialPositionSerializer\n permission_classes = (IsAuthenticated,)\n\n def get_queryset(self):\n user = self.request.user\n return SpecialPosition.objects.filter(user=user)\n\n\nclass GroupViewSet(viewsets.ReadOnlyModelViewSet):\n permission_classes = (AllowAny,)\n queryset = Group.objects.all()\n serializer_class = GroupReadOnlySerializer\n ordering = (\"name\",)\n\n\nclass OnlineGroupViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n permission_classes = (DjangoObjectPermissionOrAnonReadOnly,)\n queryset = OnlineGroup.objects.all()\n serializer_classes = {\n \"write\": OnlineGroupCreateOrUpdateSerializer,\n \"read\": OnlineGroupReadOnlySerializer,\n }\n\n\nclass GroupMemberViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n permission_classes = (DjangoObjectPermissionOrAnonReadOnly,)\n queryset = GroupMember.objects.all()\n serializer_classes = {\n \"create\": GroupMemberCreateSerializer,\n \"update\": GroupMemberUpdateSerializer,\n \"read\": GroupMemberReadOnlySerializer,\n }\n\n\nclass GroupRoleViewSet(viewsets.ReadOnlyModelViewSet):\n permission_classes = (AllowAny,)\n serializer_class = GroupRoleReadOnlySerializer\n queryset = GroupRole.objects.all()\n", "path": "apps/authentication/api/views.py"}], "after_files": [{"content": "from django.contrib.auth.models import Group\nfrom rest_framework import mixins, status, viewsets\nfrom rest_framework.decorators import action\nfrom rest_framework.permissions import AllowAny, IsAuthenticated\nfrom rest_framework.response import Response\n\nfrom apps.authentication.models import Email, GroupMember, GroupRole, OnlineGroup\nfrom apps.authentication.models import OnlineUser as User\nfrom apps.authentication.models import Position, SpecialPosition\nfrom apps.authentication.serializers import (\n AnonymizeUserSerializer,\n EmailCreateSerializer,\n EmailReadOnlySerializer,\n EmailUpdateSerializer,\n GroupMemberCreateSerializer,\n GroupMemberReadOnlySerializer,\n GroupMemberUpdateSerializer,\n GroupReadOnlySerializer,\n GroupRoleReadOnlySerializer,\n OnlineGroupCreateOrUpdateSerializer,\n OnlineGroupReadOnlySerializer,\n PasswordUpdateSerializer,\n PositionCreateAndUpdateSerializer,\n PositionReadOnlySerializer,\n SpecialPositionSerializer,\n UserCreateSerializer,\n UserReadOnlySerializer,\n UserUpdateSerializer,\n)\nfrom apps.common.rest_framework.mixins import MultiSerializerMixin\nfrom apps.permissions.drf_permissions import DjangoObjectPermissionOrAnonReadOnly\n\nfrom .filters import UserFilter\nfrom .permissions import IsSelfOrSuperUser\nfrom .serializers.user_data import UserDataSerializer\n\n\nclass UserViewSet(\n MultiSerializerMixin,\n viewsets.GenericViewSet,\n mixins.ListModelMixin,\n mixins.RetrieveModelMixin,\n mixins.CreateModelMixin,\n mixins.UpdateModelMixin,\n):\n \"\"\"\n Viewset for User serializer. 
Supports filtering on 'first_name', 'last_name', 'email'\n \"\"\"\n\n permission_classes = (IsSelfOrSuperUser,)\n filterset_class = UserFilter\n queryset = User.objects.all()\n serializer_classes = {\n \"create\": UserCreateSerializer,\n \"update\": UserUpdateSerializer,\n \"read\": UserReadOnlySerializer,\n \"change_password\": PasswordUpdateSerializer,\n \"anonymize_user\": AnonymizeUserSerializer,\n \"dump_data\": UserDataSerializer,\n }\n\n @action(detail=True, methods=[\"put\"])\n def change_password(self, request, pk=None):\n user: User = self.get_object()\n serializer = self.get_serializer(user, data=request.data)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n\n return Response(data=None, status=status.HTTP_204_NO_CONTENT)\n\n @action(detail=True, methods=[\"put\"])\n def anonymize_user(self, request, pk=None):\n user: User = self.get_object()\n serializer = self.get_serializer(user, data=request.data)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n\n return Response(data=None, status=status.HTTP_204_NO_CONTENT)\n\n @action(detail=True, methods=[\"get\"], url_path=\"dump-data\")\n def dump_data(self, request, pk: int):\n user: User = self.get_object()\n serializer = self.get_serializer(user)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n\nclass EmailViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n permission_classes = (IsAuthenticated,)\n serializer_classes = {\n \"create\": EmailCreateSerializer,\n \"update\": EmailUpdateSerializer,\n \"read\": EmailReadOnlySerializer,\n }\n\n def get_queryset(self):\n return Email.objects.filter(user=self.request.user)\n\n def destroy(self, request, *args, **kwargs):\n instance: Email = self.get_object()\n if instance.primary:\n return Response(\n {\n \"message\": \"Du kan ikke slette en prim\u00e6r-epost. 
Du m\u00e5 f\u00f8rst velge en annen epost som \"\n \"prim\u00e6r for \u00e5 kunne slette denne.\"\n },\n status=status.HTTP_400_BAD_REQUEST,\n )\n super().destroy(request, *args, **kwargs)\n\n\nclass PositionViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n permission_classes = (IsAuthenticated,)\n serializer_classes = {\n \"read\": PositionReadOnlySerializer,\n \"write\": PositionCreateAndUpdateSerializer,\n }\n\n def get_queryset(self):\n user = self.request.user\n return Position.objects.filter(user=user)\n\n\nclass SpecialPositionViewSet(viewsets.ReadOnlyModelViewSet):\n serializer_class = SpecialPositionSerializer\n permission_classes = (IsAuthenticated,)\n\n def get_queryset(self):\n user = self.request.user\n return SpecialPosition.objects.filter(user=user)\n\n\nclass GroupViewSet(viewsets.ReadOnlyModelViewSet):\n permission_classes = (AllowAny,)\n queryset = Group.objects.all()\n serializer_class = GroupReadOnlySerializer\n ordering = (\"name\",)\n\n\nclass OnlineGroupViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n permission_classes = (DjangoObjectPermissionOrAnonReadOnly,)\n queryset = OnlineGroup.objects.all()\n serializer_classes = {\n \"write\": OnlineGroupCreateOrUpdateSerializer,\n \"read\": OnlineGroupReadOnlySerializer,\n }\n\n\nclass GroupMemberViewSet(MultiSerializerMixin, viewsets.ModelViewSet):\n permission_classes = (DjangoObjectPermissionOrAnonReadOnly,)\n queryset = GroupMember.objects.all()\n serializer_classes = {\n \"create\": GroupMemberCreateSerializer,\n \"update\": GroupMemberUpdateSerializer,\n \"read\": GroupMemberReadOnlySerializer,\n }\n\n\nclass GroupRoleViewSet(viewsets.ReadOnlyModelViewSet):\n permission_classes = (AllowAny,)\n serializer_class = GroupRoleReadOnlySerializer\n queryset = GroupRole.objects.all()\n", "path": "apps/authentication/api/views.py"}]}
| 1,756 | 88 |
gh_patches_debug_30213
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-528
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lutris 0.4.5 doesn’t launch (AssertionError)
I'm using Linux Mint 18.1 (my de is Mate if that's any help) and I received the update for Lutris 0.4.5. I tried to launch Lutris after that and nothing seemed to happen. I then tried to launch it from the terminal with the command "lutris" and saw the following:
```python
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/lutris/gui/application.py", line 152, in do_activate
self.window = LutrisWindow(application=self)
File "/usr/lib/python3/dist-packages/lutris/gui/lutriswindow.py", line 84, in init
desktopapps.sync_with_lutris()
File "/usr/lib/python3/dist-packages/lutris/util/desktopapps.py", line 87, in sync_with_lutris
mark_as_installed(appid, 'linux', game_info)
File "/usr/lib/python3/dist-packages/lutris/util/desktopapps.py", line 32, in mark_as_installed
assert game_info[key]
AssertionError
```
I do hope you fix it, as I run quite a few dosbox/scummvm games, and a Wine game with it, so I kind of need it for gaming. Thanks for the understanding.
P.S. I had also posted it on the forum, but overcame my reluctance to not creating many accounts online, as it's kind of important that this gets resolved.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/util/desktopapps.py`
Content:
```
1 """Get games installed as desktop applications."""
2
3 import os
4 import shutil
5 import shlex
6 import subprocess
7 import re
8
9 from gi.repository import Gio
10 from lutris import pga
11 from lutris.util.log import logger
12 from lutris.util.strings import slugify
13 from lutris.config import make_game_config_id, LutrisConfig
14
15 IGNORED_GAMES = (
16 "lutris", "mame", "dosbox", "playonlinux", "org.gnome.Games", "retroarch",
17 "steam", "steam-runtime", "steam-valve", "steam-native", "PlayOnLinux",
18 "fs-uae-arcade", "PCSX2", "ppsspp", "qchdman", "qmc2-sdlmame", "qmc2-arcade",
19 "sc-controller", "epsxe"
20 )
21 IGNORED_EXECUTABLES = (
22 "lutris", "steam"
23 )
24
25 IGNORED_CATEGORIES = (
26 "Emulator", "Development", "Utility"
27 )
28
29
30 def mark_as_installed(appid, runner_name, game_info):
31 for key in ['name', 'slug']:
32 assert game_info[key]
33 logger.info("Setting %s as installed" % game_info['name'])
34 config_id = (game_info.get('config_path') or make_game_config_id(game_info['slug']))
35 game_id = pga.add_or_update(
36 name=game_info['name'],
37 runner=runner_name,
38 slug=game_info['slug'],
39 installed=1,
40 configpath=config_id,
41 installer_slug=game_info['installer_slug']
42 )
43
44 game_config = LutrisConfig(
45 runner_slug=runner_name,
46 game_config_id=config_id,
47 )
48 game_config.raw_game_config.update({'appid': appid, 'exe': game_info['exe'], 'args': game_info['args']})
49 game_config.save()
50 return game_id
51
52
53 def mark_as_uninstalled(game_info):
54 assert 'id' in game_info
55 assert 'name' in game_info
56 logger.info('Setting %s as uninstalled' % game_info['name'])
57 game_id = pga.add_or_update(
58 id=game_info['id'],
59 runner='',
60 installed=0
61 )
62 return game_id
63
64
65 def sync_with_lutris():
66 apps = get_games()
67 desktop_games_in_lutris = pga.get_desktop_games()
68 slugs_in_lutris = set([str(game['slug']) for game in desktop_games_in_lutris])
69
70 seen_slugs = set()
71 for app in apps:
72 game_info = None
73 name = app[0]
74 slug = slugify(name)
75 appid = app[1]
76 seen_slugs.add(slug)
77
78 if slug not in slugs_in_lutris:
79 game_info = {
80 'name': name,
81 'slug': slug,
82 'config_path': slug + '-desktopapp',
83 'installer_slug': 'desktopapp',
84 'exe': app[2],
85 'args': app[3]
86 }
87 mark_as_installed(appid, 'linux', game_info)
88
89 unavailable_slugs = slugs_in_lutris.difference(seen_slugs)
90 for slug in unavailable_slugs:
91 for game in desktop_games_in_lutris:
92 if game['slug'] == slug:
93 mark_as_uninstalled(game)
94
95
96 def get_games():
97 """Return the list of games stored in the XDG menu."""
98 game_list = []
99
100 apps = Gio.AppInfo.get_all()
101 for app in apps:
102 if app.get_nodisplay() or app.get_is_hidden():
103 continue
104 appid = os.path.splitext(app.get_id())[0]
105 exe = None
106 args = []
107
108 # must be in Game category
109 categories = app.get_categories()
110 if not categories:
111 continue
112 categories = filter(None, categories.lower().split(';'))
113 if 'game' not in categories:
114 continue
115
116 # contains a blacklisted category
117 ok = True
118 for c in categories:
119 if c in map(str.lower, IGNORED_CATEGORIES):
120 ok = False
121 if not ok:
122 continue
123
124 # game is blacklisted
125 if appid.lower() in map(str.lower, IGNORED_GAMES):
126 continue
127
128 # executable is blacklisted
129 if app.get_executable().lower() in IGNORED_EXECUTABLES:
130 continue
131
132 cli = shlex.split(app.get_commandline())
133 exe = cli[0]
134 args = cli[1:]
135 # remove %U etc. and change %% to % in arguments
136 args = list(map(lambda arg: re.sub('%[^%]', '', arg).replace('%%', '%'), args))
137
138 args = subprocess.list2cmdline(args)
139
140 if not exe.startswith('/'):
141 exe = shutil.which(exe)
142 game_list.append((app.get_display_name(), appid, exe, args))
143 return game_list
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lutris/util/desktopapps.py b/lutris/util/desktopapps.py
--- a/lutris/util/desktopapps.py
+++ b/lutris/util/desktopapps.py
@@ -18,6 +18,7 @@
"fs-uae-arcade", "PCSX2", "ppsspp", "qchdman", "qmc2-sdlmame", "qmc2-arcade",
"sc-controller", "epsxe"
)
+
IGNORED_EXECUTABLES = (
"lutris", "steam"
)
@@ -41,12 +42,19 @@
installer_slug=game_info['installer_slug']
)
- game_config = LutrisConfig(
+ config = LutrisConfig(
runner_slug=runner_name,
game_config_id=config_id,
)
- game_config.raw_game_config.update({'appid': appid, 'exe': game_info['exe'], 'args': game_info['args']})
- game_config.save()
+ config.raw_game_config.update({
+ 'appid': appid,
+ 'exe': game_info['exe'],
+ 'args': game_info['args']
+ })
+ config.raw_system_config.update({
+ 'disable_runtime': True
+ })
+ config.save()
return game_id
@@ -71,8 +79,23 @@
for app in apps:
game_info = None
name = app[0]
- slug = slugify(name)
appid = app[1]
+ slug = slugify(name)
+
+ # if it fails to get slug from the name
+ if not slug:
+ slug = slugify(appid)
+
+ if not name or not slug or not appid:
+ logger.error("Failed to load desktop game "
+ "\"" + str(name) + "\" "
+ "(app: " + str(appid) + ", slug: " + slug + ")")
+ continue
+ else:
+ logger.debug("Found desktop game "
+ "\"" + str(name) + "\" "
+ "(app: " + str(appid) + ", slug: " + slug + ")")
+
seen_slugs.add(slug)
if slug not in slugs_in_lutris:
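
A minimal standalone sketch of the guard this hunk adds; the `slugify` below is a simplified stand-in for `lutris.util.strings.slugify`, not the real implementation:

```python
import re

def slugify(value):
    # Simplified stand-in: keep word characters, collapse the rest to hyphens.
    value = re.sub(r"[^\w\s-]", "", str(value)).strip().lower()
    return re.sub(r"[-\s]+", "-", value)

def resolve_slug(name, appid):
    # Same shape as the patched loop: prefer the display name, fall back to
    # the desktop-file id, and skip the entry when something is still missing
    # instead of asserting.
    slug = slugify(name)
    if not slug:
        slug = slugify(appid)
    if not name or not slug or not appid:
        return None  # caller logs an error and continues
    return slug

print(resolve_slug("SuperTuxKart", "supertuxkart"))  # 'supertuxkart'
print(resolve_slug("???", "some.game"))              # falls back to 'somegame'
```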
|
{"golden_diff": "diff --git a/lutris/util/desktopapps.py b/lutris/util/desktopapps.py\n--- a/lutris/util/desktopapps.py\n+++ b/lutris/util/desktopapps.py\n@@ -18,6 +18,7 @@\n \"fs-uae-arcade\", \"PCSX2\", \"ppsspp\", \"qchdman\", \"qmc2-sdlmame\", \"qmc2-arcade\",\n \"sc-controller\", \"epsxe\"\n )\n+\n IGNORED_EXECUTABLES = (\n \"lutris\", \"steam\"\n )\n@@ -41,12 +42,19 @@\n installer_slug=game_info['installer_slug']\n )\n \n- game_config = LutrisConfig(\n+ config = LutrisConfig(\n runner_slug=runner_name,\n game_config_id=config_id,\n )\n- game_config.raw_game_config.update({'appid': appid, 'exe': game_info['exe'], 'args': game_info['args']})\n- game_config.save()\n+ config.raw_game_config.update({\n+ 'appid': appid,\n+ 'exe': game_info['exe'],\n+ 'args': game_info['args']\n+ })\n+ config.raw_system_config.update({\n+ 'disable_runtime': True\n+ })\n+ config.save()\n return game_id\n \n \n@@ -71,8 +79,23 @@\n for app in apps:\n game_info = None\n name = app[0]\n- slug = slugify(name)\n appid = app[1]\n+ slug = slugify(name)\n+\n+ # if it fails to get slug from the name\n+ if not slug:\n+ slug = slugify(appid)\n+\n+ if not name or not slug or not appid:\n+ logger.error(\"Failed to load desktop game \"\n+ \"\\\"\" + str(name) + \"\\\" \"\n+ \"(app: \" + str(appid) + \", slug: \" + slug + \")\")\n+ continue\n+ else:\n+ logger.debug(\"Found desktop game \"\n+ \"\\\"\" + str(name) + \"\\\" \"\n+ \"(app: \" + str(appid) + \", slug: \" + slug + \")\")\n+\n seen_slugs.add(slug)\n \n if slug not in slugs_in_lutris:\n", "issue": "Lutris 0.4.5 doesn\u2019t launch (AssertionError)\nI'm using Linux Mint 18.1 (my de is Mate if that's any help) and I received the update for Lutris 0.4.5. I tried to launch Lutris after that and nothing seemed to happen. I then tried to launch it from the terminal with the command \"lutris\" and saw the following:\r\n\r\n```python\r\nTraceback (most recent call last):\r\nFile \"/usr/lib/python3/dist-packages/lutris/gui/application.py\", line 152, in do_activate\r\nself.window = LutrisWindow(application=self)\r\nFile \"/usr/lib/python3/dist-packages/lutris/gui/lutriswindow.py\", line 84, in init\r\ndesktopapps.sync_with_lutris()\r\nFile \"/usr/lib/python3/dist-packages/lutris/util/desktopapps.py\", line 87, in sync_with_lutris\r\nmark_as_installed(appid, 'linux', game_info)\r\nFile \"/usr/lib/python3/dist-packages/lutris/util/desktopapps.py\", line 32, in mark_as_installed\r\nassert game_info[key]\r\nAssertionError\r\n```\r\n\r\nI do hope you fix it, as I run quite a few dosbox/scummvm games, and a Wine game with it, so I kind of need it for gaming. Thanks for the understanding.\r\nP.S. 
I had also posted it on the forum, but overcame my reluctance to not creating many accounts online, as it's kind of important that this gets resolved.\n", "before_files": [{"content": "\"\"\"Get games installed as desktop applications.\"\"\"\n\nimport os\nimport shutil\nimport shlex\nimport subprocess\nimport re\n\nfrom gi.repository import Gio\nfrom lutris import pga\nfrom lutris.util.log import logger\nfrom lutris.util.strings import slugify\nfrom lutris.config import make_game_config_id, LutrisConfig\n\nIGNORED_GAMES = (\n \"lutris\", \"mame\", \"dosbox\", \"playonlinux\", \"org.gnome.Games\", \"retroarch\",\n \"steam\", \"steam-runtime\", \"steam-valve\", \"steam-native\", \"PlayOnLinux\",\n \"fs-uae-arcade\", \"PCSX2\", \"ppsspp\", \"qchdman\", \"qmc2-sdlmame\", \"qmc2-arcade\",\n \"sc-controller\", \"epsxe\"\n)\nIGNORED_EXECUTABLES = (\n \"lutris\", \"steam\"\n)\n\nIGNORED_CATEGORIES = (\n \"Emulator\", \"Development\", \"Utility\"\n)\n\n\ndef mark_as_installed(appid, runner_name, game_info):\n for key in ['name', 'slug']:\n assert game_info[key]\n logger.info(\"Setting %s as installed\" % game_info['name'])\n config_id = (game_info.get('config_path') or make_game_config_id(game_info['slug']))\n game_id = pga.add_or_update(\n name=game_info['name'],\n runner=runner_name,\n slug=game_info['slug'],\n installed=1,\n configpath=config_id,\n installer_slug=game_info['installer_slug']\n )\n\n game_config = LutrisConfig(\n runner_slug=runner_name,\n game_config_id=config_id,\n )\n game_config.raw_game_config.update({'appid': appid, 'exe': game_info['exe'], 'args': game_info['args']})\n game_config.save()\n return game_id\n\n\ndef mark_as_uninstalled(game_info):\n assert 'id' in game_info\n assert 'name' in game_info\n logger.info('Setting %s as uninstalled' % game_info['name'])\n game_id = pga.add_or_update(\n id=game_info['id'],\n runner='',\n installed=0\n )\n return game_id\n\n\ndef sync_with_lutris():\n apps = get_games()\n desktop_games_in_lutris = pga.get_desktop_games()\n slugs_in_lutris = set([str(game['slug']) for game in desktop_games_in_lutris])\n\n seen_slugs = set()\n for app in apps:\n game_info = None\n name = app[0]\n slug = slugify(name)\n appid = app[1]\n seen_slugs.add(slug)\n\n if slug not in slugs_in_lutris:\n game_info = {\n 'name': name,\n 'slug': slug,\n 'config_path': slug + '-desktopapp',\n 'installer_slug': 'desktopapp',\n 'exe': app[2],\n 'args': app[3]\n }\n mark_as_installed(appid, 'linux', game_info)\n\n unavailable_slugs = slugs_in_lutris.difference(seen_slugs)\n for slug in unavailable_slugs:\n for game in desktop_games_in_lutris:\n if game['slug'] == slug:\n mark_as_uninstalled(game)\n\n\ndef get_games():\n \"\"\"Return the list of games stored in the XDG menu.\"\"\"\n game_list = []\n\n apps = Gio.AppInfo.get_all()\n for app in apps:\n if app.get_nodisplay() or app.get_is_hidden():\n continue\n appid = os.path.splitext(app.get_id())[0]\n exe = None\n args = []\n\n # must be in Game category\n categories = app.get_categories()\n if not categories:\n continue\n categories = filter(None, categories.lower().split(';'))\n if 'game' not in categories:\n continue\n\n # contains a blacklisted category\n ok = True\n for c in categories:\n if c in map(str.lower, IGNORED_CATEGORIES):\n ok = False\n if not ok:\n continue\n\n # game is blacklisted\n if appid.lower() in map(str.lower, IGNORED_GAMES):\n continue\n\n # executable is blacklisted\n if app.get_executable().lower() in IGNORED_EXECUTABLES:\n continue\n\n cli = shlex.split(app.get_commandline())\n exe = 
cli[0]\n args = cli[1:]\n # remove %U etc. and change %% to % in arguments\n args = list(map(lambda arg: re.sub('%[^%]', '', arg).replace('%%', '%'), args))\n\n args = subprocess.list2cmdline(args)\n\n if not exe.startswith('/'):\n exe = shutil.which(exe)\n game_list.append((app.get_display_name(), appid, exe, args))\n return game_list\n", "path": "lutris/util/desktopapps.py"}], "after_files": [{"content": "\"\"\"Get games installed as desktop applications.\"\"\"\n\nimport os\nimport shutil\nimport shlex\nimport subprocess\nimport re\n\nfrom gi.repository import Gio\nfrom lutris import pga\nfrom lutris.util.log import logger\nfrom lutris.util.strings import slugify\nfrom lutris.config import make_game_config_id, LutrisConfig\n\nIGNORED_GAMES = (\n \"lutris\", \"mame\", \"dosbox\", \"playonlinux\", \"org.gnome.Games\", \"retroarch\",\n \"steam\", \"steam-runtime\", \"steam-valve\", \"steam-native\", \"PlayOnLinux\",\n \"fs-uae-arcade\", \"PCSX2\", \"ppsspp\", \"qchdman\", \"qmc2-sdlmame\", \"qmc2-arcade\",\n \"sc-controller\", \"epsxe\"\n)\n\nIGNORED_EXECUTABLES = (\n \"lutris\", \"steam\"\n)\n\nIGNORED_CATEGORIES = (\n \"Emulator\", \"Development\", \"Utility\"\n)\n\n\ndef mark_as_installed(appid, runner_name, game_info):\n for key in ['name', 'slug']:\n assert game_info[key]\n logger.info(\"Setting %s as installed\" % game_info['name'])\n config_id = (game_info.get('config_path') or make_game_config_id(game_info['slug']))\n game_id = pga.add_or_update(\n name=game_info['name'],\n runner=runner_name,\n slug=game_info['slug'],\n installed=1,\n configpath=config_id,\n installer_slug=game_info['installer_slug']\n )\n\n config = LutrisConfig(\n runner_slug=runner_name,\n game_config_id=config_id,\n )\n config.raw_game_config.update({\n 'appid': appid,\n 'exe': game_info['exe'],\n 'args': game_info['args']\n })\n config.raw_system_config.update({\n 'disable_runtime': True\n })\n config.save()\n return game_id\n\n\ndef mark_as_uninstalled(game_info):\n assert 'id' in game_info\n assert 'name' in game_info\n logger.info('Setting %s as uninstalled' % game_info['name'])\n game_id = pga.add_or_update(\n id=game_info['id'],\n runner='',\n installed=0\n )\n return game_id\n\n\ndef sync_with_lutris():\n apps = get_games()\n desktop_games_in_lutris = pga.get_desktop_games()\n slugs_in_lutris = set([str(game['slug']) for game in desktop_games_in_lutris])\n\n seen_slugs = set()\n for app in apps:\n game_info = None\n name = app[0]\n appid = app[1]\n slug = slugify(name)\n\n # if it fails to get slug from the name\n if not slug:\n slug = slugify(appid)\n\n if not name or not slug or not appid:\n logger.error(\"Failed to load desktop game \"\n \"\\\"\" + str(name) + \"\\\" \"\n \"(app: \" + str(appid) + \", slug: \" + slug + \")\")\n continue\n else:\n logger.debug(\"Found desktop game \"\n \"\\\"\" + str(name) + \"\\\" \"\n \"(app: \" + str(appid) + \", slug: \" + slug + \")\")\n\n seen_slugs.add(slug)\n\n if slug not in slugs_in_lutris:\n game_info = {\n 'name': name,\n 'slug': slug,\n 'config_path': slug + '-desktopapp',\n 'installer_slug': 'desktopapp',\n 'exe': app[2],\n 'args': app[3]\n }\n mark_as_installed(appid, 'linux', game_info)\n\n unavailable_slugs = slugs_in_lutris.difference(seen_slugs)\n for slug in unavailable_slugs:\n for game in desktop_games_in_lutris:\n if game['slug'] == slug:\n mark_as_uninstalled(game)\n\n\ndef get_games():\n \"\"\"Return the list of games stored in the XDG menu.\"\"\"\n game_list = []\n\n apps = Gio.AppInfo.get_all()\n for app in apps:\n if app.get_nodisplay() or 
app.get_is_hidden():\n continue\n appid = os.path.splitext(app.get_id())[0]\n exe = None\n args = []\n\n # must be in Game category\n categories = app.get_categories()\n if not categories:\n continue\n categories = filter(None, categories.lower().split(';'))\n if 'game' not in categories:\n continue\n\n # contains a blacklisted category\n ok = True\n for c in categories:\n if c in map(str.lower, IGNORED_CATEGORIES):\n ok = False\n if not ok:\n continue\n\n # game is blacklisted\n if appid.lower() in map(str.lower, IGNORED_GAMES):\n continue\n\n # executable is blacklisted\n if app.get_executable().lower() in IGNORED_EXECUTABLES:\n continue\n\n cli = shlex.split(app.get_commandline())\n exe = cli[0]\n args = cli[1:]\n # remove %U etc. and change %% to % in arguments\n args = list(map(lambda arg: re.sub('%[^%]', '', arg).replace('%%', '%'), args))\n\n args = subprocess.list2cmdline(args)\n\n if not exe.startswith('/'):\n exe = shutil.which(exe)\n game_list.append((app.get_display_name(), appid, exe, args))\n return game_list\n", "path": "lutris/util/desktopapps.py"}]}
| 1,974 | 500 |
gh_patches_debug_5467
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-1173
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TN: 109th Session, Legislators, Upper - Senator name extraction fails for district 19
Trying to run the scraper locally, I encountered the following exception for TN's [19th senate district in the 109th general assembly](http://www.capitol.tn.gov/senate/archives/109GA/members/s19.html) (Sen. Thelma Hale):
```
File "/usr/local/bin/billy-update", line 9, in <module>
load_entry_point('billy==1.8.4', 'console_scripts', 'billy-update')()
File "/opt/sunlightfoundation.com/billy/billy/bin/update.py", line 377, in main
run_record += _run_scraper(stype, args, metadata)
File "/opt/sunlightfoundation.com/billy/billy/bin/update.py", line 101, in _run_scraper
scraper.scrape(chamber, time)
File "/srv/openstates-web/openstates/tn/legislators.py", line 81, in scrape
name = member_page.xpath('//div[@id="membertitle"]/h2/text()')[0]
IndexError: list index out of range
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/tn/legislators.py`
Content:
```
1 import HTMLParser
2
3 from billy.scrape.legislators import LegislatorScraper, Legislator
4 import lxml.html
5 from scrapelib import HTTPError
6 from openstates.utils import LXMLMixin
7
8 class TNLegislatorScraper(LegislatorScraper, LXMLMixin):
9 jurisdiction = 'tn'
10
11 def scrape(self, chamber, term):
12 self.validate_term(term, latest_only=False)
13 root_url = 'http://www.capitol.tn.gov/'
14 parties = {'D': 'Democratic', 'R': 'Republican',
15 'CCR': 'Carter County Republican',
16 'I': 'Independent'}
17
18 #testing for chamber
19 if chamber == 'upper':
20 url_chamber_name = 'senate'
21 abbr = 's'
22 else:
23 url_chamber_name = 'house'
24 abbr = 'h'
25 if term != self.metadata["terms"][-1]["sessions"][0]:
26 chamber_url = root_url + url_chamber_name
27 chamber_url += '/archives/' + term + 'GA/Members/index.html'
28 else:
29 chamber_url = root_url + url_chamber_name + '/members/'
30
31 page = self.lxmlize(chamber_url)
32
33 for row in page.xpath("//tr"):
34
35 # Skip any a header row.
36 if set(child.tag for child in row) == set(['th']):
37 continue
38
39 vacancy_check = row.xpath('./td/text()')[1]
40 if 'Vacant' in vacancy_check:
41 self.logger.warning("Vacant Seat")
42 continue
43
44 partyInit = row.xpath('td[3]')[0].text.split()[0]
45 party = parties[partyInit]
46 district = row.xpath('td[5]/a')[0].text.split()[1]
47 address = row.xpath('td[6]')[0].text_content()
48 # 301 6th Avenue North Suite
49 address = address.replace('LP',
50 'Legislative Plaza\nNashville, TN 37243')
51 address = address.replace('WMB',
52 'War Memorial Building\nNashville, TN 37243')
53 address = '301 6th Avenue North\nSuite ' + address
54 phone = [
55 x.strip() for x in
56 row.xpath('td[7]//text()')
57 if x.strip()
58 ][0]
59
60 email = HTMLParser.HTMLParser().unescape(
61 row.xpath('td[1]/a/@href')[0][len("mailto:"): ])
62 member_url = (root_url + url_chamber_name + '/members/' + abbr +
63 district + '.html')
64 member_photo_url = (root_url + url_chamber_name +
65 '/members/images/' + abbr + district + '.jpg')
66
67 try:
68 member_page = self.get(member_url, allow_redirects=False).text
69 except (TypeError, HTTPError):
70 try:
71 member_url = row.xpath('td[2]/a/@href')[0]
72 member_page = self.get(member_url, allow_redirects=False).text
73 except (TypeError, HTTPError):
74 self.logger.warning("Valid member page does not exist.")
75 continue
76
77 member_page = lxml.html.fromstring(member_page)
78 try:
79 name = member_page.xpath('body/div/div/h1/text()')[0]
80 except IndexError:
81 name = member_page.xpath('//div[@id="membertitle"]/h2/text()')[0]
82
83 if 'Speaker' in name:
84 full_name = name[8:len(name)]
85 elif 'Lt.' in name:
86 full_name = name[13:len(name)]
87 elif abbr == 'h':
88 full_name = name[len("Representative "): len(name)]
89 else:
90 full_name = name[8:len(name)]
91
92 leg = Legislator(term, chamber, district, full_name.strip(),
93 party=party, url=member_url,
94 photo_url=member_photo_url)
95 leg.add_source(chamber_url)
96 leg.add_source(member_url)
97
98 # TODO: add district address from this page
99
100 leg.add_office('capitol', 'Nashville Address',
101 address=address, phone=phone, email=email)
102
103 self.save_legislator(leg)
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openstates/tn/legislators.py b/openstates/tn/legislators.py
--- a/openstates/tn/legislators.py
+++ b/openstates/tn/legislators.py
@@ -76,7 +76,7 @@
member_page = lxml.html.fromstring(member_page)
try:
- name = member_page.xpath('body/div/div/h1/text()')[0]
+ name = member_page.xpath('//div/div/h1/text()')[0]
except IndexError:
name = member_page.xpath('//div[@id="membertitle"]/h2/text()')[0]
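
Roughly why relaxing the XPath helps, shown on made-up markup (the real archived page for district 19 is not reproduced here; the extra nesting level is an assumption inferred from the failure):

```python
import lxml.html

# Hypothetical fragment: the <h1> sits one <div> deeper than the absolute
# path 'body/div/div/h1' expects, so the old lookup returns an empty list
# and the subsequent [0] raises IndexError.
html = """<html><body>
  <div class="wrapper"><div class="inner"><div><h1>Senator Jane Doe</h1></div></div></div>
</body></html>"""
page = lxml.html.fromstring(html)

print(page.xpath('body/div/div/h1/text()'))  # []
print(page.xpath('//div/div/h1/text()'))     # ['Senator Jane Doe']
```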
|
{"golden_diff": "diff --git a/openstates/tn/legislators.py b/openstates/tn/legislators.py\n--- a/openstates/tn/legislators.py\n+++ b/openstates/tn/legislators.py\n@@ -76,7 +76,7 @@\n \n member_page = lxml.html.fromstring(member_page)\n try:\n- name = member_page.xpath('body/div/div/h1/text()')[0]\n+ name = member_page.xpath('//div/div/h1/text()')[0]\n except IndexError:\n name = member_page.xpath('//div[@id=\"membertitle\"]/h2/text()')[0]\n", "issue": "TN: 109th Session, Legislators, Upper - Senator name extraction fails for district 19\nTrying to run the scraper locally, I encountered the following exception for TN's [19th senate district in the 109th general assembly](http://www.capitol.tn.gov/senate/archives/109GA/members/s19.html) (Sen. Thelma Hale):\r\n\r\n```\r\nFile \"/usr/local/bin/billy-update\", line 9, in <module>\r\n load_entry_point('billy==1.8.4', 'console_scripts', 'billy-update')()\r\n File \"/opt/sunlightfoundation.com/billy/billy/bin/update.py\", line 377, in main\r\n run_record += _run_scraper(stype, args, metadata)\r\n File \"/opt/sunlightfoundation.com/billy/billy/bin/update.py\", line 101, in _run_scraper\r\n scraper.scrape(chamber, time)\r\n File \"/srv/openstates-web/openstates/tn/legislators.py\", line 81, in scrape\r\n name = member_page.xpath('//div[@id=\"membertitle\"]/h2/text()')[0]\r\nIndexError: list index out of range\r\n```\n", "before_files": [{"content": "import HTMLParser\n\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\nimport lxml.html\nfrom scrapelib import HTTPError\nfrom openstates.utils import LXMLMixin\n\nclass TNLegislatorScraper(LegislatorScraper, LXMLMixin):\n jurisdiction = 'tn'\n\n def scrape(self, chamber, term):\n self.validate_term(term, latest_only=False)\n root_url = 'http://www.capitol.tn.gov/'\n parties = {'D': 'Democratic', 'R': 'Republican',\n 'CCR': 'Carter County Republican',\n 'I': 'Independent'}\n\n #testing for chamber\n if chamber == 'upper':\n url_chamber_name = 'senate'\n abbr = 's'\n else:\n url_chamber_name = 'house'\n abbr = 'h'\n if term != self.metadata[\"terms\"][-1][\"sessions\"][0]:\n chamber_url = root_url + url_chamber_name\n chamber_url += '/archives/' + term + 'GA/Members/index.html'\n else:\n chamber_url = root_url + url_chamber_name + '/members/'\n\n page = self.lxmlize(chamber_url)\n\n for row in page.xpath(\"//tr\"):\n\n # Skip any a header row.\n if set(child.tag for child in row) == set(['th']):\n continue\n\n vacancy_check = row.xpath('./td/text()')[1]\n if 'Vacant' in vacancy_check:\n self.logger.warning(\"Vacant Seat\")\n continue\n\n partyInit = row.xpath('td[3]')[0].text.split()[0]\n party = parties[partyInit]\n district = row.xpath('td[5]/a')[0].text.split()[1]\n address = row.xpath('td[6]')[0].text_content()\n # 301 6th Avenue North Suite\n address = address.replace('LP',\n 'Legislative Plaza\\nNashville, TN 37243')\n address = address.replace('WMB',\n 'War Memorial Building\\nNashville, TN 37243')\n address = '301 6th Avenue North\\nSuite ' + address\n phone = [\n x.strip() for x in\n row.xpath('td[7]//text()')\n if x.strip()\n ][0]\n\n email = HTMLParser.HTMLParser().unescape(\n row.xpath('td[1]/a/@href')[0][len(\"mailto:\"): ])\n member_url = (root_url + url_chamber_name + '/members/' + abbr +\n district + '.html')\n member_photo_url = (root_url + url_chamber_name +\n '/members/images/' + abbr + district + '.jpg')\n\n try:\n member_page = self.get(member_url, allow_redirects=False).text\n except (TypeError, HTTPError):\n try:\n member_url = row.xpath('td[2]/a/@href')[0]\n 
member_page = self.get(member_url, allow_redirects=False).text\n except (TypeError, HTTPError):\n self.logger.warning(\"Valid member page does not exist.\")\n continue\n\n member_page = lxml.html.fromstring(member_page)\n try:\n name = member_page.xpath('body/div/div/h1/text()')[0]\n except IndexError:\n name = member_page.xpath('//div[@id=\"membertitle\"]/h2/text()')[0]\n \n if 'Speaker' in name:\n full_name = name[8:len(name)]\n elif 'Lt.' in name:\n full_name = name[13:len(name)]\n elif abbr == 'h':\n full_name = name[len(\"Representative \"): len(name)]\n else:\n full_name = name[8:len(name)]\n\n leg = Legislator(term, chamber, district, full_name.strip(),\n party=party, url=member_url,\n photo_url=member_photo_url)\n leg.add_source(chamber_url)\n leg.add_source(member_url)\n\n # TODO: add district address from this page\n\n leg.add_office('capitol', 'Nashville Address',\n address=address, phone=phone, email=email)\n\n self.save_legislator(leg)\n", "path": "openstates/tn/legislators.py"}], "after_files": [{"content": "import HTMLParser\n\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\nimport lxml.html\nfrom scrapelib import HTTPError\nfrom openstates.utils import LXMLMixin\n\nclass TNLegislatorScraper(LegislatorScraper, LXMLMixin):\n jurisdiction = 'tn'\n\n def scrape(self, chamber, term):\n self.validate_term(term, latest_only=False)\n root_url = 'http://www.capitol.tn.gov/'\n parties = {'D': 'Democratic', 'R': 'Republican',\n 'CCR': 'Carter County Republican',\n 'I': 'Independent'}\n\n #testing for chamber\n if chamber == 'upper':\n url_chamber_name = 'senate'\n abbr = 's'\n else:\n url_chamber_name = 'house'\n abbr = 'h'\n if term != self.metadata[\"terms\"][-1][\"sessions\"][0]:\n chamber_url = root_url + url_chamber_name\n chamber_url += '/archives/' + term + 'GA/Members/index.html'\n else:\n chamber_url = root_url + url_chamber_name + '/members/'\n\n page = self.lxmlize(chamber_url)\n\n for row in page.xpath(\"//tr\"):\n\n # Skip any a header row.\n if set(child.tag for child in row) == set(['th']):\n continue\n\n vacancy_check = row.xpath('./td/text()')[1]\n if 'Vacant' in vacancy_check:\n self.logger.warning(\"Vacant Seat\")\n continue\n\n partyInit = row.xpath('td[3]')[0].text.split()[0]\n party = parties[partyInit]\n district = row.xpath('td[5]/a')[0].text.split()[1]\n address = row.xpath('td[6]')[0].text_content()\n # 301 6th Avenue North Suite\n address = address.replace('LP',\n 'Legislative Plaza\\nNashville, TN 37243')\n address = address.replace('WMB',\n 'War Memorial Building\\nNashville, TN 37243')\n address = '301 6th Avenue North\\nSuite ' + address\n phone = [\n x.strip() for x in\n row.xpath('td[7]//text()')\n if x.strip()\n ][0]\n\n email = HTMLParser.HTMLParser().unescape(\n row.xpath('td[1]/a/@href')[0][len(\"mailto:\"): ])\n member_url = (root_url + url_chamber_name + '/members/' + abbr +\n district + '.html')\n member_photo_url = (root_url + url_chamber_name +\n '/members/images/' + abbr + district + '.jpg')\n\n try:\n member_page = self.get(member_url, allow_redirects=False).text\n except (TypeError, HTTPError):\n try:\n member_url = row.xpath('td[2]/a/@href')[0]\n member_page = self.get(member_url, allow_redirects=False).text\n except (TypeError, HTTPError):\n self.logger.warning(\"Valid member page does not exist.\")\n continue\n\n member_page = lxml.html.fromstring(member_page)\n try:\n name = member_page.xpath('//div/div/h1/text()')[0]\n except IndexError:\n name = member_page.xpath('//div[@id=\"membertitle\"]/h2/text()')[0]\n \n if 
'Speaker' in name:\n full_name = name[8:len(name)]\n elif 'Lt.' in name:\n full_name = name[13:len(name)]\n elif abbr == 'h':\n full_name = name[len(\"Representative \"): len(name)]\n else:\n full_name = name[8:len(name)]\n\n leg = Legislator(term, chamber, district, full_name.strip(),\n party=party, url=member_url,\n photo_url=member_photo_url)\n leg.add_source(chamber_url)\n leg.add_source(member_url)\n\n # TODO: add district address from this page\n\n leg.add_office('capitol', 'Nashville Address',\n address=address, phone=phone, email=email)\n\n self.save_legislator(leg)\n", "path": "openstates/tn/legislators.py"}]}
| 1,653 | 137 |
gh_patches_debug_30388
|
rasdani/github-patches
|
git_diff
|
facebookresearch__hydra-185
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
move hydra generated files into an output directory relative to the job dir
automatically generated files like config.yaml, overrides.yaml, and others can go here.
Also additional debug information about the hydra config composition order.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/plugins/common/utils.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 import copy
4 import logging
5 import os
6 import re
7 import sys
8 from time import strftime, localtime
9
10 import six
11 from omegaconf import OmegaConf, DictConfig, ListConfig
12
13 # pylint: disable=C0103
14 log = logging.getLogger(__name__)
15
16
17 def configure_log(log_config, verbose_config):
18 assert isinstance(verbose_config, (bool, str, ListConfig))
19 if log_config is not None:
20 conf = log_config.to_container(resolve=True)
21 logging.config.dictConfig(conf)
22 else:
23 # default logging to stdout
24 root = logging.getLogger()
25 root.setLevel(logging.INFO)
26 handler = logging.StreamHandler(sys.stdout)
27 formatter = logging.Formatter(
28 "[%(asctime)s][%(name)s][%(levelname)s] - %(message)s"
29 )
30 handler.setFormatter(formatter)
31 root.addHandler(handler)
32 if isinstance(verbose_config, bool):
33 if verbose_config:
34 logging.getLogger().setLevel(logging.DEBUG)
35 else:
36 if isinstance(verbose_config, str):
37 verbose_config = OmegaConf.create([verbose_config])
38 for logger in verbose_config:
39 logging.getLogger(logger).setLevel(logging.DEBUG)
40
41
42 def save_config(cfg, filename):
43 with open(os.path.join(filename), "w") as file:
44 file.write(cfg.pretty())
45
46
47 def get_overrides_dirname(lst, exclude_keys=[]):
48 lst = [x for x in lst if x not in exclude_keys]
49 lst.sort()
50 return re.sub(pattern="[=]", repl="=", string=",".join(lst))
51
52
53 def filter_overrides(overrides):
54 """
55 :param overrides: overrides list
56 :return: returning a new overrides list with all the keys starting with hydra. fitlered.
57 """
58 return [x for x in overrides if not x.startswith("hydra.")]
59
60
61 def run_job(config, task_function, job_dir_key, job_subdir_key):
62 old_cwd = os.getcwd()
63 working_dir = str(config.select(job_dir_key))
64 if job_subdir_key is not None:
65 # evaluate job_subdir_key lazily.
66 # this is running on the client side in sweep and contains things such as job:id which
67 # are only available there.
68 subdir = str(config.select(job_subdir_key))
69 working_dir = os.path.join(working_dir, subdir)
70 try:
71 ret = JobReturn()
72 ret.working_dir = working_dir
73 task_cfg = copy.deepcopy(config)
74 # TODO: update this after https://github.com/omry/omegaconf/issues/42 is resolved
75 hydra_cfg = OmegaConf.create({"hydra": task_cfg["hydra"]})
76 del task_cfg["hydra"]
77 ret.cfg = task_cfg
78 ret.hydra_cfg = copy.deepcopy(HydraConfig())
79 ret.overrides = config.hydra.overrides.task.to_container()
80 if not os.path.exists(working_dir):
81 os.makedirs(working_dir)
82 os.chdir(working_dir)
83 configure_log(hydra_cfg.hydra.job_logging, hydra_cfg.hydra.verbose)
84
85 save_config(task_cfg, "config.yaml")
86 save_config(hydra_cfg, "hydra.yaml")
87 save_config(config.hydra.overrides.task, "overrides.yaml")
88 ret.return_value = task_function(task_cfg)
89 ret.task_name = JobRuntime().get("name")
90 return ret
91 finally:
92 os.chdir(old_cwd)
93
94
95 def get_valid_filename(s):
96 s = str(s).strip().replace(" ", "_")
97 return re.sub(r"(?u)[^-\w.]", "", s)
98
99
100 def setup_globals():
101 try:
102 OmegaConf.register_resolver(
103 "now", lambda pattern: strftime(pattern, localtime())
104 )
105
106 def job_error(x):
107 raise Exception(
108 "job:{} is no longer available. use hydra.job.{}".format(x, x)
109 )
110
111 OmegaConf.register_resolver("job", job_error)
112
113 except AssertionError:
114 # calling it again in no_workers mode will throw. safe to ignore.
115 pass
116
117
118 class JobReturn:
119 def __init__(self):
120 self.overrides = None
121 self.return_value = None
122 self.cfg = None
123 self.hydra_cfg = None
124 self.working_dir = None
125 self.task_name = None
126
127
128 class Singleton(type):
129 _instances = {}
130
131 def __call__(cls, *args, **kwargs):
132 if cls not in cls._instances:
133 cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
134 return cls._instances[cls]
135
136 @staticmethod
137 def get_state():
138 return Singleton._instances
139
140 @staticmethod
141 def set_state(instances):
142 Singleton._instances = instances
143
144
145 @six.add_metaclass(Singleton)
146 class JobRuntime:
147 def __init__(self):
148 self.conf = OmegaConf.create()
149 self.set("name", "UNKNOWN_NAME")
150
151 def get(self, key):
152 ret = self.conf.select(key)
153 if ret is None:
154 raise KeyError("Key not found in {}: {}".format(type(self).__name__, key))
155 return ret
156
157 def set(self, key, value):
158 log.debug("Setting {}:{}={}".format(type(self).__name__, key, value))
159 self.conf[key] = value
160
161
162 @six.add_metaclass(Singleton)
163 class HydraConfig(DictConfig):
164 def __init__(self):
165 super(HydraConfig, self).__init__(content={})
166 self.hydra = None
167
168 def set_config(self, cfg):
169 try:
170 OmegaConf.set_readonly(self, False)
171 self.hydra = copy.deepcopy(cfg.hydra)
172 finally:
173 OmegaConf.set_readonly(self, True)
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hydra/plugins/common/utils.py b/hydra/plugins/common/utils.py
--- a/hydra/plugins/common/utils.py
+++ b/hydra/plugins/common/utils.py
@@ -5,6 +5,7 @@
import os
import re
import sys
+from hydra._internal.pathlib import Path
from time import strftime, localtime
import six
@@ -39,8 +40,9 @@
logging.getLogger(logger).setLevel(logging.DEBUG)
-def save_config(cfg, filename):
- with open(os.path.join(filename), "w") as file:
+def _save_config(cfg, filename, output_dir):
+ Path(str(output_dir)).mkdir(parents=True, exist_ok=True)
+ with open(str(output_dir / filename), "w") as file:
file.write(cfg.pretty())
@@ -77,14 +79,15 @@
ret.cfg = task_cfg
ret.hydra_cfg = copy.deepcopy(HydraConfig())
ret.overrides = config.hydra.overrides.task.to_container()
- if not os.path.exists(working_dir):
- os.makedirs(working_dir)
+ # handle output directories here
+ Path(str(working_dir)).mkdir(parents=True, exist_ok=True)
os.chdir(working_dir)
- configure_log(hydra_cfg.hydra.job_logging, hydra_cfg.hydra.verbose)
+ hydra_output = Path(hydra_cfg.hydra.output_subdir)
- save_config(task_cfg, "config.yaml")
- save_config(hydra_cfg, "hydra.yaml")
- save_config(config.hydra.overrides.task, "overrides.yaml")
+ configure_log(hydra_cfg.hydra.job_logging, hydra_cfg.hydra.verbose)
+ _save_config(task_cfg, "config.yaml", hydra_output)
+ _save_config(hydra_cfg, "hydra.yaml", hydra_output)
+ _save_config(config.hydra.overrides.task, "overrides.yaml", hydra_output)
ret.return_value = task_function(task_cfg)
ret.task_name = JobRuntime().get("name")
return ret
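
The directory handling this patch introduces, reduced to plain `pathlib` (the paths and the `.hydra` subdirectory name are illustrative stand-ins for the job dir and `hydra.output_subdir`; the patched helper writes `cfg.pretty()` rather than a plain string):

```python
from pathlib import Path

def _save_config(text, filename, output_dir):
    # Create the per-job output subdirectory lazily, then write into it,
    # mirroring the patched mkdir(parents=True, exist_ok=True) behaviour.
    output_dir = Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    (output_dir / filename).write_text(text)

working_dir = Path("outputs") / "2019-10-01" / "12-00-00"
hydra_output = working_dir / ".hydra"

_save_config("foo: bar\n", "config.yaml", hydra_output)
_save_config("hydra: {}\n", "hydra.yaml", hydra_output)
_save_config("[]\n", "overrides.yaml", hydra_output)
print(sorted(p.name for p in hydra_output.iterdir()))
# ['config.yaml', 'hydra.yaml', 'overrides.yaml']
```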
|
{"golden_diff": "diff --git a/hydra/plugins/common/utils.py b/hydra/plugins/common/utils.py\n--- a/hydra/plugins/common/utils.py\n+++ b/hydra/plugins/common/utils.py\n@@ -5,6 +5,7 @@\n import os\n import re\n import sys\n+from hydra._internal.pathlib import Path\n from time import strftime, localtime\n \n import six\n@@ -39,8 +40,9 @@\n logging.getLogger(logger).setLevel(logging.DEBUG)\n \n \n-def save_config(cfg, filename):\n- with open(os.path.join(filename), \"w\") as file:\n+def _save_config(cfg, filename, output_dir):\n+ Path(str(output_dir)).mkdir(parents=True, exist_ok=True)\n+ with open(str(output_dir / filename), \"w\") as file:\n file.write(cfg.pretty())\n \n \n@@ -77,14 +79,15 @@\n ret.cfg = task_cfg\n ret.hydra_cfg = copy.deepcopy(HydraConfig())\n ret.overrides = config.hydra.overrides.task.to_container()\n- if not os.path.exists(working_dir):\n- os.makedirs(working_dir)\n+ # handle output directories here\n+ Path(str(working_dir)).mkdir(parents=True, exist_ok=True)\n os.chdir(working_dir)\n- configure_log(hydra_cfg.hydra.job_logging, hydra_cfg.hydra.verbose)\n+ hydra_output = Path(hydra_cfg.hydra.output_subdir)\n \n- save_config(task_cfg, \"config.yaml\")\n- save_config(hydra_cfg, \"hydra.yaml\")\n- save_config(config.hydra.overrides.task, \"overrides.yaml\")\n+ configure_log(hydra_cfg.hydra.job_logging, hydra_cfg.hydra.verbose)\n+ _save_config(task_cfg, \"config.yaml\", hydra_output)\n+ _save_config(hydra_cfg, \"hydra.yaml\", hydra_output)\n+ _save_config(config.hydra.overrides.task, \"overrides.yaml\", hydra_output)\n ret.return_value = task_function(task_cfg)\n ret.task_name = JobRuntime().get(\"name\")\n return ret\n", "issue": "move hydra generated files into an output directory relative to the job dir\nautomatically generated files like config.yaml, overrides.yaml, and others can go here.\r\nAlso additional debug information about the hydra config composition order.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nimport copy\nimport logging\nimport os\nimport re\nimport sys\nfrom time import strftime, localtime\n\nimport six\nfrom omegaconf import OmegaConf, DictConfig, ListConfig\n\n# pylint: disable=C0103\nlog = logging.getLogger(__name__)\n\n\ndef configure_log(log_config, verbose_config):\n assert isinstance(verbose_config, (bool, str, ListConfig))\n if log_config is not None:\n conf = log_config.to_container(resolve=True)\n logging.config.dictConfig(conf)\n else:\n # default logging to stdout\n root = logging.getLogger()\n root.setLevel(logging.INFO)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\n \"[%(asctime)s][%(name)s][%(levelname)s] - %(message)s\"\n )\n handler.setFormatter(formatter)\n root.addHandler(handler)\n if isinstance(verbose_config, bool):\n if verbose_config:\n logging.getLogger().setLevel(logging.DEBUG)\n else:\n if isinstance(verbose_config, str):\n verbose_config = OmegaConf.create([verbose_config])\n for logger in verbose_config:\n logging.getLogger(logger).setLevel(logging.DEBUG)\n\n\ndef save_config(cfg, filename):\n with open(os.path.join(filename), \"w\") as file:\n file.write(cfg.pretty())\n\n\ndef get_overrides_dirname(lst, exclude_keys=[]):\n lst = [x for x in lst if x not in exclude_keys]\n lst.sort()\n return re.sub(pattern=\"[=]\", repl=\"=\", string=\",\".join(lst))\n\n\ndef filter_overrides(overrides):\n \"\"\"\n :param overrides: overrides list\n :return: returning a new overrides list with all the keys starting with hydra. 
fitlered.\n \"\"\"\n return [x for x in overrides if not x.startswith(\"hydra.\")]\n\n\ndef run_job(config, task_function, job_dir_key, job_subdir_key):\n old_cwd = os.getcwd()\n working_dir = str(config.select(job_dir_key))\n if job_subdir_key is not None:\n # evaluate job_subdir_key lazily.\n # this is running on the client side in sweep and contains things such as job:id which\n # are only available there.\n subdir = str(config.select(job_subdir_key))\n working_dir = os.path.join(working_dir, subdir)\n try:\n ret = JobReturn()\n ret.working_dir = working_dir\n task_cfg = copy.deepcopy(config)\n # TODO: update this after https://github.com/omry/omegaconf/issues/42 is resolved\n hydra_cfg = OmegaConf.create({\"hydra\": task_cfg[\"hydra\"]})\n del task_cfg[\"hydra\"]\n ret.cfg = task_cfg\n ret.hydra_cfg = copy.deepcopy(HydraConfig())\n ret.overrides = config.hydra.overrides.task.to_container()\n if not os.path.exists(working_dir):\n os.makedirs(working_dir)\n os.chdir(working_dir)\n configure_log(hydra_cfg.hydra.job_logging, hydra_cfg.hydra.verbose)\n\n save_config(task_cfg, \"config.yaml\")\n save_config(hydra_cfg, \"hydra.yaml\")\n save_config(config.hydra.overrides.task, \"overrides.yaml\")\n ret.return_value = task_function(task_cfg)\n ret.task_name = JobRuntime().get(\"name\")\n return ret\n finally:\n os.chdir(old_cwd)\n\n\ndef get_valid_filename(s):\n s = str(s).strip().replace(\" \", \"_\")\n return re.sub(r\"(?u)[^-\\w.]\", \"\", s)\n\n\ndef setup_globals():\n try:\n OmegaConf.register_resolver(\n \"now\", lambda pattern: strftime(pattern, localtime())\n )\n\n def job_error(x):\n raise Exception(\n \"job:{} is no longer available. use hydra.job.{}\".format(x, x)\n )\n\n OmegaConf.register_resolver(\"job\", job_error)\n\n except AssertionError:\n # calling it again in no_workers mode will throw. safe to ignore.\n pass\n\n\nclass JobReturn:\n def __init__(self):\n self.overrides = None\n self.return_value = None\n self.cfg = None\n self.hydra_cfg = None\n self.working_dir = None\n self.task_name = None\n\n\nclass Singleton(type):\n _instances = {}\n\n def __call__(cls, *args, **kwargs):\n if cls not in cls._instances:\n cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)\n return cls._instances[cls]\n\n @staticmethod\n def get_state():\n return Singleton._instances\n\n @staticmethod\n def set_state(instances):\n Singleton._instances = instances\n\n\[email protected]_metaclass(Singleton)\nclass JobRuntime:\n def __init__(self):\n self.conf = OmegaConf.create()\n self.set(\"name\", \"UNKNOWN_NAME\")\n\n def get(self, key):\n ret = self.conf.select(key)\n if ret is None:\n raise KeyError(\"Key not found in {}: {}\".format(type(self).__name__, key))\n return ret\n\n def set(self, key, value):\n log.debug(\"Setting {}:{}={}\".format(type(self).__name__, key, value))\n self.conf[key] = value\n\n\[email protected]_metaclass(Singleton)\nclass HydraConfig(DictConfig):\n def __init__(self):\n super(HydraConfig, self).__init__(content={})\n self.hydra = None\n\n def set_config(self, cfg):\n try:\n OmegaConf.set_readonly(self, False)\n self.hydra = copy.deepcopy(cfg.hydra)\n finally:\n OmegaConf.set_readonly(self, True)\n", "path": "hydra/plugins/common/utils.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\n\nimport copy\nimport logging\nimport os\nimport re\nimport sys\nfrom hydra._internal.pathlib import Path\nfrom time import strftime, localtime\n\nimport six\nfrom omegaconf import OmegaConf, DictConfig, ListConfig\n\n# pylint: disable=C0103\nlog = logging.getLogger(__name__)\n\n\ndef configure_log(log_config, verbose_config):\n assert isinstance(verbose_config, (bool, str, ListConfig))\n if log_config is not None:\n conf = log_config.to_container(resolve=True)\n logging.config.dictConfig(conf)\n else:\n # default logging to stdout\n root = logging.getLogger()\n root.setLevel(logging.INFO)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\n \"[%(asctime)s][%(name)s][%(levelname)s] - %(message)s\"\n )\n handler.setFormatter(formatter)\n root.addHandler(handler)\n if isinstance(verbose_config, bool):\n if verbose_config:\n logging.getLogger().setLevel(logging.DEBUG)\n else:\n if isinstance(verbose_config, str):\n verbose_config = OmegaConf.create([verbose_config])\n for logger in verbose_config:\n logging.getLogger(logger).setLevel(logging.DEBUG)\n\n\ndef _save_config(cfg, filename, output_dir):\n Path(str(output_dir)).mkdir(parents=True, exist_ok=True)\n with open(str(output_dir / filename), \"w\") as file:\n file.write(cfg.pretty())\n\n\ndef get_overrides_dirname(lst, exclude_keys=[]):\n lst = [x for x in lst if x not in exclude_keys]\n lst.sort()\n return re.sub(pattern=\"[=]\", repl=\"=\", string=\",\".join(lst))\n\n\ndef filter_overrides(overrides):\n \"\"\"\n :param overrides: overrides list\n :return: returning a new overrides list with all the keys starting with hydra. fitlered.\n \"\"\"\n return [x for x in overrides if not x.startswith(\"hydra.\")]\n\n\ndef run_job(config, task_function, job_dir_key, job_subdir_key):\n old_cwd = os.getcwd()\n working_dir = str(config.select(job_dir_key))\n if job_subdir_key is not None:\n # evaluate job_subdir_key lazily.\n # this is running on the client side in sweep and contains things such as job:id which\n # are only available there.\n subdir = str(config.select(job_subdir_key))\n working_dir = os.path.join(working_dir, subdir)\n try:\n ret = JobReturn()\n ret.working_dir = working_dir\n task_cfg = copy.deepcopy(config)\n # TODO: update this after https://github.com/omry/omegaconf/issues/42 is resolved\n hydra_cfg = OmegaConf.create({\"hydra\": task_cfg[\"hydra\"]})\n del task_cfg[\"hydra\"]\n ret.cfg = task_cfg\n ret.hydra_cfg = copy.deepcopy(HydraConfig())\n ret.overrides = config.hydra.overrides.task.to_container()\n # handle output directories here\n Path(str(working_dir)).mkdir(parents=True, exist_ok=True)\n os.chdir(working_dir)\n hydra_output = Path(hydra_cfg.hydra.output_subdir)\n\n configure_log(hydra_cfg.hydra.job_logging, hydra_cfg.hydra.verbose)\n _save_config(task_cfg, \"config.yaml\", hydra_output)\n _save_config(hydra_cfg, \"hydra.yaml\", hydra_output)\n _save_config(config.hydra.overrides.task, \"overrides.yaml\", hydra_output)\n ret.return_value = task_function(task_cfg)\n ret.task_name = JobRuntime().get(\"name\")\n return ret\n finally:\n os.chdir(old_cwd)\n\n\ndef get_valid_filename(s):\n s = str(s).strip().replace(\" \", \"_\")\n return re.sub(r\"(?u)[^-\\w.]\", \"\", s)\n\n\ndef setup_globals():\n try:\n OmegaConf.register_resolver(\n \"now\", lambda pattern: strftime(pattern, localtime())\n )\n\n def job_error(x):\n raise Exception(\n \"job:{} is no longer available. 
use hydra.job.{}\".format(x, x)\n )\n\n OmegaConf.register_resolver(\"job\", job_error)\n\n except AssertionError:\n # calling it again in no_workers mode will throw. safe to ignore.\n pass\n\n\nclass JobReturn:\n def __init__(self):\n self.overrides = None\n self.return_value = None\n self.cfg = None\n self.hydra_cfg = None\n self.working_dir = None\n self.task_name = None\n\n\nclass Singleton(type):\n _instances = {}\n\n def __call__(cls, *args, **kwargs):\n if cls not in cls._instances:\n cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)\n return cls._instances[cls]\n\n @staticmethod\n def get_state():\n return Singleton._instances\n\n @staticmethod\n def set_state(instances):\n Singleton._instances = instances\n\n\[email protected]_metaclass(Singleton)\nclass JobRuntime:\n def __init__(self):\n self.conf = OmegaConf.create()\n self.set(\"name\", \"UNKNOWN_NAME\")\n\n def get(self, key):\n ret = self.conf.select(key)\n if ret is None:\n raise KeyError(\"Key not found in {}: {}\".format(type(self).__name__, key))\n return ret\n\n def set(self, key, value):\n log.debug(\"Setting {}:{}={}\".format(type(self).__name__, key, value))\n self.conf[key] = value\n\n\[email protected]_metaclass(Singleton)\nclass HydraConfig(DictConfig):\n def __init__(self):\n super(HydraConfig, self).__init__(content={})\n self.hydra = None\n\n def set_config(self, cfg):\n try:\n OmegaConf.set_readonly(self, False)\n self.hydra = copy.deepcopy(cfg.hydra)\n finally:\n OmegaConf.set_readonly(self, True)\n", "path": "hydra/plugins/common/utils.py"}]}
| 1,951 | 454 |
gh_patches_debug_3144
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-2588
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stream in the context-manager form is not used in `ElementwiseKernel` or `ReductionKernel`
This is actually a bug reported back in #1695 that unfortunately went unnoticed.
In `examples/stream/map_reduce.py`, a list of streams was created for executing `cupy.matmul()` in parallel, which is backed by a `ReductionKernel` in this case: https://github.com/cupy/cupy/blob/1af22f57fda92ae35bde806d0c4d110faf4fed52/cupy/core/core.pyx#L2513-L2516
However, inspecting the implementation I found that `ReductionKernel` only accepts an explicit `stream` argument; it does not pick up any current stream: https://github.com/cupy/cupy/blob/32718607a7808ec6bc3a24cf9231a9351f8fc95e/cupy/core/reduction.pxi#L396
In other words, that example was misleading because those streams were not used at all and so all executions were serialized, as can be checked from nvprof + nvvp (see the circle in red):
<img width="972" alt="螢幕快照 2019-10-03 上午11 24 27" src="https://user-images.githubusercontent.com/5534781/66140715-978bf180-e5d0-11e9-8228-f613a3eba6fd.png">
The same bug also appears in `ElementwiseKernel`:
https://github.com/cupy/cupy/blob/1af22f57fda92ae35bde806d0c4d110faf4fed52/cupy/core/_kernel.pyx#L537
In my opinion, unlike `RawKernel` which is not used by any CuPy core functionalities, `ElementwiseKernel` and `ReductionKernel` should honor the current stream by checking the current stream pointer if no stream argument is explicitly given, since many CuPy functions like `cupy.matmul()` do not support passing in a stream. A similar approach is already adopted in the FFT module, see #2362.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/stream/map_reduce.py`
Content:
```
1 import cupy
2 import time
3
4 device = cupy.cuda.Device()
5 memory_pool = cupy.cuda.MemoryPool()
6 cupy.cuda.set_allocator(memory_pool.malloc)
7 rand = cupy.random.generator.RandomState(seed=1)
8
9 n = 10
10 zs = []
11 map_streams = []
12 stop_events = []
13 reduce_stream = cupy.cuda.stream.Stream()
14 for i in range(n):
15 map_streams.append(cupy.cuda.stream.Stream())
16
17 start_time = time.time()
18
19 # Map
20 for stream in map_streams:
21 with stream:
22 x = rand.normal(size=(1, 1024 * 256))
23 y = rand.normal(size=(1024 * 256, 1))
24 z = cupy.matmul(x, y)
25 zs.append(z)
26 stop_event = stream.record()
27 stop_events.append(stop_event)
28
29 # Block the `reduce_stream` until all events occur. This does not block host.
30 # This is not required when reduction is performed in the default (Stream.null)
31 # stream unless streams are created with `non_blocking=True` flag.
32 for i in range(n):
33 reduce_stream.wait_event(stop_events[i])
34
35 # Reduce
36 with reduce_stream:
37 z = sum(zs)
38
39 device.synchronize()
40 elapsed_time = time.time() - start_time
41 print('elapsed time', elapsed_time)
42 print('total bytes', memory_pool.total_bytes())
43
44 # Free all blocks in the memory pool of streams
45 for stream in map_streams:
46 memory_pool.free_all_blocks(stream=stream)
47 print('total bytes', memory_pool.total_bytes())
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/stream/map_reduce.py b/examples/stream/map_reduce.py
--- a/examples/stream/map_reduce.py
+++ b/examples/stream/map_reduce.py
@@ -19,8 +19,8 @@
# Map
for stream in map_streams:
with stream:
- x = rand.normal(size=(1, 1024 * 256))
- y = rand.normal(size=(1024 * 256, 1))
+ x = rand.normal(size=(1, 1024**2))
+ y = rand.normal(size=(1024**2, 1))
z = cupy.matmul(x, y)
zs.append(z)
stop_event = stream.record()
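
Note that this hunk only enlarges the per-stream workload in the example (presumably so each `matmul` is big enough for the overlap to show up in a profile); the kernel-side change that makes `ElementwiseKernel`/`ReductionKernel` pick up the current stream is not part of this diff. An illustrative check of which stream is current inside the `with` block, using API names from recent CuPy releases:

```python
import cupy

stream = cupy.cuda.Stream(non_blocking=True)
with stream:
    # Once kernels honour the current stream, work launched here runs on
    # `stream` instead of being serialized on the default (null) stream.
    assert cupy.cuda.get_current_stream().ptr == stream.ptr
    x = cupy.random.normal(size=(1, 1024**2))
    y = cupy.random.normal(size=(1024**2, 1))
    z = cupy.matmul(x, y)
stream.synchronize()
print(z.shape)  # (1, 1)
```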
|
{"golden_diff": "diff --git a/examples/stream/map_reduce.py b/examples/stream/map_reduce.py\n--- a/examples/stream/map_reduce.py\n+++ b/examples/stream/map_reduce.py\n@@ -19,8 +19,8 @@\n # Map\n for stream in map_streams:\n with stream:\n- x = rand.normal(size=(1, 1024 * 256))\n- y = rand.normal(size=(1024 * 256, 1))\n+ x = rand.normal(size=(1, 1024**2))\n+ y = rand.normal(size=(1024**2, 1))\n z = cupy.matmul(x, y)\n zs.append(z)\n stop_event = stream.record()\n", "issue": "Stream in the context-manager form is not used in `ElementwiseKernel` or `ReductionKernel`\nThis is actually a bug reported back in #1695 that unfortunately went unnoticed. \r\n\r\nIn `examples/stream/map_reduce.py`, a list of streams was created for executing `cupy.matmul()` in parallel, which is backed by a `ReductionKernel` in this case: https://github.com/cupy/cupy/blob/1af22f57fda92ae35bde806d0c4d110faf4fed52/cupy/core/core.pyx#L2513-L2516\r\nHowever, inspecting the implementation I found that `ReductionKernel` only accepts an explicit `stream` argument; it does not pick up any current stream: https://github.com/cupy/cupy/blob/32718607a7808ec6bc3a24cf9231a9351f8fc95e/cupy/core/reduction.pxi#L396\r\nIn other words, that example was misleading because those streams were not used at all and so all executions were serialized, as can be checked from nvprof + nvvp (see the circle in red):\r\n<img width=\"972\" alt=\"\u87a2\u5e55\u5feb\u7167 2019-10-03 \u4e0a\u534811 24 27\" src=\"https://user-images.githubusercontent.com/5534781/66140715-978bf180-e5d0-11e9-8228-f613a3eba6fd.png\">\r\n\r\nThe same bug also appears in `ElementwiseKernel`:\r\nhttps://github.com/cupy/cupy/blob/1af22f57fda92ae35bde806d0c4d110faf4fed52/cupy/core/_kernel.pyx#L537\r\n\r\nIn my opinion, unlike `RawKernel` which is not used by any CuPy core functionalities, `ElementwiseKernel` and `ReductionKernel` should honor the current stream by checking the current stream pointer if no stream argument is explicitly given, since many CuPy functions like `cupy.matmul()` do not support passing in a stream. A similar approach is already adopted in the FFT module, see #2362.\n", "before_files": [{"content": "import cupy\nimport time\n\ndevice = cupy.cuda.Device()\nmemory_pool = cupy.cuda.MemoryPool()\ncupy.cuda.set_allocator(memory_pool.malloc)\nrand = cupy.random.generator.RandomState(seed=1)\n\nn = 10\nzs = []\nmap_streams = []\nstop_events = []\nreduce_stream = cupy.cuda.stream.Stream()\nfor i in range(n):\n map_streams.append(cupy.cuda.stream.Stream())\n\nstart_time = time.time()\n\n# Map\nfor stream in map_streams:\n with stream:\n x = rand.normal(size=(1, 1024 * 256))\n y = rand.normal(size=(1024 * 256, 1))\n z = cupy.matmul(x, y)\n zs.append(z)\n stop_event = stream.record()\n stop_events.append(stop_event)\n\n# Block the `reduce_stream` until all events occur. 
This does not block host.\n# This is not required when reduction is performed in the default (Stream.null)\n# stream unless streams are created with `non_blocking=True` flag.\nfor i in range(n):\n reduce_stream.wait_event(stop_events[i])\n\n# Reduce\nwith reduce_stream:\n z = sum(zs)\n\ndevice.synchronize()\nelapsed_time = time.time() - start_time\nprint('elapsed time', elapsed_time)\nprint('total bytes', memory_pool.total_bytes())\n\n# Free all blocks in the memory pool of streams\nfor stream in map_streams:\n memory_pool.free_all_blocks(stream=stream)\nprint('total bytes', memory_pool.total_bytes())\n", "path": "examples/stream/map_reduce.py"}], "after_files": [{"content": "import cupy\nimport time\n\ndevice = cupy.cuda.Device()\nmemory_pool = cupy.cuda.MemoryPool()\ncupy.cuda.set_allocator(memory_pool.malloc)\nrand = cupy.random.generator.RandomState(seed=1)\n\nn = 10\nzs = []\nmap_streams = []\nstop_events = []\nreduce_stream = cupy.cuda.stream.Stream()\nfor i in range(n):\n map_streams.append(cupy.cuda.stream.Stream())\n\nstart_time = time.time()\n\n# Map\nfor stream in map_streams:\n with stream:\n x = rand.normal(size=(1, 1024**2))\n y = rand.normal(size=(1024**2, 1))\n z = cupy.matmul(x, y)\n zs.append(z)\n stop_event = stream.record()\n stop_events.append(stop_event)\n\n# Block the `reduce_stream` until all events occur. This does not block host.\n# This is not required when reduction is performed in the default (Stream.null)\n# stream unless streams are created with `non_blocking=True` flag.\nfor i in range(n):\n reduce_stream.wait_event(stop_events[i])\n\n# Reduce\nwith reduce_stream:\n z = sum(zs)\n\ndevice.synchronize()\nelapsed_time = time.time() - start_time\nprint('elapsed time', elapsed_time)\nprint('total bytes', memory_pool.total_bytes())\n\n# Free all blocks in the memory pool of streams\nfor stream in map_streams:\n memory_pool.free_all_blocks(stream=stream)\nprint('total bytes', memory_pool.total_bytes())\n", "path": "examples/stream/map_reduce.py"}]}
| 1,192 | 157 |
gh_patches_debug_38311
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-3991
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] resiprocate/1.12.0: Error in autotools configuration
There are several errors in the recipe, especially in the autotools configuration step.
https://github.com/conan-io/conan-center-index/blob/c68b60af3de84cf460c7512efce75290f1b395cb/recipes/resiprocate/all/conanfile.py#L50-L58
First, in lines 54~57, the values passed to each option are the inverse of what the consumer actually gave. For example, if `with_ssl` is true, the configure option is passed as `--with-ssl=no`.
Second, the project's configure script doesn't understand value assignments to the following options: ssl, mysql, postgresql. In other words, `--with-ssl=yes` and `--with-ssl=no` make no difference in the configuration step, and the library ends up built with the ssl feature on in both cases. You can check this either in the resiprocate project's [`configure.ac` file](https://github.com/resiprocate/resiprocate/blob/6b2756ba8516726cfb04e2b2fa5f4e3e67598a31/configure.ac#L113-L119) or in the `configure` script autogenerated from `configure.ac`, as shown below.
```
# Check whether --with-ssl was given.
if test "${with_ssl+set}" = set; then :
withval=$with_ssl;
cat >>confdefs.h <<_ACEOF
#define USE_SSL /**/
_ACEOF
LIBSSL_LIBADD="-lssl -lcrypto"
if true; then
USE_SSL_TRUE=
USE_SSL_FALSE='#'
else
USE_SSL_TRUE='#'
USE_SSL_FALSE=
fi
else
LIBSSL_LIBADD=""
fi
```
And lastly, the project requires pthread, but the recipe didn't include it. When I turn off the ssl, mysql and postgresql options, linking against the library fails with a missing-pthread error. I guess pthread happened to be a secondary dependency of the ssl, mysql or postgresql library, and that's why it worked when those options were on even though the recipe didn't explicitly include pthread. I suggest we add pthread to `cpp_info.system_libs`.
I'm really thankful to those who have worked on this package. I didn't expect it to ever be on conan, but now it's so nice and easy to use. Happy holidays guys!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/resiprocate/all/conanfile.py`
Content:
```
1 import os
2 from conans import ConanFile, AutoToolsBuildEnvironment, tools
3 from conans.errors import ConanInvalidConfiguration
4
5
6 required_conan_version = ">=1.29.1"
7
8 class ResiprocateConan(ConanFile):
9 name = "resiprocate"
10 description = "The project is dedicated to maintaining a complete, correct, and commercially usable implementation of SIP and a few related protocols. "
11 topics = ("sip", "voip", "communication", "signaling")
12 url = "https://github.com/conan-io/conan-center-index"
13 homepage = "http://www.resiprocate.org"
14 license = "VSL-1.0"
15 settings = "os", "compiler", "build_type", "arch"
16 options = {"fPIC": [True, False],
17 "shared": [True, False],
18 "with_ssl": [True, False],
19 "with_postgresql": [True, False],
20 "with_mysql": [True, False]}
21 default_options = {"fPIC": True,
22 "shared": False,
23 "with_ssl": True,
24 "with_postgresql": True,
25 "with_mysql": True}
26 _autotools = None
27
28 @property
29 def _source_subfolder(self):
30 return "source_subfolder"
31
32 def requirements(self):
33 if self.settings.os in ("Windows", "Macos"):
34 raise ConanInvalidConfiguration("reSIProcate is not support on {}.".format(self.settings.os))
35 if self.options.with_ssl:
36 self.requires("openssl/1.1.1h")
37 if self.options.with_postgresql:
38 self.requires("libpq/11.5")
39 if self.options.with_mysql:
40 self.requires("libmysqlclient/8.0.17")
41
42 def source(self):
43 tools.get(**self.conan_data["sources"][self.version])
44 os.rename("{}-{}".format(self.name, self.version), self._source_subfolder)
45
46 def _configure_autotools(self):
47 if self._autotools:
48 return self._autotools
49 self._autotools = AutoToolsBuildEnvironment(self)
50 yes_no = lambda v: "yes" if v else "no"
51 configure_args = [
52 "--enable-shared={}".format(yes_no(self.options.shared)),
53 "--enable-static={}".format(yes_no(not self.options.shared)),
54 "--with-ssl={}".format(yes_no(not self.options.with_ssl)),
55 "--with-mysql={}".format(yes_no(not self.options.with_mysql)),
56 "--with-postgresql={}".format(yes_no(not self.options.with_postgresql)),
57 "--with-pic={}".format(yes_no(not self.options.fPIC))
58 ]
59
60 self._autotools.configure(configure_dir=self._source_subfolder, args=configure_args)
61 return self._autotools
62
63 def build(self):
64 autotools = self._configure_autotools()
65 autotools.make()
66
67 def package(self):
68 self.copy("COPYING", src=self._source_subfolder, dst="licenses")
69 autotools = self._configure_autotools()
70 autotools.install()
71 tools.rmdir(os.path.join(os.path.join(self.package_folder, "share")))
72 tools.remove_files_by_mask(os.path.join(self.package_folder), "*.la")
73
74 def package_info(self):
75 self.cpp_info.libs = ["resip", "rutil", "dum", "resipares"]
76 bin_path = os.path.join(self.package_folder, "bin")
77 self.output.info("Appending PATH environment variable: {}".format(bin_path))
78 self.env_info.PATH.append(os.path.join(self.package_folder, "bin"))
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/resiprocate/all/conanfile.py b/recipes/resiprocate/all/conanfile.py
--- a/recipes/resiprocate/all/conanfile.py
+++ b/recipes/resiprocate/all/conanfile.py
@@ -29,13 +29,22 @@
def _source_subfolder(self):
return "source_subfolder"
- def requirements(self):
+ def config_options(self):
+ if self.settings.os == 'Windows':
+ del self.options.fPIC
+
+ def configure(self):
if self.settings.os in ("Windows", "Macos"):
- raise ConanInvalidConfiguration("reSIProcate is not support on {}.".format(self.settings.os))
+ # FIXME: Visual Studio project & Mac support seems available in resiprocate
+ raise ConanInvalidConfiguration("reSIProcate recipe does not currently support {}.".format(self.settings.os))
+ if self.options.shared:
+ del self.options.fPIC
+
+ def requirements(self):
if self.options.with_ssl:
- self.requires("openssl/1.1.1h")
+ self.requires("openssl/1.1.1i")
if self.options.with_postgresql:
- self.requires("libpq/11.5")
+ self.requires("libpq/11.9")
if self.options.with_mysql:
self.requires("libmysqlclient/8.0.17")
@@ -51,12 +60,17 @@
configure_args = [
"--enable-shared={}".format(yes_no(self.options.shared)),
"--enable-static={}".format(yes_no(not self.options.shared)),
- "--with-ssl={}".format(yes_no(not self.options.with_ssl)),
- "--with-mysql={}".format(yes_no(not self.options.with_mysql)),
- "--with-postgresql={}".format(yes_no(not self.options.with_postgresql)),
- "--with-pic={}".format(yes_no(not self.options.fPIC))
+ "--with-pic={}".format(yes_no(self.options.get_safe("fPIC", True)))
]
+ # These options do not support yes/no
+ if self.options.with_ssl:
+ configure_args.append("--with-ssl")
+ if self.options.with_mysql:
+ configure_args.append("--with-mysql")
+ if self.options.with_postgresql:
+ configure_args.append("--with-postgresql")
+
self._autotools.configure(configure_dir=self._source_subfolder, args=configure_args)
return self._autotools
@@ -73,6 +87,8 @@
def package_info(self):
self.cpp_info.libs = ["resip", "rutil", "dum", "resipares"]
+ if self.settings.os in ("Linux", "FreeBSD"):
+ self.cpp_info.system_libs = ["pthread"]
bin_path = os.path.join(self.package_folder, "bin")
self.output.info("Appending PATH environment variable: {}".format(bin_path))
self.env_info.PATH.append(os.path.join(self.package_folder, "bin"))
|
{"golden_diff": "diff --git a/recipes/resiprocate/all/conanfile.py b/recipes/resiprocate/all/conanfile.py\n--- a/recipes/resiprocate/all/conanfile.py\n+++ b/recipes/resiprocate/all/conanfile.py\n@@ -29,13 +29,22 @@\n def _source_subfolder(self):\n return \"source_subfolder\"\n \n- def requirements(self):\n+ def config_options(self):\n+ if self.settings.os == 'Windows':\n+ del self.options.fPIC\n+\n+ def configure(self):\n if self.settings.os in (\"Windows\", \"Macos\"):\n- raise ConanInvalidConfiguration(\"reSIProcate is not support on {}.\".format(self.settings.os))\n+ # FIXME: Visual Studio project & Mac support seems available in resiprocate\n+ raise ConanInvalidConfiguration(\"reSIProcate recipe does not currently support {}.\".format(self.settings.os))\n+ if self.options.shared:\n+ del self.options.fPIC\n+\n+ def requirements(self):\n if self.options.with_ssl:\n- self.requires(\"openssl/1.1.1h\")\n+ self.requires(\"openssl/1.1.1i\")\n if self.options.with_postgresql:\n- self.requires(\"libpq/11.5\")\n+ self.requires(\"libpq/11.9\")\n if self.options.with_mysql:\n self.requires(\"libmysqlclient/8.0.17\")\n \n@@ -51,12 +60,17 @@\n configure_args = [\n \"--enable-shared={}\".format(yes_no(self.options.shared)),\n \"--enable-static={}\".format(yes_no(not self.options.shared)),\n- \"--with-ssl={}\".format(yes_no(not self.options.with_ssl)),\n- \"--with-mysql={}\".format(yes_no(not self.options.with_mysql)),\n- \"--with-postgresql={}\".format(yes_no(not self.options.with_postgresql)),\n- \"--with-pic={}\".format(yes_no(not self.options.fPIC))\n+ \"--with-pic={}\".format(yes_no(self.options.get_safe(\"fPIC\", True)))\n ]\n \n+ # These options do not support yes/no\n+ if self.options.with_ssl:\n+ configure_args.append(\"--with-ssl\")\n+ if self.options.with_mysql:\n+ configure_args.append(\"--with-mysql\")\n+ if self.options.with_postgresql:\n+ configure_args.append(\"--with-postgresql\")\n+ \n self._autotools.configure(configure_dir=self._source_subfolder, args=configure_args)\n return self._autotools\n \n@@ -73,6 +87,8 @@\n \n def package_info(self):\n self.cpp_info.libs = [\"resip\", \"rutil\", \"dum\", \"resipares\"]\n+ if self.settings.os in (\"Linux\", \"FreeBSD\"):\n+ self.cpp_info.system_libs = [\"pthread\"]\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(os.path.join(self.package_folder, \"bin\"))\n", "issue": "[package] resiprocate/1.12.0: Error in autotools configuration\nThere are several errors in the recipe, especially in autotools configuration step.\r\n\r\nhttps://github.com/conan-io/conan-center-index/blob/c68b60af3de84cf460c7512efce75290f1b395cb/recipes/resiprocate/all/conanfile.py#L50-L58\r\n\r\nFirst, in the lines 54~57, values passed to each options are the inverse of what a consumer actually gave. For example, if `with_ssl` is true, then configuration option is passed as `--with-ssl=no`.\r\n\r\nSecond, the configure script of the project doesn't understand value assignments to the following options : ssl, mysql, postgresql. In other words, `--with-ssl=yes` and `--with-ssl=no` would make no difference in the configuration step and eventually the library is built with ssl feature on in both cases. You can check this out either from resiprocate project's [`configure.ac` file](https://github.com/resiprocate/resiprocate/blob/6b2756ba8516726cfb04e2b2fa5f4e3e67598a31/configure.ac#L113-L119), or from `configure` file autogenerated from `configure.ac` as below. 
\r\n\r\n```\r\n# Check whether --with-ssl was given.\r\nif test \"${with_ssl+set}\" = set; then :\r\n withval=$with_ssl;\r\ncat >>confdefs.h <<_ACEOF\r\n#define USE_SSL /**/\r\n_ACEOF\r\n\r\n LIBSSL_LIBADD=\"-lssl -lcrypto\"\r\n\r\n if true; then\r\n USE_SSL_TRUE=\r\n USE_SSL_FALSE='#'\r\nelse\r\n USE_SSL_TRUE='#'\r\n USE_SSL_FALSE=\r\nfi\r\n\r\nelse\r\n LIBSSL_LIBADD=\"\"\r\n\r\nfi\r\n```\r\n\r\nAnd lastly, the project requires pthread, and the recipe didn't include that. When I turn off ssl, mysql, postgresql options, linking the library causes pthread missing error. I guess pthread happened to be secondary dependency from ssl or mysql or postgresql library, and that's why it worked when the options were on even though the recipe didn't explicitly include pthread. I suggest we add pthread in `cpp_info.system_libs`\r\n\r\nI'm really thankful to those who had worked in this package. I didn't expect it to be on conan ever, but now it's so nice and easy to use. Happy holidays guys! \n", "before_files": [{"content": "import os\nfrom conans import ConanFile, AutoToolsBuildEnvironment, tools\nfrom conans.errors import ConanInvalidConfiguration\n\n\nrequired_conan_version = \">=1.29.1\"\n\nclass ResiprocateConan(ConanFile):\n name = \"resiprocate\"\n description = \"The project is dedicated to maintaining a complete, correct, and commercially usable implementation of SIP and a few related protocols. \"\n topics = (\"sip\", \"voip\", \"communication\", \"signaling\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"http://www.resiprocate.org\"\n license = \"VSL-1.0\"\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n options = {\"fPIC\": [True, False],\n \"shared\": [True, False],\n \"with_ssl\": [True, False],\n \"with_postgresql\": [True, False],\n \"with_mysql\": [True, False]}\n default_options = {\"fPIC\": True,\n \"shared\": False,\n \"with_ssl\": True,\n \"with_postgresql\": True,\n \"with_mysql\": True}\n _autotools = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def requirements(self):\n if self.settings.os in (\"Windows\", \"Macos\"):\n raise ConanInvalidConfiguration(\"reSIProcate is not support on {}.\".format(self.settings.os))\n if self.options.with_ssl:\n self.requires(\"openssl/1.1.1h\")\n if self.options.with_postgresql:\n self.requires(\"libpq/11.5\")\n if self.options.with_mysql:\n self.requires(\"libmysqlclient/8.0.17\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"{}-{}\".format(self.name, self.version), self._source_subfolder)\n\n def _configure_autotools(self):\n if self._autotools:\n return self._autotools\n self._autotools = AutoToolsBuildEnvironment(self)\n yes_no = lambda v: \"yes\" if v else \"no\"\n configure_args = [\n \"--enable-shared={}\".format(yes_no(self.options.shared)),\n \"--enable-static={}\".format(yes_no(not self.options.shared)),\n \"--with-ssl={}\".format(yes_no(not self.options.with_ssl)),\n \"--with-mysql={}\".format(yes_no(not self.options.with_mysql)),\n \"--with-postgresql={}\".format(yes_no(not self.options.with_postgresql)),\n \"--with-pic={}\".format(yes_no(not self.options.fPIC))\n ]\n\n self._autotools.configure(configure_dir=self._source_subfolder, args=configure_args)\n return self._autotools\n\n def build(self):\n autotools = self._configure_autotools()\n autotools.make()\n\n def package(self):\n self.copy(\"COPYING\", src=self._source_subfolder, dst=\"licenses\")\n autotools = self._configure_autotools()\n 
autotools.install()\n tools.rmdir(os.path.join(os.path.join(self.package_folder, \"share\")))\n tools.remove_files_by_mask(os.path.join(self.package_folder), \"*.la\")\n\n def package_info(self):\n self.cpp_info.libs = [\"resip\", \"rutil\", \"dum\", \"resipares\"]\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(os.path.join(self.package_folder, \"bin\"))\n", "path": "recipes/resiprocate/all/conanfile.py"}], "after_files": [{"content": "import os\nfrom conans import ConanFile, AutoToolsBuildEnvironment, tools\nfrom conans.errors import ConanInvalidConfiguration\n\n\nrequired_conan_version = \">=1.29.1\"\n\nclass ResiprocateConan(ConanFile):\n name = \"resiprocate\"\n description = \"The project is dedicated to maintaining a complete, correct, and commercially usable implementation of SIP and a few related protocols. \"\n topics = (\"sip\", \"voip\", \"communication\", \"signaling\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"http://www.resiprocate.org\"\n license = \"VSL-1.0\"\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n options = {\"fPIC\": [True, False],\n \"shared\": [True, False],\n \"with_ssl\": [True, False],\n \"with_postgresql\": [True, False],\n \"with_mysql\": [True, False]}\n default_options = {\"fPIC\": True,\n \"shared\": False,\n \"with_ssl\": True,\n \"with_postgresql\": True,\n \"with_mysql\": True}\n _autotools = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == 'Windows':\n del self.options.fPIC\n\n def configure(self):\n if self.settings.os in (\"Windows\", \"Macos\"):\n # FIXME: Visual Studio project & Mac support seems available in resiprocate\n raise ConanInvalidConfiguration(\"reSIProcate recipe does not currently support {}.\".format(self.settings.os))\n if self.options.shared:\n del self.options.fPIC\n\n def requirements(self):\n if self.options.with_ssl:\n self.requires(\"openssl/1.1.1i\")\n if self.options.with_postgresql:\n self.requires(\"libpq/11.9\")\n if self.options.with_mysql:\n self.requires(\"libmysqlclient/8.0.17\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"{}-{}\".format(self.name, self.version), self._source_subfolder)\n\n def _configure_autotools(self):\n if self._autotools:\n return self._autotools\n self._autotools = AutoToolsBuildEnvironment(self)\n yes_no = lambda v: \"yes\" if v else \"no\"\n configure_args = [\n \"--enable-shared={}\".format(yes_no(self.options.shared)),\n \"--enable-static={}\".format(yes_no(not self.options.shared)),\n \"--with-pic={}\".format(yes_no(self.options.get_safe(\"fPIC\", True)))\n ]\n\n # These options do not support yes/no\n if self.options.with_ssl:\n configure_args.append(\"--with-ssl\")\n if self.options.with_mysql:\n configure_args.append(\"--with-mysql\")\n if self.options.with_postgresql:\n configure_args.append(\"--with-postgresql\")\n \n self._autotools.configure(configure_dir=self._source_subfolder, args=configure_args)\n return self._autotools\n\n def build(self):\n autotools = self._configure_autotools()\n autotools.make()\n\n def package(self):\n self.copy(\"COPYING\", src=self._source_subfolder, dst=\"licenses\")\n autotools = self._configure_autotools()\n autotools.install()\n tools.rmdir(os.path.join(os.path.join(self.package_folder, \"share\")))\n 
tools.remove_files_by_mask(os.path.join(self.package_folder), \"*.la\")\n\n def package_info(self):\n self.cpp_info.libs = [\"resip\", \"rutil\", \"dum\", \"resipares\"]\n if self.settings.os in (\"Linux\", \"FreeBSD\"):\n self.cpp_info.system_libs = [\"pthread\"]\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(os.path.join(self.package_folder, \"bin\"))\n", "path": "recipes/resiprocate/all/conanfile.py"}]}
| 1,760 | 670 |
gh_patches_debug_19930
|
rasdani/github-patches
|
git_diff
|
matrix-org__synapse-11757
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sample doc in #11561 is malformatted
see https://github.com/matrix-org/synapse/pull/11561#pullrequestreview-851817482
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `synapse/config/api.py`
Content:
```
1 # Copyright 2015-2021 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16 from typing import Iterable
17
18 from synapse.api.constants import EventTypes
19 from synapse.config._base import Config, ConfigError
20 from synapse.config._util import validate_config
21 from synapse.types import JsonDict
22
23 logger = logging.getLogger(__name__)
24
25
26 class ApiConfig(Config):
27 section = "api"
28
29 def read_config(self, config: JsonDict, **kwargs):
30 validate_config(_MAIN_SCHEMA, config, ())
31 self.room_prejoin_state = list(self._get_prejoin_state_types(config))
32 self.track_puppeted_user_ips = config.get("track_puppeted_user_ips", False)
33
34 def generate_config_section(cls, **kwargs) -> str:
35 formatted_default_state_types = "\n".join(
36 " # - %s" % (t,) for t in _DEFAULT_PREJOIN_STATE_TYPES
37 )
38
39 return """\
40 ## API Configuration ##
41
42 # Controls for the state that is shared with users who receive an invite
43 # to a room
44 #
45 room_prejoin_state:
46 # By default, the following state event types are shared with users who
47 # receive invites to the room:
48 #
49 %(formatted_default_state_types)s
50 #
51 # Uncomment the following to disable these defaults (so that only the event
52 # types listed in 'additional_event_types' are shared). Defaults to 'false'.
53 #
54 #disable_default_event_types: true
55
56 # Additional state event types to share with users when they are invited
57 # to a room.
58 #
59 # By default, this list is empty (so only the default event types are shared).
60 #
61 #additional_event_types:
62 # - org.example.custom.event.type
63
64 # By default when puppeting another user, the user who has created the
65 # access token for puppeting is tracked. If this is enabled, both
66 # requests are tracked. Implicitly enables MAU tracking for puppeted users.
67 # Uncomment to also track puppeted user IP's.
68 #
69 #track_puppeted_user_ips: true
70 """ % {
71 "formatted_default_state_types": formatted_default_state_types
72 }
73
74 def _get_prejoin_state_types(self, config: JsonDict) -> Iterable[str]:
75 """Get the event types to include in the prejoin state
76
77 Parses the config and returns an iterable of the event types to be included.
78 """
79 room_prejoin_state_config = config.get("room_prejoin_state") or {}
80
81 # backwards-compatibility support for room_invite_state_types
82 if "room_invite_state_types" in config:
83 # if both "room_invite_state_types" and "room_prejoin_state" are set, then
84 # we don't really know what to do.
85 if room_prejoin_state_config:
86 raise ConfigError(
87 "Can't specify both 'room_invite_state_types' and 'room_prejoin_state' "
88 "in config"
89 )
90
91 logger.warning(_ROOM_INVITE_STATE_TYPES_WARNING)
92
93 yield from config["room_invite_state_types"]
94 return
95
96 if not room_prejoin_state_config.get("disable_default_event_types"):
97 yield from _DEFAULT_PREJOIN_STATE_TYPES
98
99 yield from room_prejoin_state_config.get("additional_event_types", [])
100
101
102 _ROOM_INVITE_STATE_TYPES_WARNING = """\
103 WARNING: The 'room_invite_state_types' configuration setting is now deprecated,
104 and replaced with 'room_prejoin_state'. New features may not work correctly
105 unless 'room_invite_state_types' is removed. See the sample configuration file for
106 details of 'room_prejoin_state'.
107 --------------------------------------------------------------------------------
108 """
109
110 _DEFAULT_PREJOIN_STATE_TYPES = [
111 EventTypes.JoinRules,
112 EventTypes.CanonicalAlias,
113 EventTypes.RoomAvatar,
114 EventTypes.RoomEncryption,
115 EventTypes.Name,
116 # Per MSC1772.
117 EventTypes.Create,
118 # Per MSC3173.
119 EventTypes.Topic,
120 ]
121
122
123 # room_prejoin_state can either be None (as it is in the default config), or
124 # an object containing other config settings
125 _ROOM_PREJOIN_STATE_CONFIG_SCHEMA = {
126 "oneOf": [
127 {
128 "type": "object",
129 "properties": {
130 "disable_default_event_types": {"type": "boolean"},
131 "additional_event_types": {
132 "type": "array",
133 "items": {"type": "string"},
134 },
135 },
136 },
137 {"type": "null"},
138 ]
139 }
140
141 # the legacy room_invite_state_types setting
142 _ROOM_INVITE_STATE_TYPES_SCHEMA = {"type": "array", "items": {"type": "string"}}
143
144 _MAIN_SCHEMA = {
145 "type": "object",
146 "properties": {
147 "room_prejoin_state": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA,
148 "room_invite_state_types": _ROOM_INVITE_STATE_TYPES_SCHEMA,
149 "track_puppeted_user_ips": {
150 "type": "boolean",
151 },
152 },
153 }
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/synapse/config/api.py b/synapse/config/api.py
--- a/synapse/config/api.py
+++ b/synapse/config/api.py
@@ -61,10 +61,18 @@
#additional_event_types:
# - org.example.custom.event.type
- # By default when puppeting another user, the user who has created the
- # access token for puppeting is tracked. If this is enabled, both
- # requests are tracked. Implicitly enables MAU tracking for puppeted users.
- # Uncomment to also track puppeted user IP's.
+ # We record the IP address of clients used to access the API for various
+ # reasons, including displaying it to the user in the "Where you're signed in"
+ # dialog.
+ #
+ # By default, when puppeting another user via the admin API, the client IP
+ # address is recorded against the user who created the access token (ie, the
+ # admin user), and *not* the puppeted user.
+ #
+ # Uncomment the following to also record the IP address against the puppeted
+ # user. (This also means that the puppeted user will count as an "active" user
+ # for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
+ # above.)
#
#track_puppeted_user_ips: true
""" % {
|
{"golden_diff": "diff --git a/synapse/config/api.py b/synapse/config/api.py\n--- a/synapse/config/api.py\n+++ b/synapse/config/api.py\n@@ -61,10 +61,18 @@\n #additional_event_types:\n # - org.example.custom.event.type\n \n- # By default when puppeting another user, the user who has created the\n- # access token for puppeting is tracked. If this is enabled, both\n- # requests are tracked. Implicitly enables MAU tracking for puppeted users.\n- # Uncomment to also track puppeted user IP's.\n+ # We record the IP address of clients used to access the API for various\n+ # reasons, including displaying it to the user in the \"Where you're signed in\"\n+ # dialog.\n+ #\n+ # By default, when puppeting another user via the admin API, the client IP\n+ # address is recorded against the user who created the access token (ie, the\n+ # admin user), and *not* the puppeted user.\n+ #\n+ # Uncomment the following to also record the IP address against the puppeted\n+ # user. (This also means that the puppeted user will count as an \"active\" user\n+ # for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc\n+ # above.)\n #\n #track_puppeted_user_ips: true\n \"\"\" % {\n", "issue": "sample doc in #11561 is malformatted\nsee https://github.com/matrix-org/synapse/pull/11561#pullrequestreview-851817482\n", "before_files": [{"content": "# Copyright 2015-2021 The Matrix.org Foundation C.I.C.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom typing import Iterable\n\nfrom synapse.api.constants import EventTypes\nfrom synapse.config._base import Config, ConfigError\nfrom synapse.config._util import validate_config\nfrom synapse.types import JsonDict\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApiConfig(Config):\n section = \"api\"\n\n def read_config(self, config: JsonDict, **kwargs):\n validate_config(_MAIN_SCHEMA, config, ())\n self.room_prejoin_state = list(self._get_prejoin_state_types(config))\n self.track_puppeted_user_ips = config.get(\"track_puppeted_user_ips\", False)\n\n def generate_config_section(cls, **kwargs) -> str:\n formatted_default_state_types = \"\\n\".join(\n \" # - %s\" % (t,) for t in _DEFAULT_PREJOIN_STATE_TYPES\n )\n\n return \"\"\"\\\n ## API Configuration ##\n\n # Controls for the state that is shared with users who receive an invite\n # to a room\n #\n room_prejoin_state:\n # By default, the following state event types are shared with users who\n # receive invites to the room:\n #\n%(formatted_default_state_types)s\n #\n # Uncomment the following to disable these defaults (so that only the event\n # types listed in 'additional_event_types' are shared). 
Defaults to 'false'.\n #\n #disable_default_event_types: true\n\n # Additional state event types to share with users when they are invited\n # to a room.\n #\n # By default, this list is empty (so only the default event types are shared).\n #\n #additional_event_types:\n # - org.example.custom.event.type\n\n # By default when puppeting another user, the user who has created the\n # access token for puppeting is tracked. If this is enabled, both\n # requests are tracked. Implicitly enables MAU tracking for puppeted users.\n # Uncomment to also track puppeted user IP's.\n #\n #track_puppeted_user_ips: true\n \"\"\" % {\n \"formatted_default_state_types\": formatted_default_state_types\n }\n\n def _get_prejoin_state_types(self, config: JsonDict) -> Iterable[str]:\n \"\"\"Get the event types to include in the prejoin state\n\n Parses the config and returns an iterable of the event types to be included.\n \"\"\"\n room_prejoin_state_config = config.get(\"room_prejoin_state\") or {}\n\n # backwards-compatibility support for room_invite_state_types\n if \"room_invite_state_types\" in config:\n # if both \"room_invite_state_types\" and \"room_prejoin_state\" are set, then\n # we don't really know what to do.\n if room_prejoin_state_config:\n raise ConfigError(\n \"Can't specify both 'room_invite_state_types' and 'room_prejoin_state' \"\n \"in config\"\n )\n\n logger.warning(_ROOM_INVITE_STATE_TYPES_WARNING)\n\n yield from config[\"room_invite_state_types\"]\n return\n\n if not room_prejoin_state_config.get(\"disable_default_event_types\"):\n yield from _DEFAULT_PREJOIN_STATE_TYPES\n\n yield from room_prejoin_state_config.get(\"additional_event_types\", [])\n\n\n_ROOM_INVITE_STATE_TYPES_WARNING = \"\"\"\\\nWARNING: The 'room_invite_state_types' configuration setting is now deprecated,\nand replaced with 'room_prejoin_state'. New features may not work correctly\nunless 'room_invite_state_types' is removed. 
See the sample configuration file for\ndetails of 'room_prejoin_state'.\n--------------------------------------------------------------------------------\n\"\"\"\n\n_DEFAULT_PREJOIN_STATE_TYPES = [\n EventTypes.JoinRules,\n EventTypes.CanonicalAlias,\n EventTypes.RoomAvatar,\n EventTypes.RoomEncryption,\n EventTypes.Name,\n # Per MSC1772.\n EventTypes.Create,\n # Per MSC3173.\n EventTypes.Topic,\n]\n\n\n# room_prejoin_state can either be None (as it is in the default config), or\n# an object containing other config settings\n_ROOM_PREJOIN_STATE_CONFIG_SCHEMA = {\n \"oneOf\": [\n {\n \"type\": \"object\",\n \"properties\": {\n \"disable_default_event_types\": {\"type\": \"boolean\"},\n \"additional_event_types\": {\n \"type\": \"array\",\n \"items\": {\"type\": \"string\"},\n },\n },\n },\n {\"type\": \"null\"},\n ]\n}\n\n# the legacy room_invite_state_types setting\n_ROOM_INVITE_STATE_TYPES_SCHEMA = {\"type\": \"array\", \"items\": {\"type\": \"string\"}}\n\n_MAIN_SCHEMA = {\n \"type\": \"object\",\n \"properties\": {\n \"room_prejoin_state\": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA,\n \"room_invite_state_types\": _ROOM_INVITE_STATE_TYPES_SCHEMA,\n \"track_puppeted_user_ips\": {\n \"type\": \"boolean\",\n },\n },\n}\n", "path": "synapse/config/api.py"}], "after_files": [{"content": "# Copyright 2015-2021 The Matrix.org Foundation C.I.C.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom typing import Iterable\n\nfrom synapse.api.constants import EventTypes\nfrom synapse.config._base import Config, ConfigError\nfrom synapse.config._util import validate_config\nfrom synapse.types import JsonDict\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApiConfig(Config):\n section = \"api\"\n\n def read_config(self, config: JsonDict, **kwargs):\n validate_config(_MAIN_SCHEMA, config, ())\n self.room_prejoin_state = list(self._get_prejoin_state_types(config))\n self.track_puppeted_user_ips = config.get(\"track_puppeted_user_ips\", False)\n\n def generate_config_section(cls, **kwargs) -> str:\n formatted_default_state_types = \"\\n\".join(\n \" # - %s\" % (t,) for t in _DEFAULT_PREJOIN_STATE_TYPES\n )\n\n return \"\"\"\\\n ## API Configuration ##\n\n # Controls for the state that is shared with users who receive an invite\n # to a room\n #\n room_prejoin_state:\n # By default, the following state event types are shared with users who\n # receive invites to the room:\n #\n%(formatted_default_state_types)s\n #\n # Uncomment the following to disable these defaults (so that only the event\n # types listed in 'additional_event_types' are shared). 
Defaults to 'false'.\n #\n #disable_default_event_types: true\n\n # Additional state event types to share with users when they are invited\n # to a room.\n #\n # By default, this list is empty (so only the default event types are shared).\n #\n #additional_event_types:\n # - org.example.custom.event.type\n\n # We record the IP address of clients used to access the API for various\n # reasons, including displaying it to the user in the \"Where you're signed in\"\n # dialog.\n #\n # By default, when puppeting another user via the admin API, the client IP\n # address is recorded against the user who created the access token (ie, the\n # admin user), and *not* the puppeted user.\n #\n # Uncomment the following to also record the IP address against the puppeted\n # user. (This also means that the puppeted user will count as an \"active\" user\n # for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc\n # above.)\n #\n #track_puppeted_user_ips: true\n \"\"\" % {\n \"formatted_default_state_types\": formatted_default_state_types\n }\n\n def _get_prejoin_state_types(self, config: JsonDict) -> Iterable[str]:\n \"\"\"Get the event types to include in the prejoin state\n\n Parses the config and returns an iterable of the event types to be included.\n \"\"\"\n room_prejoin_state_config = config.get(\"room_prejoin_state\") or {}\n\n # backwards-compatibility support for room_invite_state_types\n if \"room_invite_state_types\" in config:\n # if both \"room_invite_state_types\" and \"room_prejoin_state\" are set, then\n # we don't really know what to do.\n if room_prejoin_state_config:\n raise ConfigError(\n \"Can't specify both 'room_invite_state_types' and 'room_prejoin_state' \"\n \"in config\"\n )\n\n logger.warning(_ROOM_INVITE_STATE_TYPES_WARNING)\n\n yield from config[\"room_invite_state_types\"]\n return\n\n if not room_prejoin_state_config.get(\"disable_default_event_types\"):\n yield from _DEFAULT_PREJOIN_STATE_TYPES\n\n yield from room_prejoin_state_config.get(\"additional_event_types\", [])\n\n\n_ROOM_INVITE_STATE_TYPES_WARNING = \"\"\"\\\nWARNING: The 'room_invite_state_types' configuration setting is now deprecated,\nand replaced with 'room_prejoin_state'. New features may not work correctly\nunless 'room_invite_state_types' is removed. 
See the sample configuration file for\ndetails of 'room_prejoin_state'.\n--------------------------------------------------------------------------------\n\"\"\"\n\n_DEFAULT_PREJOIN_STATE_TYPES = [\n EventTypes.JoinRules,\n EventTypes.CanonicalAlias,\n EventTypes.RoomAvatar,\n EventTypes.RoomEncryption,\n EventTypes.Name,\n # Per MSC1772.\n EventTypes.Create,\n # Per MSC3173.\n EventTypes.Topic,\n]\n\n\n# room_prejoin_state can either be None (as it is in the default config), or\n# an object containing other config settings\n_ROOM_PREJOIN_STATE_CONFIG_SCHEMA = {\n \"oneOf\": [\n {\n \"type\": \"object\",\n \"properties\": {\n \"disable_default_event_types\": {\"type\": \"boolean\"},\n \"additional_event_types\": {\n \"type\": \"array\",\n \"items\": {\"type\": \"string\"},\n },\n },\n },\n {\"type\": \"null\"},\n ]\n}\n\n# the legacy room_invite_state_types setting\n_ROOM_INVITE_STATE_TYPES_SCHEMA = {\"type\": \"array\", \"items\": {\"type\": \"string\"}}\n\n_MAIN_SCHEMA = {\n \"type\": \"object\",\n \"properties\": {\n \"room_prejoin_state\": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA,\n \"room_invite_state_types\": _ROOM_INVITE_STATE_TYPES_SCHEMA,\n \"track_puppeted_user_ips\": {\n \"type\": \"boolean\",\n },\n },\n}\n", "path": "synapse/config/api.py"}]}
| 1,848 | 320 |
gh_patches_debug_4573
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-235
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Some versions of git don't create .git/hooks directory
Noticed here: https://github.com/victorlin/bugbuzz-python/pull/1#issuecomment-104971132
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/commands/install_uninstall.py`
Content:
```
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import io
5 import logging
6 import os
7 import os.path
8 import stat
9 import sys
10
11 from pre_commit.logging_handler import LoggingHandler
12 from pre_commit.util import resource_filename
13
14
15 logger = logging.getLogger('pre_commit')
16
17
18 # This is used to identify the hook file we install
19 PREVIOUS_IDENTIFYING_HASHES = (
20 '4d9958c90bc262f47553e2c073f14cfe',
21 'd8ee923c46731b42cd95cc869add4062',
22 '49fd668cb42069aa1b6048464be5d395',
23 )
24
25
26 IDENTIFYING_HASH = '79f09a650522a87b0da915d0d983b2de'
27
28
29 def is_our_pre_commit(filename):
30 return IDENTIFYING_HASH in io.open(filename).read()
31
32
33 def is_previous_pre_commit(filename):
34 contents = io.open(filename).read()
35 return any(hash in contents for hash in PREVIOUS_IDENTIFYING_HASHES)
36
37
38 def make_executable(filename):
39 original_mode = os.stat(filename).st_mode
40 os.chmod(
41 filename,
42 original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,
43 )
44
45
46 def install(runner, overwrite=False, hooks=False, hook_type='pre-commit'):
47 """Install the pre-commit hooks."""
48 hook_path = runner.get_hook_path(hook_type)
49 legacy_path = hook_path + '.legacy'
50
51 # If we have an existing hook, move it to pre-commit.legacy
52 if (
53 os.path.exists(hook_path) and
54 not is_our_pre_commit(hook_path) and
55 not is_previous_pre_commit(hook_path)
56 ):
57 os.rename(hook_path, legacy_path)
58
59 # If we specify overwrite, we simply delete the legacy file
60 if overwrite and os.path.exists(legacy_path):
61 os.remove(legacy_path)
62 elif os.path.exists(legacy_path):
63 print(
64 'Running in migration mode with existing hooks at {0}\n'
65 'Use -f to use only pre-commit.'.format(
66 legacy_path,
67 )
68 )
69
70 with io.open(hook_path, 'w') as pre_commit_file_obj:
71 if hook_type == 'pre-push':
72 with io.open(resource_filename('pre-push-tmpl')) as fp:
73 pre_push_contents = fp.read()
74 else:
75 pre_push_contents = ''
76
77 contents = io.open(resource_filename('hook-tmpl')).read().format(
78 sys_executable=sys.executable,
79 hook_type=hook_type,
80 pre_push=pre_push_contents,
81 )
82 pre_commit_file_obj.write(contents)
83 make_executable(hook_path)
84
85 print('pre-commit installed at {0}'.format(hook_path))
86
87 # If they requested we install all of the hooks, do so.
88 if hooks:
89 # Set up our logging handler
90 logger.addHandler(LoggingHandler(False))
91 logger.setLevel(logging.INFO)
92 for repository in runner.repositories:
93 repository.require_installed()
94
95 return 0
96
97
98 def uninstall(runner, hook_type='pre-commit'):
99 """Uninstall the pre-commit hooks."""
100 hook_path = runner.get_hook_path(hook_type)
101 legacy_path = hook_path + '.legacy'
102 # If our file doesn't exist or it isn't ours, gtfo.
103 if (
104 not os.path.exists(hook_path) or (
105 not is_our_pre_commit(hook_path) and
106 not is_previous_pre_commit(hook_path)
107 )
108 ):
109 return 0
110
111 os.remove(hook_path)
112 print('{0} uninstalled'.format(hook_type))
113
114 if os.path.exists(legacy_path):
115 os.rename(legacy_path, hook_path)
116 print('Restored previous hooks to {0}'.format(hook_path))
117
118 return 0
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -48,6 +48,9 @@
hook_path = runner.get_hook_path(hook_type)
legacy_path = hook_path + '.legacy'
+ if not os.path.exists(os.path.dirname(hook_path)):
+ os.makedirs(os.path.dirname(hook_path))
+
# If we have an existing hook, move it to pre-commit.legacy
if (
os.path.exists(hook_path) and
|
{"golden_diff": "diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py\n--- a/pre_commit/commands/install_uninstall.py\n+++ b/pre_commit/commands/install_uninstall.py\n@@ -48,6 +48,9 @@\n hook_path = runner.get_hook_path(hook_type)\n legacy_path = hook_path + '.legacy'\n \n+ if not os.path.exists(os.path.dirname(hook_path)):\n+ os.makedirs(os.path.dirname(hook_path))\n+\n # If we have an existing hook, move it to pre-commit.legacy\n if (\n os.path.exists(hook_path) and\n", "issue": "Some versions of git don't create .git/hooks directory\nNoticed here: https://github.com/victorlin/bugbuzz-python/pull/1#issuecomment-104971132\n\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport logging\nimport os\nimport os.path\nimport stat\nimport sys\n\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.util import resource_filename\n\n\nlogger = logging.getLogger('pre_commit')\n\n\n# This is used to identify the hook file we install\nPREVIOUS_IDENTIFYING_HASHES = (\n '4d9958c90bc262f47553e2c073f14cfe',\n 'd8ee923c46731b42cd95cc869add4062',\n '49fd668cb42069aa1b6048464be5d395',\n)\n\n\nIDENTIFYING_HASH = '79f09a650522a87b0da915d0d983b2de'\n\n\ndef is_our_pre_commit(filename):\n return IDENTIFYING_HASH in io.open(filename).read()\n\n\ndef is_previous_pre_commit(filename):\n contents = io.open(filename).read()\n return any(hash in contents for hash in PREVIOUS_IDENTIFYING_HASHES)\n\n\ndef make_executable(filename):\n original_mode = os.stat(filename).st_mode\n os.chmod(\n filename,\n original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,\n )\n\n\ndef install(runner, overwrite=False, hooks=False, hook_type='pre-commit'):\n \"\"\"Install the pre-commit hooks.\"\"\"\n hook_path = runner.get_hook_path(hook_type)\n legacy_path = hook_path + '.legacy'\n\n # If we have an existing hook, move it to pre-commit.legacy\n if (\n os.path.exists(hook_path) and\n not is_our_pre_commit(hook_path) and\n not is_previous_pre_commit(hook_path)\n ):\n os.rename(hook_path, legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n elif os.path.exists(legacy_path):\n print(\n 'Running in migration mode with existing hooks at {0}\\n'\n 'Use -f to use only pre-commit.'.format(\n legacy_path,\n )\n )\n\n with io.open(hook_path, 'w') as pre_commit_file_obj:\n if hook_type == 'pre-push':\n with io.open(resource_filename('pre-push-tmpl')) as fp:\n pre_push_contents = fp.read()\n else:\n pre_push_contents = ''\n\n contents = io.open(resource_filename('hook-tmpl')).read().format(\n sys_executable=sys.executable,\n hook_type=hook_type,\n pre_push=pre_push_contents,\n )\n pre_commit_file_obj.write(contents)\n make_executable(hook_path)\n\n print('pre-commit installed at {0}'.format(hook_path))\n\n # If they requested we install all of the hooks, do so.\n if hooks:\n # Set up our logging handler\n logger.addHandler(LoggingHandler(False))\n logger.setLevel(logging.INFO)\n for repository in runner.repositories:\n repository.require_installed()\n\n return 0\n\n\ndef uninstall(runner, hook_type='pre-commit'):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n hook_path = runner.get_hook_path(hook_type)\n legacy_path = hook_path + '.legacy'\n # If our file doesn't exist or it isn't ours, gtfo.\n if (\n not os.path.exists(hook_path) or (\n not is_our_pre_commit(hook_path) and\n not 
is_previous_pre_commit(hook_path)\n )\n ):\n return 0\n\n os.remove(hook_path)\n print('{0} uninstalled'.format(hook_type))\n\n if os.path.exists(legacy_path):\n os.rename(legacy_path, hook_path)\n print('Restored previous hooks to {0}'.format(hook_path))\n\n return 0\n", "path": "pre_commit/commands/install_uninstall.py"}], "after_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport logging\nimport os\nimport os.path\nimport stat\nimport sys\n\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.util import resource_filename\n\n\nlogger = logging.getLogger('pre_commit')\n\n\n# This is used to identify the hook file we install\nPREVIOUS_IDENTIFYING_HASHES = (\n '4d9958c90bc262f47553e2c073f14cfe',\n 'd8ee923c46731b42cd95cc869add4062',\n '49fd668cb42069aa1b6048464be5d395',\n)\n\n\nIDENTIFYING_HASH = '79f09a650522a87b0da915d0d983b2de'\n\n\ndef is_our_pre_commit(filename):\n return IDENTIFYING_HASH in io.open(filename).read()\n\n\ndef is_previous_pre_commit(filename):\n contents = io.open(filename).read()\n return any(hash in contents for hash in PREVIOUS_IDENTIFYING_HASHES)\n\n\ndef make_executable(filename):\n original_mode = os.stat(filename).st_mode\n os.chmod(\n filename,\n original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,\n )\n\n\ndef install(runner, overwrite=False, hooks=False, hook_type='pre-commit'):\n \"\"\"Install the pre-commit hooks.\"\"\"\n hook_path = runner.get_hook_path(hook_type)\n legacy_path = hook_path + '.legacy'\n\n if not os.path.exists(os.path.dirname(hook_path)):\n os.makedirs(os.path.dirname(hook_path))\n\n # If we have an existing hook, move it to pre-commit.legacy\n if (\n os.path.exists(hook_path) and\n not is_our_pre_commit(hook_path) and\n not is_previous_pre_commit(hook_path)\n ):\n os.rename(hook_path, legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n elif os.path.exists(legacy_path):\n print(\n 'Running in migration mode with existing hooks at {0}\\n'\n 'Use -f to use only pre-commit.'.format(\n legacy_path,\n )\n )\n\n with io.open(hook_path, 'w') as pre_commit_file_obj:\n if hook_type == 'pre-push':\n with io.open(resource_filename('pre-push-tmpl')) as fp:\n pre_push_contents = fp.read()\n else:\n pre_push_contents = ''\n\n contents = io.open(resource_filename('hook-tmpl')).read().format(\n sys_executable=sys.executable,\n hook_type=hook_type,\n pre_push=pre_push_contents,\n )\n pre_commit_file_obj.write(contents)\n make_executable(hook_path)\n\n print('pre-commit installed at {0}'.format(hook_path))\n\n # If they requested we install all of the hooks, do so.\n if hooks:\n # Set up our logging handler\n logger.addHandler(LoggingHandler(False))\n logger.setLevel(logging.INFO)\n for repository in runner.repositories:\n repository.require_installed()\n\n return 0\n\n\ndef uninstall(runner, hook_type='pre-commit'):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n hook_path = runner.get_hook_path(hook_type)\n legacy_path = hook_path + '.legacy'\n # If our file doesn't exist or it isn't ours, gtfo.\n if (\n not os.path.exists(hook_path) or (\n not is_our_pre_commit(hook_path) and\n not is_previous_pre_commit(hook_path)\n )\n ):\n return 0\n\n os.remove(hook_path)\n print('{0} uninstalled'.format(hook_type))\n\n if os.path.exists(legacy_path):\n os.rename(legacy_path, hook_path)\n print('Restored previous hooks to {0}'.format(hook_path))\n\n return 0\n", "path": 
"pre_commit/commands/install_uninstall.py"}]}
| 1,446 | 135 |
gh_patches_debug_33227
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-1910
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Should rio clip fill nodata at borders?
Hello lovely people :hugs:,
I'm using `rio clip` to quickly cut out z10 tiles from a raster image. The raster image is not evenly divisible by the z10 tiles, so I expect the border tiles to contain nodata. But more importantly, I expect all the z10 tiles I cut out to have the same dimensions. This is not the case with rio clip right now.
Here is an example where I first convert my raster image to WGS84 and then cut it into z10 tiles, naming each file after its z10 quadkey id.
```
rio warp in.tif wgs84.tif --dst-crs EPSG:4326
rio bounds wgs84.tif | mercantile tiles 10 > z10.txt
while read tile; do
rio clip wgs84.tif $(mercantile quadkey "$tile").tif --bounds "$(mercantile shapes --extents "$tile")"
done < z10.txt
```
This results in
- many raster images of pixel size 2857x1744
- some raster images (at the borders) with smaller dimensions; the bounds are not filled with nodata
Should rio clip use boundless windows and fill with nodata, to respect the user's bounds?
(rio 1.1.3, tested in an Ubuntu 18.04 Docker container with Python 3.6)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/clip.py`
Content:
```
1 """File translation command"""
2
3 import logging
4
5 import click
6 from cligj import format_opt
7
8 from .helpers import resolve_inout
9 from . import options
10 import rasterio
11 from rasterio.coords import disjoint_bounds
12 from rasterio.crs import CRS
13 from rasterio.windows import Window
14
15
16 logger = logging.getLogger(__name__)
17
18
19 # Geographic (default), projected, or Mercator switch.
20 projection_geographic_opt = click.option(
21 '--geographic',
22 'projection',
23 flag_value='geographic',
24 help="Bounds in geographic coordinates.")
25
26 projection_projected_opt = click.option(
27 '--projected',
28 'projection',
29 flag_value='projected',
30 default=True,
31 help="Bounds in input's own projected coordinates (the default).")
32
33
34 # Clip command
35 @click.command(short_help='Clip a raster to given bounds.')
36 @click.argument(
37 'files',
38 nargs=-1,
39 type=click.Path(),
40 required=True,
41 metavar="INPUT OUTPUT")
42 @options.output_opt
43 @options.bounds_opt
44 @click.option(
45 '--like',
46 type=click.Path(exists=True),
47 help='Raster dataset to use as a template for bounds')
48 @format_opt
49 @projection_geographic_opt
50 @projection_projected_opt
51 @options.overwrite_opt
52 @options.creation_options
53 @click.pass_context
54 def clip(ctx, files, output, bounds, like, driver, projection,
55 overwrite, creation_options):
56 """Clips a raster using projected or geographic bounds.
57
58 \b
59 $ rio clip input.tif output.tif --bounds xmin ymin xmax ymax
60 $ rio clip input.tif output.tif --like template.tif
61
62 The values of --bounds are presumed to be from the coordinate
63 reference system of the input dataset unless the --geographic option
64 is used, in which case the values may be longitude and latitude
65 bounds. Either JSON, for example "[west, south, east, north]", or
66 plain text "west south east north" representations of a bounding box
67 are acceptable.
68
69 If using --like, bounds will automatically be transformed to match the
70 coordinate reference system of the input.
71
72 It can also be combined to read bounds of a feature dataset using Fiona:
73
74 \b
75 $ rio clip input.tif output.tif --bounds $(fio info features.shp --bounds)
76
77 """
78 from rasterio.warp import transform_bounds
79
80 with ctx.obj['env']:
81
82 output, files = resolve_inout(files=files, output=output, overwrite=overwrite)
83 input = files[0]
84
85 with rasterio.open(input) as src:
86 if bounds:
87 if projection == 'geographic':
88 bounds = transform_bounds(CRS.from_epsg(4326), src.crs, *bounds)
89 if disjoint_bounds(bounds, src.bounds):
90 raise click.BadParameter('must overlap the extent of '
91 'the input raster',
92 param='--bounds',
93 param_hint='--bounds')
94 elif like:
95 with rasterio.open(like) as template_ds:
96 bounds = template_ds.bounds
97 if template_ds.crs != src.crs:
98 bounds = transform_bounds(template_ds.crs, src.crs,
99 *bounds)
100
101 if disjoint_bounds(bounds, src.bounds):
102 raise click.BadParameter('must overlap the extent of '
103 'the input raster',
104 param='--like',
105 param_hint='--like')
106
107 else:
108 raise click.UsageError('--bounds or --like required')
109
110 bounds_window = src.window(*bounds)
111 bounds_window = bounds_window.intersection(
112 Window(0, 0, src.width, src.height))
113
114 # Get the window with integer height
115 # and width that contains the bounds window.
116 out_window = bounds_window.round_lengths(op='ceil')
117
118 height = int(out_window.height)
119 width = int(out_window.width)
120
121 out_kwargs = src.profile
122 out_kwargs.update({
123 'driver': driver,
124 'height': height,
125 'width': width,
126 'transform': src.window_transform(out_window)})
127 out_kwargs.update(**creation_options)
128
129 if 'blockxsize' in out_kwargs and out_kwargs['blockxsize'] > width:
130 del out_kwargs['blockxsize']
131 logger.warning("Blockxsize removed from creation options to accomodate small output width")
132 if 'blockysize' in out_kwargs and out_kwargs['blockysize'] > height:
133 del out_kwargs['blockysize']
134 logger.warning("Blockysize removed from creation options to accomodate small output height")
135
136 with rasterio.open(output, 'w', **out_kwargs) as out:
137 out.write(src.read(window=out_window,
138 out_shape=(src.count, height, width)))
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rasterio/rio/clip.py b/rasterio/rio/clip.py
--- a/rasterio/rio/clip.py
+++ b/rasterio/rio/clip.py
@@ -50,9 +50,24 @@
@projection_projected_opt
@options.overwrite_opt
@options.creation_options
[email protected](
+ "--with-complement/--without-complement",
+ default=False,
+ help="Include the relative complement of the raster in the given bounds (giving a larger result), else return results only from the intersection of the raster and the bounds (the default).",
+)
@click.pass_context
-def clip(ctx, files, output, bounds, like, driver, projection,
- overwrite, creation_options):
+def clip(
+ ctx,
+ files,
+ output,
+ bounds,
+ like,
+ driver,
+ projection,
+ overwrite,
+ creation_options,
+ with_complement,
+):
"""Clips a raster using projected or geographic bounds.
\b
@@ -108,8 +123,11 @@
raise click.UsageError('--bounds or --like required')
bounds_window = src.window(*bounds)
- bounds_window = bounds_window.intersection(
- Window(0, 0, src.width, src.height))
+
+ if not with_complement:
+ bounds_window = bounds_window.intersection(
+ Window(0, 0, src.width, src.height)
+ )
# Get the window with integer height
# and width that contains the bounds window.
@@ -133,6 +151,11 @@
del out_kwargs['blockysize']
logger.warning("Blockysize removed from creation options to accomodate small output height")
- with rasterio.open(output, 'w', **out_kwargs) as out:
- out.write(src.read(window=out_window,
- out_shape=(src.count, height, width)))
+ with rasterio.open(output, "w", **out_kwargs) as out:
+ out.write(
+ src.read(
+ window=out_window,
+ out_shape=(src.count, height, width),
+ boundless=True,
+ )
+ )
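As a side note on the mechanics: `boundless=True` is what guarantees every clipped tile keeps the requested shape, because pixels that fall outside the source raster are filled rather than dropped. A minimal, self-contained sketch of that behaviour (the file name, window offsets, and nodata handling below are placeholders, not taken from the patch) might look like:

```python
import rasterio
from rasterio.windows import Window

# Hypothetical GeoTIFF; any dataset with a nodata value will do.
with rasterio.open("wgs84.tif") as src:
    # A window hanging 100 pixels past the right and bottom edges of the raster.
    window = Window(src.width - 100, src.height - 100, 200, 200)

    # boundless=True keeps the 200x200 shape and pads the out-of-raster part
    # with fill_value, which is how border tiles end up nodata-filled instead
    # of being silently shrunk.
    data = src.read(window=window, boundless=True, fill_value=src.nodata)
    print(data.shape)  # (src.count, 200, 200)
```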
|
{"golden_diff": "diff --git a/rasterio/rio/clip.py b/rasterio/rio/clip.py\n--- a/rasterio/rio/clip.py\n+++ b/rasterio/rio/clip.py\n@@ -50,9 +50,24 @@\n @projection_projected_opt\n @options.overwrite_opt\n @options.creation_options\[email protected](\n+ \"--with-complement/--without-complement\",\n+ default=False,\n+ help=\"Include the relative complement of the raster in the given bounds (giving a larger result), else return results only from the intersection of the raster and the bounds (the default).\",\n+)\n @click.pass_context\n-def clip(ctx, files, output, bounds, like, driver, projection,\n- overwrite, creation_options):\n+def clip(\n+ ctx,\n+ files,\n+ output,\n+ bounds,\n+ like,\n+ driver,\n+ projection,\n+ overwrite,\n+ creation_options,\n+ with_complement,\n+):\n \"\"\"Clips a raster using projected or geographic bounds.\n \n \\b\n@@ -108,8 +123,11 @@\n raise click.UsageError('--bounds or --like required')\n \n bounds_window = src.window(*bounds)\n- bounds_window = bounds_window.intersection(\n- Window(0, 0, src.width, src.height))\n+\n+ if not with_complement:\n+ bounds_window = bounds_window.intersection(\n+ Window(0, 0, src.width, src.height)\n+ )\n \n # Get the window with integer height\n # and width that contains the bounds window.\n@@ -133,6 +151,11 @@\n del out_kwargs['blockysize']\n logger.warning(\"Blockysize removed from creation options to accomodate small output height\")\n \n- with rasterio.open(output, 'w', **out_kwargs) as out:\n- out.write(src.read(window=out_window,\n- out_shape=(src.count, height, width)))\n+ with rasterio.open(output, \"w\", **out_kwargs) as out:\n+ out.write(\n+ src.read(\n+ window=out_window,\n+ out_shape=(src.count, height, width),\n+ boundless=True,\n+ )\n+ )\n", "issue": "Should rio clip fill nodata at borders?\nHello lovely people :hugs:,\r\n\r\nI'm using `rio clip` to quickly cut out z10 tiles for raster image. The raster image is not divisible by the z10 tiles, so I expect the border tiles to contain nodata. But more importantly I expect all z10 tiles I cut out to be of the same dimension. This is not the case with rio clip right now.\r\n\r\nHere is an example where I first convert my raster image to WGS84 and then cut it into z10 tiles where the file name is the z10 quad key id.\r\n\r\n```\r\nrio warp in.tif wgs84.tif --dst-crs EPSG:4326\r\nrio bounds wgs84.tif | mercantile tiles 10 > z10.txt\r\n \r\nwhile read tile; do\r\n rio clip wgs84.tif $(mercantile quadkey \"$tile\").tif --bounds \"$(mercantile shapes --extents \"$tile\")\"\r\ndone < z10.txt\r\n```\r\n\r\nThis results in\r\n- many raster images of pixel size 2857x1744\r\n- some raster images (at the borders) with smaller dimensions; the bounds are not filled with nodata\r\n\r\nShould rio clip use boundless windows and fill with nodata, to respect the user's bounds?\r\n\r\n(rio 1.1.3, tested in ubuntu 18.04 docker container with python 3.6)\n", "before_files": [{"content": "\"\"\"File translation command\"\"\"\n\nimport logging\n\nimport click\nfrom cligj import format_opt\n\nfrom .helpers import resolve_inout\nfrom . 
import options\nimport rasterio\nfrom rasterio.coords import disjoint_bounds\nfrom rasterio.crs import CRS\nfrom rasterio.windows import Window\n\n\nlogger = logging.getLogger(__name__)\n\n\n# Geographic (default), projected, or Mercator switch.\nprojection_geographic_opt = click.option(\n '--geographic',\n 'projection',\n flag_value='geographic',\n help=\"Bounds in geographic coordinates.\")\n\nprojection_projected_opt = click.option(\n '--projected',\n 'projection',\n flag_value='projected',\n default=True,\n help=\"Bounds in input's own projected coordinates (the default).\")\n\n\n# Clip command\[email protected](short_help='Clip a raster to given bounds.')\[email protected](\n 'files',\n nargs=-1,\n type=click.Path(),\n required=True,\n metavar=\"INPUT OUTPUT\")\[email protected]_opt\[email protected]_opt\[email protected](\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for bounds')\n@format_opt\n@projection_geographic_opt\n@projection_projected_opt\[email protected]_opt\[email protected]_options\[email protected]_context\ndef clip(ctx, files, output, bounds, like, driver, projection,\n overwrite, creation_options):\n \"\"\"Clips a raster using projected or geographic bounds.\n\n \\b\n $ rio clip input.tif output.tif --bounds xmin ymin xmax ymax\n $ rio clip input.tif output.tif --like template.tif\n\n The values of --bounds are presumed to be from the coordinate\n reference system of the input dataset unless the --geographic option\n is used, in which case the values may be longitude and latitude\n bounds. Either JSON, for example \"[west, south, east, north]\", or\n plain text \"west south east north\" representations of a bounding box\n are acceptable.\n\n If using --like, bounds will automatically be transformed to match the\n coordinate reference system of the input.\n\n It can also be combined to read bounds of a feature dataset using Fiona:\n\n \\b\n $ rio clip input.tif output.tif --bounds $(fio info features.shp --bounds)\n\n \"\"\"\n from rasterio.warp import transform_bounds\n\n with ctx.obj['env']:\n\n output, files = resolve_inout(files=files, output=output, overwrite=overwrite)\n input = files[0]\n\n with rasterio.open(input) as src:\n if bounds:\n if projection == 'geographic':\n bounds = transform_bounds(CRS.from_epsg(4326), src.crs, *bounds)\n if disjoint_bounds(bounds, src.bounds):\n raise click.BadParameter('must overlap the extent of '\n 'the input raster',\n param='--bounds',\n param_hint='--bounds')\n elif like:\n with rasterio.open(like) as template_ds:\n bounds = template_ds.bounds\n if template_ds.crs != src.crs:\n bounds = transform_bounds(template_ds.crs, src.crs,\n *bounds)\n\n if disjoint_bounds(bounds, src.bounds):\n raise click.BadParameter('must overlap the extent of '\n 'the input raster',\n param='--like',\n param_hint='--like')\n\n else:\n raise click.UsageError('--bounds or --like required')\n\n bounds_window = src.window(*bounds)\n bounds_window = bounds_window.intersection(\n Window(0, 0, src.width, src.height))\n\n # Get the window with integer height\n # and width that contains the bounds window.\n out_window = bounds_window.round_lengths(op='ceil')\n\n height = int(out_window.height)\n width = int(out_window.width)\n\n out_kwargs = src.profile\n out_kwargs.update({\n 'driver': driver,\n 'height': height,\n 'width': width,\n 'transform': src.window_transform(out_window)})\n out_kwargs.update(**creation_options)\n\n if 'blockxsize' in out_kwargs and out_kwargs['blockxsize'] > width:\n del 
out_kwargs['blockxsize']\n logger.warning(\"Blockxsize removed from creation options to accomodate small output width\")\n if 'blockysize' in out_kwargs and out_kwargs['blockysize'] > height:\n del out_kwargs['blockysize']\n logger.warning(\"Blockysize removed from creation options to accomodate small output height\")\n\n with rasterio.open(output, 'w', **out_kwargs) as out:\n out.write(src.read(window=out_window,\n out_shape=(src.count, height, width)))\n", "path": "rasterio/rio/clip.py"}], "after_files": [{"content": "\"\"\"File translation command\"\"\"\n\nimport logging\n\nimport click\nfrom cligj import format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.coords import disjoint_bounds\nfrom rasterio.crs import CRS\nfrom rasterio.windows import Window\n\n\nlogger = logging.getLogger(__name__)\n\n\n# Geographic (default), projected, or Mercator switch.\nprojection_geographic_opt = click.option(\n '--geographic',\n 'projection',\n flag_value='geographic',\n help=\"Bounds in geographic coordinates.\")\n\nprojection_projected_opt = click.option(\n '--projected',\n 'projection',\n flag_value='projected',\n default=True,\n help=\"Bounds in input's own projected coordinates (the default).\")\n\n\n# Clip command\[email protected](short_help='Clip a raster to given bounds.')\[email protected](\n 'files',\n nargs=-1,\n type=click.Path(),\n required=True,\n metavar=\"INPUT OUTPUT\")\[email protected]_opt\[email protected]_opt\[email protected](\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for bounds')\n@format_opt\n@projection_geographic_opt\n@projection_projected_opt\[email protected]_opt\[email protected]_options\[email protected](\n \"--with-complement/--without-complement\",\n default=False,\n help=\"Include the relative complement of the raster in the given bounds (giving a larger result), else return results only from the intersection of the raster and the bounds (the default).\",\n)\[email protected]_context\ndef clip(\n ctx,\n files,\n output,\n bounds,\n like,\n driver,\n projection,\n overwrite,\n creation_options,\n with_complement,\n):\n \"\"\"Clips a raster using projected or geographic bounds.\n\n \\b\n $ rio clip input.tif output.tif --bounds xmin ymin xmax ymax\n $ rio clip input.tif output.tif --like template.tif\n\n The values of --bounds are presumed to be from the coordinate\n reference system of the input dataset unless the --geographic option\n is used, in which case the values may be longitude and latitude\n bounds. 
Either JSON, for example \"[west, south, east, north]\", or\n plain text \"west south east north\" representations of a bounding box\n are acceptable.\n\n If using --like, bounds will automatically be transformed to match the\n coordinate reference system of the input.\n\n It can also be combined to read bounds of a feature dataset using Fiona:\n\n \\b\n $ rio clip input.tif output.tif --bounds $(fio info features.shp --bounds)\n\n \"\"\"\n from rasterio.warp import transform_bounds\n\n with ctx.obj['env']:\n\n output, files = resolve_inout(files=files, output=output, overwrite=overwrite)\n input = files[0]\n\n with rasterio.open(input) as src:\n if bounds:\n if projection == 'geographic':\n bounds = transform_bounds(CRS.from_epsg(4326), src.crs, *bounds)\n if disjoint_bounds(bounds, src.bounds):\n raise click.BadParameter('must overlap the extent of '\n 'the input raster',\n param='--bounds',\n param_hint='--bounds')\n elif like:\n with rasterio.open(like) as template_ds:\n bounds = template_ds.bounds\n if template_ds.crs != src.crs:\n bounds = transform_bounds(template_ds.crs, src.crs,\n *bounds)\n\n if disjoint_bounds(bounds, src.bounds):\n raise click.BadParameter('must overlap the extent of '\n 'the input raster',\n param='--like',\n param_hint='--like')\n\n else:\n raise click.UsageError('--bounds or --like required')\n\n bounds_window = src.window(*bounds)\n\n if not with_complement:\n bounds_window = bounds_window.intersection(\n Window(0, 0, src.width, src.height)\n )\n\n # Get the window with integer height\n # and width that contains the bounds window.\n out_window = bounds_window.round_lengths(op='ceil')\n\n height = int(out_window.height)\n width = int(out_window.width)\n\n out_kwargs = src.profile\n out_kwargs.update({\n 'driver': driver,\n 'height': height,\n 'width': width,\n 'transform': src.window_transform(out_window)})\n out_kwargs.update(**creation_options)\n\n if 'blockxsize' in out_kwargs and out_kwargs['blockxsize'] > width:\n del out_kwargs['blockxsize']\n logger.warning(\"Blockxsize removed from creation options to accomodate small output width\")\n if 'blockysize' in out_kwargs and out_kwargs['blockysize'] > height:\n del out_kwargs['blockysize']\n logger.warning(\"Blockysize removed from creation options to accomodate small output height\")\n\n with rasterio.open(output, \"w\", **out_kwargs) as out:\n out.write(\n src.read(\n window=out_window,\n out_shape=(src.count, height, width),\n boundless=True,\n )\n )\n", "path": "rasterio/rio/clip.py"}]}
| 1,884 | 487 |
gh_patches_debug_2733
|
rasdani/github-patches
|
git_diff
|
GeotrekCE__Geotrek-admin-805
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ADMIN - Path segment (tronçon) looping back on itself
It is impossible to enter the CIRCUIT DES LACS correctly.
Saving often returns a 504 BAD GATEWAY. The route has nevertheless been modified, but differently from the way it was entered. Needs further investigation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/core/forms.py`
Content:
```
1 from django.utils.translation import ugettext_lazy as _
2
3 import floppyforms as forms
4
5 from geotrek.common.forms import CommonForm
6 from .models import Path
7 from .helpers import PathHelper
8 from .fields import TopologyField, SnappedLineStringField
9
10
11 class TopologyForm(CommonForm):
12 """
13 This form is a bit specific :
14
15 We use a field (topology) in order to edit the whole instance.
16 Thus, at init, we load the instance into field, and at save, we
17 save the field into the instance.
18
19 The geom field is fully ignored, since we edit a topology.
20 """
21 topology = TopologyField(label="")
22
23 def __init__(self, *args, **kwargs):
24 super(TopologyForm, self).__init__(*args, **kwargs)
25 if self.instance and self.instance.pk:
26 self.fields['topology'].initial = self.instance
27
28 def clean(self, *args, **kwargs):
29 data = super(TopologyForm, self).clean()
30 # geom is computed at db-level and never edited
31 if 'geom' in self.errors:
32 del self.errors['geom']
33 return data
34
35 def save(self, *args, **kwargs):
36 topology = self.cleaned_data.pop('topology')
37 instance = super(TopologyForm, self).save(*args, **kwargs)
38 instance.mutate(topology)
39 return instance
40
41 geomfields = ['topology']
42
43 class Meta(CommonForm.Meta):
44 fields = CommonForm.Meta.fields + ['topology']
45
46 MEDIA_JS = ("core/dijkstra.js",
47 "core/leaflet-geomutils.js",
48 "core/multipath.js",
49 "core/topology_helper.js") + CommonForm.MEDIA_JS
50
51
52 class PathForm(CommonForm):
53 geom = SnappedLineStringField()
54
55 reverse_geom = forms.BooleanField(required=False,
56 label=_("Reverse path"),
57 help_text=_("The path will be reversed once saved"))
58
59 geomfields = ['geom']
60
61 class Meta(CommonForm.Meta):
62 model = Path
63 fields = CommonForm.Meta.fields + \
64 ['structure',
65 'name', 'stake', 'comfort', 'trail', 'departure', 'arrival', 'comments',
66 'datasource', 'networks', 'usages', 'valid', 'reverse_geom', 'geom']
67
68 def __init__(self, *args, **kwargs):
69 super(PathForm, self).__init__(*args, **kwargs)
70 self.fields['geom'].label = ''
71
72 def clean_geom(self):
73 geom = self.cleaned_data['geom']
74 if geom is None:
75 raise forms.ValidationError(_("Invalid snapped geometry."))
76 if not geom.simple:
77 raise forms.ValidationError(_("Geometry is not simple."))
78 if not PathHelper.disjoint(geom, self.cleaned_data.get('pk') or -1):
79 raise forms.ValidationError(_("Geometry overlaps another."))
80 return geom
81
82 def save(self, commit=True):
83 path = super(PathForm, self).save(commit=False)
84
85 if self.cleaned_data.get('reverse_geom'):
86 path.reverse()
87
88 if commit:
89 path.save()
90 self.save_m2m()
91
92 return path
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/geotrek/core/forms.py b/geotrek/core/forms.py
--- a/geotrek/core/forms.py
+++ b/geotrek/core/forms.py
@@ -44,7 +44,6 @@
fields = CommonForm.Meta.fields + ['topology']
MEDIA_JS = ("core/dijkstra.js",
- "core/leaflet-geomutils.js",
"core/multipath.js",
"core/topology_helper.js") + CommonForm.MEDIA_JS
|
{"golden_diff": "diff --git a/geotrek/core/forms.py b/geotrek/core/forms.py\n--- a/geotrek/core/forms.py\n+++ b/geotrek/core/forms.py\n@@ -44,7 +44,6 @@\n fields = CommonForm.Meta.fields + ['topology']\n \n MEDIA_JS = (\"core/dijkstra.js\",\n- \"core/leaflet-geomutils.js\",\n \"core/multipath.js\",\n \"core/topology_helper.js\") + CommonForm.MEDIA_JS\n", "issue": "ADMIN - Tron\u00e7on bouclant sur lui-m\u00eame\nImpossible de saisir le CIRCUIT DES LACS correctement.\nRenvoie souvent une 504 BAD GATEWAY quand on enregistre. L'itin\u00e9raire a pourtant \u00e9t\u00e9 modifi\u00e9 mais diff\u00e9remment de la fa\u00e7on dont il a \u00e9t\u00e9 saisi. A creuser.\n\n", "before_files": [{"content": "from django.utils.translation import ugettext_lazy as _\n\nimport floppyforms as forms\n\nfrom geotrek.common.forms import CommonForm\nfrom .models import Path\nfrom .helpers import PathHelper\nfrom .fields import TopologyField, SnappedLineStringField\n\n\nclass TopologyForm(CommonForm):\n \"\"\"\n This form is a bit specific :\n\n We use a field (topology) in order to edit the whole instance.\n Thus, at init, we load the instance into field, and at save, we\n save the field into the instance.\n\n The geom field is fully ignored, since we edit a topology.\n \"\"\"\n topology = TopologyField(label=\"\")\n\n def __init__(self, *args, **kwargs):\n super(TopologyForm, self).__init__(*args, **kwargs)\n if self.instance and self.instance.pk:\n self.fields['topology'].initial = self.instance\n\n def clean(self, *args, **kwargs):\n data = super(TopologyForm, self).clean()\n # geom is computed at db-level and never edited\n if 'geom' in self.errors:\n del self.errors['geom']\n return data\n\n def save(self, *args, **kwargs):\n topology = self.cleaned_data.pop('topology')\n instance = super(TopologyForm, self).save(*args, **kwargs)\n instance.mutate(topology)\n return instance\n\n geomfields = ['topology']\n\n class Meta(CommonForm.Meta):\n fields = CommonForm.Meta.fields + ['topology']\n\n MEDIA_JS = (\"core/dijkstra.js\",\n \"core/leaflet-geomutils.js\",\n \"core/multipath.js\",\n \"core/topology_helper.js\") + CommonForm.MEDIA_JS\n\n\nclass PathForm(CommonForm):\n geom = SnappedLineStringField()\n\n reverse_geom = forms.BooleanField(required=False,\n label=_(\"Reverse path\"),\n help_text=_(\"The path will be reversed once saved\"))\n\n geomfields = ['geom']\n\n class Meta(CommonForm.Meta):\n model = Path\n fields = CommonForm.Meta.fields + \\\n ['structure',\n 'name', 'stake', 'comfort', 'trail', 'departure', 'arrival', 'comments',\n 'datasource', 'networks', 'usages', 'valid', 'reverse_geom', 'geom']\n\n def __init__(self, *args, **kwargs):\n super(PathForm, self).__init__(*args, **kwargs)\n self.fields['geom'].label = ''\n\n def clean_geom(self):\n geom = self.cleaned_data['geom']\n if geom is None:\n raise forms.ValidationError(_(\"Invalid snapped geometry.\"))\n if not geom.simple:\n raise forms.ValidationError(_(\"Geometry is not simple.\"))\n if not PathHelper.disjoint(geom, self.cleaned_data.get('pk') or -1):\n raise forms.ValidationError(_(\"Geometry overlaps another.\"))\n return geom\n\n def save(self, commit=True):\n path = super(PathForm, self).save(commit=False)\n\n if self.cleaned_data.get('reverse_geom'):\n path.reverse()\n\n if commit:\n path.save()\n self.save_m2m()\n\n return path\n", "path": "geotrek/core/forms.py"}], "after_files": [{"content": "from django.utils.translation import ugettext_lazy as _\n\nimport floppyforms as forms\n\nfrom geotrek.common.forms import CommonForm\nfrom 
.models import Path\nfrom .helpers import PathHelper\nfrom .fields import TopologyField, SnappedLineStringField\n\n\nclass TopologyForm(CommonForm):\n \"\"\"\n This form is a bit specific :\n\n We use a field (topology) in order to edit the whole instance.\n Thus, at init, we load the instance into field, and at save, we\n save the field into the instance.\n\n The geom field is fully ignored, since we edit a topology.\n \"\"\"\n topology = TopologyField(label=\"\")\n\n def __init__(self, *args, **kwargs):\n super(TopologyForm, self).__init__(*args, **kwargs)\n if self.instance and self.instance.pk:\n self.fields['topology'].initial = self.instance\n\n def clean(self, *args, **kwargs):\n data = super(TopologyForm, self).clean()\n # geom is computed at db-level and never edited\n if 'geom' in self.errors:\n del self.errors['geom']\n return data\n\n def save(self, *args, **kwargs):\n topology = self.cleaned_data.pop('topology')\n instance = super(TopologyForm, self).save(*args, **kwargs)\n instance.mutate(topology)\n return instance\n\n geomfields = ['topology']\n\n class Meta(CommonForm.Meta):\n fields = CommonForm.Meta.fields + ['topology']\n\n MEDIA_JS = (\"core/dijkstra.js\",\n \"core/multipath.js\",\n \"core/topology_helper.js\") + CommonForm.MEDIA_JS\n\n\nclass PathForm(CommonForm):\n geom = SnappedLineStringField()\n\n reverse_geom = forms.BooleanField(required=False,\n label=_(\"Reverse path\"),\n help_text=_(\"The path will be reversed once saved\"))\n\n geomfields = ['geom']\n\n class Meta(CommonForm.Meta):\n model = Path\n fields = CommonForm.Meta.fields + \\\n ['structure',\n 'name', 'stake', 'comfort', 'trail', 'departure', 'arrival', 'comments',\n 'datasource', 'networks', 'usages', 'valid', 'reverse_geom', 'geom']\n\n def __init__(self, *args, **kwargs):\n super(PathForm, self).__init__(*args, **kwargs)\n self.fields['geom'].label = ''\n\n def clean_geom(self):\n geom = self.cleaned_data['geom']\n if geom is None:\n raise forms.ValidationError(_(\"Invalid snapped geometry.\"))\n if not geom.simple:\n raise forms.ValidationError(_(\"Geometry is not simple.\"))\n if not PathHelper.disjoint(geom, self.cleaned_data.get('pk') or -1):\n raise forms.ValidationError(_(\"Geometry overlaps another.\"))\n return geom\n\n def save(self, commit=True):\n path = super(PathForm, self).save(commit=False)\n\n if self.cleaned_data.get('reverse_geom'):\n path.reverse()\n\n if commit:\n path.save()\n self.save_m2m()\n\n return path\n", "path": "geotrek/core/forms.py"}]}
| 1,194 | 104 |
gh_patches_debug_943
|
rasdani/github-patches
|
git_diff
|
microsoft__ptvsd-259
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Debugging of modules using -m is broken
We used to be able to debug Flask and other code using the `-m` flag.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/debugger.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7 # import the wrapper first, so that it gets a chance
8 # to detour pydevd socket functionality.
9 import ptvsd.wrapper
10
11
12 __author__ = "Microsoft Corporation <[email protected]>"
13 __version__ = "4.0.0a4"
14
15 DONT_DEBUG = []
16
17
18 def debug(filename, port_num, debug_id, debug_options, run_as):
19 # TODO: docstring
20 address = (None, port_num)
21 if run_as == 'module':
22 _run_module(address, filename)
23 else:
24 _run_file(address, filename)
25
26
27 def _run_module(address, modname):
28 filename = modname + ':'
29 argv = _run_argv(address, filename)
30 argv.append('--module')
31 _run(argv)
32
33
34 def _run_file(address, filename):
35 argv = _run_argv(address, filename)
36 _run(argv)
37
38
39 def _run_argv(address, filename):
40 host, port = address
41 if host is None:
42 host = '127.0.0.1'
43 return [
44 '--port', str(port),
45 '--client', host,
46 '--file', filename,
47 ]
48
49
50 def _run(argv):
51 import pydevd
52 sys.argv[1:0] = argv
53 try:
54 pydevd.main()
55 except SystemExit as ex:
56 ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)
57 raise
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py
--- a/ptvsd/debugger.py
+++ b/ptvsd/debugger.py
@@ -27,7 +27,7 @@
def _run_module(address, modname):
filename = modname + ':'
argv = _run_argv(address, filename)
- argv.append('--module')
+ argv.insert(argv.index('--file'), '--module')
_run(argv)
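To see why the one-line change matters, here is a small pure-Python sketch of the argv that `_run_argv` plus the patched insert builds (the module name and port are placeholders). Appending `--module` after `--file`, as the old code did, apparently leaves the flag among the target program's own arguments, since pydevd treats everything after `--file` as belonging to the program being debugged.

```python
def build_argv(host: str, port: int, modname: str) -> list:
    # Mirrors _run_argv() followed by the patched line.
    argv = ['--port', str(port), '--client', host, '--file', modname + ':']
    argv.insert(argv.index('--file'), '--module')
    return argv

print(build_argv('127.0.0.1', 5678, 'flask'))
# ['--port', '5678', '--client', '127.0.0.1', '--module', '--file', 'flask:']
```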
|
{"golden_diff": "diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py\n--- a/ptvsd/debugger.py\n+++ b/ptvsd/debugger.py\n@@ -27,7 +27,7 @@\n def _run_module(address, modname):\n filename = modname + ':'\n argv = _run_argv(address, filename)\n- argv.append('--module')\n+ argv.insert(argv.index('--file'), '--module')\n _run(argv)\n", "issue": "Debugging of modules using -m is broken\nUsed to be able to debug Flask & other code using the `-m` flag.\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n# import the wrapper first, so that it gets a chance\n# to detour pydevd socket functionality.\nimport ptvsd.wrapper\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a4\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n address = (None, port_num)\n if run_as == 'module':\n _run_module(address, filename)\n else:\n _run_file(address, filename)\n\n\ndef _run_module(address, modname):\n filename = modname + ':'\n argv = _run_argv(address, filename)\n argv.append('--module')\n _run(argv)\n\n\ndef _run_file(address, filename):\n argv = _run_argv(address, filename)\n _run(argv)\n\n\ndef _run_argv(address, filename):\n host, port = address\n if host is None:\n host = '127.0.0.1'\n return [\n '--port', str(port),\n '--client', host,\n '--file', filename,\n ]\n\n\ndef _run(argv):\n import pydevd\n sys.argv[1:0] = argv\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)\n raise\n", "path": "ptvsd/debugger.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n# import the wrapper first, so that it gets a chance\n# to detour pydevd socket functionality.\nimport ptvsd.wrapper\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a3\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n address = (None, port_num)\n if run_as == 'module':\n _run_module(address, filename)\n else:\n _run_file(address, filename)\n\n\ndef _run_module(address, modname):\n filename = modname + ':'\n argv = _run_argv(address, filename)\n argv.insert(argv.index('--file'), '--module')\n _run(argv)\n\n\ndef _run_file(address, filename):\n argv = _run_argv(address, filename)\n _run(argv)\n\n\ndef _run_argv(address, filename):\n host, port = address\n if host is None:\n host = '127.0.0.1'\n return [\n '--port', str(port),\n '--client', host,\n '--file', filename,\n ]\n\n\ndef _run(argv):\n import pydevd\n sys.argv[1:0] = argv\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)\n raise\n", "path": "ptvsd/debugger.py"}]}
| 744 | 103 |
gh_patches_debug_8163
|
rasdani/github-patches
|
git_diff
|
microsoft__ptvsd-84
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Debugging modules (using -m switch)
This is currently not supported
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/debugger.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7
8 __author__ = "Microsoft Corporation <[email protected]>"
9 __version__ = "4.0.0a1"
10
11 DONT_DEBUG = []
12
13
14 def debug(filename, port_num, debug_id, debug_options, run_as):
15 # TODO: docstring
16
17 # import the wrapper first, so that it gets a chance
18 # to detour pydevd socket functionality.
19 import ptvsd.wrapper
20 import pydevd
21
22 sys.argv[1:0] = [
23 '--port', str(port_num),
24 '--client', '127.0.0.1',
25 '--file', filename,
26 ]
27 try:
28 pydevd.main()
29 except SystemExit as ex:
30 ptvsd.wrapper.ptvsd_sys_exit_code = ex.code
31 raise
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py
--- a/ptvsd/debugger.py
+++ b/ptvsd/debugger.py
@@ -19,11 +19,16 @@
import ptvsd.wrapper
import pydevd
- sys.argv[1:0] = [
+ args = [
'--port', str(port_num),
'--client', '127.0.0.1',
- '--file', filename,
]
+ if run_as == 'module':
+ args.append('--module')
+ args.extend(('--file', filename + ":"))
+ else:
+ args.extend(('--file', filename))
+ sys.argv[1:0] = args
try:
pydevd.main()
except SystemExit as ex:
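A usage sketch of the patched entry point follows; the port, debug id, and options are arbitrary values, not taken from the issue. With `run_as='module'` the generated args now end in `['--module', '--file', 'flask:']`, which appears to be the shape pydevd expects for `-m`-style targets.

```python
import ptvsd.debugger as debugger

# Equivalent in spirit to debugging `python -m flask`.
debugger.debug('flask', 5678, debug_id='1', debug_options='', run_as='module')
```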
|
{"golden_diff": "diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py\n--- a/ptvsd/debugger.py\n+++ b/ptvsd/debugger.py\n@@ -19,11 +19,16 @@\n import ptvsd.wrapper\n import pydevd\n \n- sys.argv[1:0] = [\n+ args = [\n '--port', str(port_num),\n '--client', '127.0.0.1',\n- '--file', filename,\n ]\n+ if run_as == 'module':\n+ args.append('--module')\n+ args.extend(('--file', filename + \":\"))\n+ else:\n+ args.extend(('--file', filename))\n+ sys.argv[1:0] = args\n try:\n pydevd.main()\n except SystemExit as ex:\n", "issue": "Debugging modules (using -m switch)\nThis is currently not supported\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a1\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n\n # import the wrapper first, so that it gets a chance\n # to detour pydevd socket functionality.\n import ptvsd.wrapper\n import pydevd\n\n sys.argv[1:0] = [\n '--port', str(port_num),\n '--client', '127.0.0.1',\n '--file', filename,\n ]\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = ex.code\n raise\n", "path": "ptvsd/debugger.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a1\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n\n # import the wrapper first, so that it gets a chance\n # to detour pydevd socket functionality.\n import ptvsd.wrapper\n import pydevd\n\n args = [\n '--port', str(port_num),\n '--client', '127.0.0.1',\n ]\n if run_as == 'module':\n args.append('--module')\n args.extend(('--file', filename + \":\"))\n else:\n args.extend(('--file', filename))\n sys.argv[1:0] = args\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = ex.code\n raise\n", "path": "ptvsd/debugger.py"}]}
| 542 | 185 |
gh_patches_debug_16822
|
rasdani/github-patches
|
git_diff
|
tough-dev-school__education-backend-560
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User.DoesNotExist: User matching query does not exist.
Sentry Issue: [EDUCATION-BACKEND-23](https://sentry.io/organizations/f213/issues/2200858697/?referrer=github_integration)
```
User.DoesNotExist: User matching query does not exist.
File "celery/app/autoretry.py", line 34, in run
return task._orig_run(*args, **kwargs)
File "app/tasks.py", line 77, in subscribe_to_mailchimp
user=apps.get_model('users.User').objects.get(pk=user_id),
File "django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/app/tasks.py`
Content:
```
1 from typing import List, Union
2
3 from anymail.exceptions import AnymailRequestsAPIError
4 from django.apps import apps
5 from django.conf import settings
6 from requests.exceptions import RequestException
7
8 from app.celery import celery
9 from app.integrations import tg
10 from app.integrations.clickmeeting import ClickMeetingClient, ClickMeetingHTTPException
11 from app.integrations.mailchimp import AppMailchimp, MailchimpException
12 from app.integrations.zoomus import ZoomusClient, ZoomusHTTPException
13 from app.mail.owl import TemplOwl
14
15
16 @celery.task(
17 autoretry_for=[AnymailRequestsAPIError],
18 retry_kwargs={
19 'max_retries': 10,
20 'countdown': 5,
21 },
22 )
23 def send_mail(to: Union[List, str], template_id, subject: str = '', ctx: dict = None, disable_antispam=False):
24 TemplOwl(
25 to=to,
26 template_id=template_id,
27 subject=subject,
28 ctx=ctx,
29 disable_antispam=disable_antispam,
30 ).send()
31
32
33 @celery.task(
34 autoretry_for=[RequestException, ClickMeetingHTTPException],
35 retry_kwargs={
36 'max_retries': 10,
37 'countdown': 5,
38 },
39 )
40 def invite_to_clickmeeting(room_url: str, email: str):
41 client = ClickMeetingClient()
42 client.invite(room_url, email)
43
44
45 @celery.task(
46 autoretry_for=[RequestException, ZoomusHTTPException],
47 retry_kwargs={
48 'max_retries': 10,
49 'countdown': 5,
50 },
51 )
52 def invite_to_zoomus(webinar_id: str, user_id: int):
53 user = apps.get_model('users.User').objects.get(pk=user_id)
54
55 client = ZoomusClient()
56 client.invite(webinar_id, user)
57
58
59 @celery.task(
60 autoretry_for=[RequestException, MailchimpException],
61 retry_kwargs={
62 'max_retries': 10,
63 'countdown': 5,
64 },
65 )
66 def subscribe_to_mailchimp(user_id: int, list_id=None, tags=None):
67 if list_id is None:
68 list_id = settings.MAILCHIMP_CONTACT_LIST_ID
69
70 if not list_id:
71 return
72
73 mailchimp = AppMailchimp()
74
75 mailchimp.subscribe_django_user(
76 list_id=list_id,
77 user=apps.get_model('users.User').objects.get(pk=user_id),
78 tags=tags,
79 )
80
81
82 @celery.task
83 def send_happiness_message(text):
84 tg.send_happiness_message(text)
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/app/tasks.py b/src/app/tasks.py
--- a/src/app/tasks.py
+++ b/src/app/tasks.py
@@ -3,6 +3,7 @@
from anymail.exceptions import AnymailRequestsAPIError
from django.apps import apps
from django.conf import settings
+from django.core.exceptions import ObjectDoesNotExist
from requests.exceptions import RequestException
from app.celery import celery
@@ -57,11 +58,12 @@
@celery.task(
- autoretry_for=[RequestException, MailchimpException],
+ autoretry_for=[RequestException, MailchimpException, ObjectDoesNotExist],
retry_kwargs={
'max_retries': 10,
'countdown': 5,
},
+ rate_limit='1/s',
)
def subscribe_to_mailchimp(user_id: int, list_id=None, tags=None):
if list_id is None:
|
{"golden_diff": "diff --git a/src/app/tasks.py b/src/app/tasks.py\n--- a/src/app/tasks.py\n+++ b/src/app/tasks.py\n@@ -3,6 +3,7 @@\n from anymail.exceptions import AnymailRequestsAPIError\n from django.apps import apps\n from django.conf import settings\n+from django.core.exceptions import ObjectDoesNotExist\n from requests.exceptions import RequestException\n \n from app.celery import celery\n@@ -57,11 +58,12 @@\n \n \n @celery.task(\n- autoretry_for=[RequestException, MailchimpException],\n+ autoretry_for=[RequestException, MailchimpException, ObjectDoesNotExist],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n+ rate_limit='1/s',\n )\n def subscribe_to_mailchimp(user_id: int, list_id=None, tags=None):\n if list_id is None:\n", "issue": "User.DoesNotExist: User matching query does not exist.\nSentry Issue: [EDUCATION-BACKEND-23](https://sentry.io/organizations/f213/issues/2200858697/?referrer=github_integration)\n\n```\nUser.DoesNotExist: User matching query does not exist.\n File \"celery/app/autoretry.py\", line 34, in run\n return task._orig_run(*args, **kwargs)\n File \"app/tasks.py\", line 77, in subscribe_to_mailchimp\n user=apps.get_model('users.User').objects.get(pk=user_id),\n File \"django/db/models/manager.py\", line 85, in manager_method\n return getattr(self.get_queryset(), name)(*args, **kwargs)\n File \"django/db/models/query.py\", line 435, in get\n raise self.model.DoesNotExist(\n```\n", "before_files": [{"content": "from typing import List, Union\n\nfrom anymail.exceptions import AnymailRequestsAPIError\nfrom django.apps import apps\nfrom django.conf import settings\nfrom requests.exceptions import RequestException\n\nfrom app.celery import celery\nfrom app.integrations import tg\nfrom app.integrations.clickmeeting import ClickMeetingClient, ClickMeetingHTTPException\nfrom app.integrations.mailchimp import AppMailchimp, MailchimpException\nfrom app.integrations.zoomus import ZoomusClient, ZoomusHTTPException\nfrom app.mail.owl import TemplOwl\n\n\[email protected](\n autoretry_for=[AnymailRequestsAPIError],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef send_mail(to: Union[List, str], template_id, subject: str = '', ctx: dict = None, disable_antispam=False):\n TemplOwl(\n to=to,\n template_id=template_id,\n subject=subject,\n ctx=ctx,\n disable_antispam=disable_antispam,\n ).send()\n\n\[email protected](\n autoretry_for=[RequestException, ClickMeetingHTTPException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef invite_to_clickmeeting(room_url: str, email: str):\n client = ClickMeetingClient()\n client.invite(room_url, email)\n\n\[email protected](\n autoretry_for=[RequestException, ZoomusHTTPException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef invite_to_zoomus(webinar_id: str, user_id: int):\n user = apps.get_model('users.User').objects.get(pk=user_id)\n\n client = ZoomusClient()\n client.invite(webinar_id, user)\n\n\[email protected](\n autoretry_for=[RequestException, MailchimpException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef subscribe_to_mailchimp(user_id: int, list_id=None, tags=None):\n if list_id is None:\n list_id = settings.MAILCHIMP_CONTACT_LIST_ID\n\n if not list_id:\n return\n\n mailchimp = AppMailchimp()\n\n mailchimp.subscribe_django_user(\n list_id=list_id,\n user=apps.get_model('users.User').objects.get(pk=user_id),\n tags=tags,\n )\n\n\[email protected]\ndef send_happiness_message(text):\n tg.send_happiness_message(text)\n", "path": 
"src/app/tasks.py"}], "after_files": [{"content": "from typing import List, Union\n\nfrom anymail.exceptions import AnymailRequestsAPIError\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom requests.exceptions import RequestException\n\nfrom app.celery import celery\nfrom app.integrations import tg\nfrom app.integrations.clickmeeting import ClickMeetingClient, ClickMeetingHTTPException\nfrom app.integrations.mailchimp import AppMailchimp, MailchimpException\nfrom app.integrations.zoomus import ZoomusClient, ZoomusHTTPException\nfrom app.mail.owl import TemplOwl\n\n\[email protected](\n autoretry_for=[AnymailRequestsAPIError],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef send_mail(to: Union[List, str], template_id, subject: str = '', ctx: dict = None, disable_antispam=False):\n TemplOwl(\n to=to,\n template_id=template_id,\n subject=subject,\n ctx=ctx,\n disable_antispam=disable_antispam,\n ).send()\n\n\[email protected](\n autoretry_for=[RequestException, ClickMeetingHTTPException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef invite_to_clickmeeting(room_url: str, email: str):\n client = ClickMeetingClient()\n client.invite(room_url, email)\n\n\[email protected](\n autoretry_for=[RequestException, ZoomusHTTPException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef invite_to_zoomus(webinar_id: str, user_id: int):\n user = apps.get_model('users.User').objects.get(pk=user_id)\n\n client = ZoomusClient()\n client.invite(webinar_id, user)\n\n\[email protected](\n autoretry_for=[RequestException, MailchimpException, ObjectDoesNotExist],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n rate_limit='1/s',\n)\ndef subscribe_to_mailchimp(user_id: int, list_id=None, tags=None):\n if list_id is None:\n list_id = settings.MAILCHIMP_CONTACT_LIST_ID\n\n if not list_id:\n return\n\n mailchimp = AppMailchimp()\n\n mailchimp.subscribe_django_user(\n list_id=list_id,\n user=apps.get_model('users.User').objects.get(pk=user_id),\n tags=tags,\n )\n\n\[email protected]\ndef send_happiness_message(text):\n tg.send_happiness_message(text)\n", "path": "src/app/tasks.py"}]}
| 1,175 | 196 |
gh_patches_debug_26342
|
rasdani/github-patches
|
git_diff
|
zalando__patroni-1535
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
features/steps/standby_cluster.py produces an error under Windows
Trying to execute:
`>behave -i standby_cluster.feature`
will produce an error:
```
2020-05-07 19:27:19,407 ERROR: Failed to execute ['c:userspashaappdatalocalprogramspythonpython37-32python.exe', 'features/callback.py', '5362', 'on_role_change', 'master', 'batman']
Traceback (most recent call last):
File "C:\Users\pasha\Code\zalando-patroni\patroni\postgresql\cancellable.py", line 28, in _start_process
self._process = psutil.Popen(cmd, *args, **kwargs)
File "c:\users\pasha\appdata\local\programs\python\python37-32\lib\site-packages\psutil\__init__.py", line 1431, in __init__
self.__subproc = subprocess.Popen(*args, **kwargs)
File "c:\users\pasha\appdata\local\programs\python\python37-32\lib\subprocess.py", line 775, in __init__
restore_signals, start_new_session)
File "c:\users\pasha\appdata\local\programs\python\python37-32\lib\subprocess.py", line 1178, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
```
This error is produced by `postgresql/cancellable.py`:
```
def _start_process(self, cmd, *args, **kwargs):
"""This method must be executed only when the `_lock` is acquired"""
try:
self._process_children = []
self._process_cmd = cmd
self._process = psutil.Popen(cmd, *args, **kwargs)
^^^^^^^^^^^^^^^^^
except Exception:
return logger.exception('Failed to execute %s', cmd)
return True
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `features/steps/standby_cluster.py`
Content:
```
1 import os
2 import sys
3 import time
4
5 from behave import step
6
7
8 select_replication_query = """
9 SELECT * FROM pg_catalog.pg_stat_replication
10 WHERE application_name = '{0}'
11 """
12
13 callback = sys.executable + " features/callback2.py "
14
15
16 @step('I start {name:w} with callback configured')
17 def start_patroni_with_callbacks(context, name):
18 return context.pctl.start(name, custom_config={
19 "postgresql": {
20 "callbacks": {
21 "on_role_change": sys.executable + " features/callback.py"
22 }
23 }
24 })
25
26
27 @step('I start {name:w} in a cluster {cluster_name:w}')
28 def start_patroni(context, name, cluster_name):
29 return context.pctl.start(name, custom_config={
30 "scope": cluster_name,
31 "postgresql": {
32 "callbacks": {c: callback + name for c in ('on_start', 'on_stop', 'on_restart', 'on_role_change')},
33 "backup_restore": {
34 "command": (sys.executable + " features/backup_restore.py --sourcedir=" +
35 os.path.join(context.pctl.patroni_path, 'data', 'basebackup'))}
36 }
37 })
38
39
40 @step('I start {name:w} in a standby cluster {cluster_name:w} as a clone of {name2:w}')
41 def start_patroni_standby_cluster(context, name, cluster_name, name2):
42 # we need to remove patroni.dynamic.json in order to "bootstrap" standby cluster with existing PGDATA
43 os.unlink(os.path.join(context.pctl._processes[name]._data_dir, 'patroni.dynamic.json'))
44 port = context.pctl._processes[name2]._connkwargs.get('port')
45 context.pctl._processes[name].update_config({
46 "scope": cluster_name,
47 "bootstrap": {
48 "dcs": {
49 "ttl": 20,
50 "loop_wait": 2,
51 "retry_timeout": 5,
52 "standby_cluster": {
53 "host": "localhost",
54 "port": port,
55 "primary_slot_name": "pm_1",
56 "create_replica_methods": ["backup_restore", "basebackup"]
57 }
58 }
59 },
60 "postgresql": {
61 "callbacks": {c: callback + name for c in ('on_start', 'on_stop', 'on_restart', 'on_role_change')}
62 }
63 })
64 return context.pctl.start(name)
65
66
67 @step('{pg_name1:w} is replicating from {pg_name2:w} after {timeout:d} seconds')
68 def check_replication_status(context, pg_name1, pg_name2, timeout):
69 bound_time = time.time() + timeout
70
71 while time.time() < bound_time:
72 cur = context.pctl.query(
73 pg_name2,
74 select_replication_query.format(pg_name1),
75 fail_ok=True
76 )
77
78 if cur and len(cur.fetchall()) != 0:
79 break
80
81 time.sleep(1)
82 else:
83 assert False, "{0} is not replicating from {1} after {2} seconds".format(pg_name1, pg_name2, timeout)
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/features/steps/standby_cluster.py b/features/steps/standby_cluster.py
--- a/features/steps/standby_cluster.py
+++ b/features/steps/standby_cluster.py
@@ -10,7 +10,8 @@
WHERE application_name = '{0}'
"""
-callback = sys.executable + " features/callback2.py "
+executable = sys.executable if os.name != 'nt' else sys.executable.replace('\\', '/')
+callback = executable + " features/callback2.py "
@step('I start {name:w} with callback configured')
@@ -18,7 +19,7 @@
return context.pctl.start(name, custom_config={
"postgresql": {
"callbacks": {
- "on_role_change": sys.executable + " features/callback.py"
+ "on_role_change": executable + " features/callback.py"
}
}
})
@@ -31,7 +32,7 @@
"postgresql": {
"callbacks": {c: callback + name for c in ('on_start', 'on_stop', 'on_restart', 'on_role_change')},
"backup_restore": {
- "command": (sys.executable + " features/backup_restore.py --sourcedir=" +
+ "command": (executable + " features/backup_restore.py --sourcedir=" +
os.path.join(context.pctl.patroni_path, 'data', 'basebackup'))}
}
})
|
{"golden_diff": "diff --git a/features/steps/standby_cluster.py b/features/steps/standby_cluster.py\n--- a/features/steps/standby_cluster.py\n+++ b/features/steps/standby_cluster.py\n@@ -10,7 +10,8 @@\n WHERE application_name = '{0}'\n \"\"\"\n \n-callback = sys.executable + \" features/callback2.py \"\n+executable = sys.executable if os.name != 'nt' else sys.executable.replace('\\\\', '/')\n+callback = executable + \" features/callback2.py \"\n \n \n @step('I start {name:w} with callback configured')\n@@ -18,7 +19,7 @@\n return context.pctl.start(name, custom_config={\n \"postgresql\": {\n \"callbacks\": {\n- \"on_role_change\": sys.executable + \" features/callback.py\"\n+ \"on_role_change\": executable + \" features/callback.py\"\n }\n }\n })\n@@ -31,7 +32,7 @@\n \"postgresql\": {\n \"callbacks\": {c: callback + name for c in ('on_start', 'on_stop', 'on_restart', 'on_role_change')},\n \"backup_restore\": {\n- \"command\": (sys.executable + \" features/backup_restore.py --sourcedir=\" +\n+ \"command\": (executable + \" features/backup_restore.py --sourcedir=\" +\n os.path.join(context.pctl.patroni_path, 'data', 'basebackup'))}\n }\n })\n", "issue": "features/steps/standby_cluster.py produces error under Windows\nTrying to execute:\r\n`>behave -i standby_cluster.feature`\r\nwill produce error:\r\n```\r\n2020-05-07 19:27:19,407 ERROR: Failed to execute ['c:userspashaappdatalocalprogramspythonpython37-32python.exe', 'features/callback.py', '5362', 'on_role_change', 'master', 'batman']\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\pasha\\Code\\zalando-patroni\\patroni\\postgresql\\cancellable.py\", line 28, in _start_process\r\n self._process = psutil.Popen(cmd, *args, **kwargs)\r\n File \"c:\\users\\pasha\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages\\psutil\\__init__.py\", line 1431, in __init__\r\n self.__subproc = subprocess.Popen(*args, **kwargs)\r\n File \"c:\\users\\pasha\\appdata\\local\\programs\\python\\python37-32\\lib\\subprocess.py\", line 775, in __init__\r\n restore_signals, start_new_session)\r\n File \"c:\\users\\pasha\\appdata\\local\\programs\\python\\python37-32\\lib\\subprocess.py\", line 1178, in _execute_child\r\n startupinfo)\r\nFileNotFoundError: [WinError 2] The system cannot find the file specified\r\n```\r\n\r\nThis error produced by `postgresql/cancellable.py`: \r\n```\r\n def _start_process(self, cmd, *args, **kwargs):\r\n \"\"\"This method must be executed only when the `_lock` is acquired\"\"\"\r\n\r\n try:\r\n self._process_children = []\r\n self._process_cmd = cmd\r\n self._process = psutil.Popen(cmd, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^\r\n except Exception:\r\n return logger.exception('Failed to execute %s', cmd)\r\n return True\r\n```\r\n\n", "before_files": [{"content": "import os\nimport sys\nimport time\n\nfrom behave import step\n\n\nselect_replication_query = \"\"\"\nSELECT * FROM pg_catalog.pg_stat_replication\nWHERE application_name = '{0}'\n\"\"\"\n\ncallback = sys.executable + \" features/callback2.py \"\n\n\n@step('I start {name:w} with callback configured')\ndef start_patroni_with_callbacks(context, name):\n return context.pctl.start(name, custom_config={\n \"postgresql\": {\n \"callbacks\": {\n \"on_role_change\": sys.executable + \" features/callback.py\"\n }\n }\n })\n\n\n@step('I start {name:w} in a cluster {cluster_name:w}')\ndef start_patroni(context, name, cluster_name):\n return context.pctl.start(name, custom_config={\n \"scope\": cluster_name,\n \"postgresql\": {\n \"callbacks\": {c: callback + 
name for c in ('on_start', 'on_stop', 'on_restart', 'on_role_change')},\n \"backup_restore\": {\n \"command\": (sys.executable + \" features/backup_restore.py --sourcedir=\" +\n os.path.join(context.pctl.patroni_path, 'data', 'basebackup'))}\n }\n })\n\n\n@step('I start {name:w} in a standby cluster {cluster_name:w} as a clone of {name2:w}')\ndef start_patroni_standby_cluster(context, name, cluster_name, name2):\n # we need to remove patroni.dynamic.json in order to \"bootstrap\" standby cluster with existing PGDATA\n os.unlink(os.path.join(context.pctl._processes[name]._data_dir, 'patroni.dynamic.json'))\n port = context.pctl._processes[name2]._connkwargs.get('port')\n context.pctl._processes[name].update_config({\n \"scope\": cluster_name,\n \"bootstrap\": {\n \"dcs\": {\n \"ttl\": 20,\n \"loop_wait\": 2,\n \"retry_timeout\": 5,\n \"standby_cluster\": {\n \"host\": \"localhost\",\n \"port\": port,\n \"primary_slot_name\": \"pm_1\",\n \"create_replica_methods\": [\"backup_restore\", \"basebackup\"]\n }\n }\n },\n \"postgresql\": {\n \"callbacks\": {c: callback + name for c in ('on_start', 'on_stop', 'on_restart', 'on_role_change')}\n }\n })\n return context.pctl.start(name)\n\n\n@step('{pg_name1:w} is replicating from {pg_name2:w} after {timeout:d} seconds')\ndef check_replication_status(context, pg_name1, pg_name2, timeout):\n bound_time = time.time() + timeout\n\n while time.time() < bound_time:\n cur = context.pctl.query(\n pg_name2,\n select_replication_query.format(pg_name1),\n fail_ok=True\n )\n\n if cur and len(cur.fetchall()) != 0:\n break\n\n time.sleep(1)\n else:\n assert False, \"{0} is not replicating from {1} after {2} seconds\".format(pg_name1, pg_name2, timeout)\n", "path": "features/steps/standby_cluster.py"}], "after_files": [{"content": "import os\nimport sys\nimport time\n\nfrom behave import step\n\n\nselect_replication_query = \"\"\"\nSELECT * FROM pg_catalog.pg_stat_replication\nWHERE application_name = '{0}'\n\"\"\"\n\nexecutable = sys.executable if os.name != 'nt' else sys.executable.replace('\\\\', '/')\ncallback = executable + \" features/callback2.py \"\n\n\n@step('I start {name:w} with callback configured')\ndef start_patroni_with_callbacks(context, name):\n return context.pctl.start(name, custom_config={\n \"postgresql\": {\n \"callbacks\": {\n \"on_role_change\": executable + \" features/callback.py\"\n }\n }\n })\n\n\n@step('I start {name:w} in a cluster {cluster_name:w}')\ndef start_patroni(context, name, cluster_name):\n return context.pctl.start(name, custom_config={\n \"scope\": cluster_name,\n \"postgresql\": {\n \"callbacks\": {c: callback + name for c in ('on_start', 'on_stop', 'on_restart', 'on_role_change')},\n \"backup_restore\": {\n \"command\": (executable + \" features/backup_restore.py --sourcedir=\" +\n os.path.join(context.pctl.patroni_path, 'data', 'basebackup'))}\n }\n })\n\n\n@step('I start {name:w} in a standby cluster {cluster_name:w} as a clone of {name2:w}')\ndef start_patroni_standby_cluster(context, name, cluster_name, name2):\n # we need to remove patroni.dynamic.json in order to \"bootstrap\" standby cluster with existing PGDATA\n os.unlink(os.path.join(context.pctl._processes[name]._data_dir, 'patroni.dynamic.json'))\n port = context.pctl._processes[name2]._connkwargs.get('port')\n context.pctl._processes[name].update_config({\n \"scope\": cluster_name,\n \"bootstrap\": {\n \"dcs\": {\n \"ttl\": 20,\n \"loop_wait\": 2,\n \"retry_timeout\": 5,\n \"standby_cluster\": {\n \"host\": \"localhost\",\n \"port\": port,\n 
\"primary_slot_name\": \"pm_1\",\n \"create_replica_methods\": [\"backup_restore\", \"basebackup\"]\n }\n }\n },\n \"postgresql\": {\n \"callbacks\": {c: callback + name for c in ('on_start', 'on_stop', 'on_restart', 'on_role_change')}\n }\n })\n return context.pctl.start(name)\n\n\n@step('{pg_name1:w} is replicating from {pg_name2:w} after {timeout:d} seconds')\ndef check_replication_status(context, pg_name1, pg_name2, timeout):\n bound_time = time.time() + timeout\n\n while time.time() < bound_time:\n cur = context.pctl.query(\n pg_name2,\n select_replication_query.format(pg_name1),\n fail_ok=True\n )\n\n if cur and len(cur.fetchall()) != 0:\n break\n\n time.sleep(1)\n else:\n assert False, \"{0} is not replicating from {1} after {2} seconds\".format(pg_name1, pg_name2, timeout)\n", "path": "features/steps/standby_cluster.py"}]}
| 1,555 | 313 |
gh_patches_debug_654
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-2123
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.133
On the docket:
+ [x] python<=3.8 symlink with a suffix (eg 3.7m) can create a venv without a pythonX.Y symlink which breaks pex assumptions that pythonX.Y is always available #2119
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.132"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.132"
+__version__ = "2.1.133"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.132\"\n+__version__ = \"2.1.133\"\n", "issue": "Release 2.1.133\nOn the docket:\r\n+ [x] python<=3.8 symlink with a suffix (eg 3.7m) can create a venv without a pythonX.Y symlink which breaks pex assumptions that pythonX.Y is always available #2119\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.132\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.133\"\n", "path": "pex/version.py"}]}
| 373 | 98 |
gh_patches_debug_5511
|
rasdani/github-patches
|
git_diff
|
napalm-automation__napalm-692
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip >= 10.0.0 incompatibility
I am not going to create these for every single napalm module... but did for https://github.com/napalm-automation/napalm-ansible/issues/123 where I also saw this issue
pip 10.x no longer provides pip.req as needed in setup.py
https://github.com/pypa/pip/issues/5156
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """setup.py file."""
2 import uuid
3
4 from setuptools import setup, find_packages
5
6 from pip.req import parse_requirements
7
8
9 install_reqs = parse_requirements('requirements.txt', session=uuid.uuid1())
10 reqs = [str(ir.req) for ir in install_reqs]
11
12 __author__ = 'David Barroso <[email protected]>'
13
14 setup(
15 name="napalm",
16 version='2.3.0',
17 packages=find_packages(exclude=("test*", )),
18 test_suite='test_base',
19 author="David Barroso, Kirk Byers, Mircea Ulinic",
20 author_email="[email protected], [email protected], [email protected]",
21 description="Network Automation and Programmability Abstraction Layer with Multivendor support",
22 classifiers=[
23 'Topic :: Utilities',
24 'Programming Language :: Python',
25 'Programming Language :: Python :: 2',
26 'Programming Language :: Python :: 2.7',
27 'Programming Language :: Python :: 3',
28 'Programming Language :: Python :: 3.4',
29 'Programming Language :: Python :: 3.5',
30 'Programming Language :: Python :: 3.6',
31 'Operating System :: POSIX :: Linux',
32 'Operating System :: MacOS',
33 ],
34 url="https://github.com/napalm-automation/napalm",
35 include_package_data=True,
36 install_requires=reqs,
37 entry_points={
38 'console_scripts': [
39 'cl_napalm_configure=napalm.base.clitools.cl_napalm_configure:main',
40 'cl_napalm_test=napalm.base.clitools.cl_napalm_test:main',
41 'cl_napalm_validate=napalm.base.clitools.cl_napalm_validate:main',
42 'napalm=napalm.base.clitools.cl_napalm:main',
43 ],
44 }
45 )
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,13 +1,9 @@
"""setup.py file."""
-import uuid
-
from setuptools import setup, find_packages
-from pip.req import parse_requirements
-
+with open("requirements.txt", "r") as fs:
+ reqs = [r for r in fs.read().splitlines() if (len(r) > 0 and not r.startswith("#"))]
-install_reqs = parse_requirements('requirements.txt', session=uuid.uuid1())
-reqs = [str(ir.req) for ir in install_reqs]
__author__ = 'David Barroso <[email protected]>'
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,13 +1,9 @@\n \"\"\"setup.py file.\"\"\"\n-import uuid\n-\n from setuptools import setup, find_packages\n \n-from pip.req import parse_requirements\n-\n+with open(\"requirements.txt\", \"r\") as fs:\n+ reqs = [r for r in fs.read().splitlines() if (len(r) > 0 and not r.startswith(\"#\"))]\n \n-install_reqs = parse_requirements('requirements.txt', session=uuid.uuid1())\n-reqs = [str(ir.req) for ir in install_reqs]\n \n __author__ = 'David Barroso <[email protected]>'\n", "issue": "pip >= 10.0.0 incompatibility \nI am not going to create these for every single napalm module... but did for https://github.com/napalm-automation/napalm-ansible/issues/123 where I also saw this issue\r\n\r\npip 10.x no longer provides pip.req as needed in setup.py\r\nhttps://github.com/pypa/pip/issues/5156\r\n\n", "before_files": [{"content": "\"\"\"setup.py file.\"\"\"\nimport uuid\n\nfrom setuptools import setup, find_packages\n\nfrom pip.req import parse_requirements\n\n\ninstall_reqs = parse_requirements('requirements.txt', session=uuid.uuid1())\nreqs = [str(ir.req) for ir in install_reqs]\n\n__author__ = 'David Barroso <[email protected]>'\n\nsetup(\n name=\"napalm\",\n version='2.3.0',\n packages=find_packages(exclude=(\"test*\", )),\n test_suite='test_base',\n author=\"David Barroso, Kirk Byers, Mircea Ulinic\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Network Automation and Programmability Abstraction Layer with Multivendor support\",\n classifiers=[\n 'Topic :: Utilities',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS',\n ],\n url=\"https://github.com/napalm-automation/napalm\",\n include_package_data=True,\n install_requires=reqs,\n entry_points={\n 'console_scripts': [\n 'cl_napalm_configure=napalm.base.clitools.cl_napalm_configure:main',\n 'cl_napalm_test=napalm.base.clitools.cl_napalm_test:main',\n 'cl_napalm_validate=napalm.base.clitools.cl_napalm_validate:main',\n 'napalm=napalm.base.clitools.cl_napalm:main',\n ],\n }\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"setup.py file.\"\"\"\nfrom setuptools import setup, find_packages\n\nwith open(\"requirements.txt\", \"r\") as fs:\n reqs = [r for r in fs.read().splitlines() if (len(r) > 0 and not r.startswith(\"#\"))]\n\n\n__author__ = 'David Barroso <[email protected]>'\n\nsetup(\n name=\"napalm\",\n version='2.3.0',\n packages=find_packages(exclude=(\"test*\", )),\n test_suite='test_base',\n author=\"David Barroso, Kirk Byers, Mircea Ulinic\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Network Automation and Programmability Abstraction Layer with Multivendor support\",\n classifiers=[\n 'Topic :: Utilities',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS',\n ],\n url=\"https://github.com/napalm-automation/napalm\",\n 
include_package_data=True,\n install_requires=reqs,\n entry_points={\n 'console_scripts': [\n 'cl_napalm_configure=napalm.base.clitools.cl_napalm_configure:main',\n 'cl_napalm_test=napalm.base.clitools.cl_napalm_test:main',\n 'cl_napalm_validate=napalm.base.clitools.cl_napalm_validate:main',\n 'napalm=napalm.base.clitools.cl_napalm:main',\n ],\n }\n)\n", "path": "setup.py"}]}
| 839 | 155 |
gh_patches_debug_33129
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1259
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
If id, schema or last_modified are marked as required, record can't be validated
Because we pop those fields before validating them with the collection schema.
We can either:
* prevent those fields to be mentioned as `required` when defining the collection schema
* or not pop them if they are present in the schema before validating
If id, schema or last_modified are marked as required, record can't be validated
Because we pop those fields before validating them with the collection schema.
We can either:
* prevent those fields to be mentioned as `required` when defining the collection schema
* or not pop them if they are present in the schema before validating
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/records.py`
Content:
```
1 import copy
2
3 import jsonschema
4 from kinto.core import resource, utils
5 from kinto.core.errors import raise_invalid
6 from jsonschema import exceptions as jsonschema_exceptions
7 from pyramid.security import Authenticated
8 from pyramid.settings import asbool
9
10 from kinto.views import object_exists_or_404
11
12
13 _parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'
14
15
16 @resource.register(name='record',
17 collection_path=_parent_path + '/records',
18 record_path=_parent_path + '/records/{{id}}')
19 class Record(resource.ShareableResource):
20
21 schema_field = 'schema'
22
23 def __init__(self, request, **kwargs):
24 # Before all, first check that the parent collection exists.
25 # Check if already fetched before (in batch).
26 collections = request.bound_data.setdefault('collections', {})
27 collection_uri = self.get_parent_id(request)
28 if collection_uri not in collections:
29 # Unknown yet, fetch from storage.
30 collection_parent_id = utils.instance_uri(request, 'bucket',
31 id=self.bucket_id)
32 collection = object_exists_or_404(request,
33 collection_id='collection',
34 parent_id=collection_parent_id,
35 object_id=self.collection_id)
36 collections[collection_uri] = collection
37
38 super().__init__(request, **kwargs)
39 self._collection = collections[collection_uri]
40
41 def get_parent_id(self, request):
42 self.bucket_id = request.matchdict['bucket_id']
43 self.collection_id = request.matchdict['collection_id']
44 return utils.instance_uri(request, 'collection',
45 bucket_id=self.bucket_id,
46 id=self.collection_id)
47
48 def process_record(self, new, old=None):
49 """Validate records against collection schema, if any."""
50 new = super().process_record(new, old)
51
52 schema = self._collection.get('schema')
53 settings = self.request.registry.settings
54 schema_validation = 'experimental_collection_schema_validation'
55 if not schema or not asbool(settings.get(schema_validation)):
56 return new
57
58 collection_timestamp = self._collection[self.model.modified_field]
59
60 try:
61 stripped = copy.deepcopy(new)
62 stripped.pop(self.model.id_field, None)
63 stripped.pop(self.model.modified_field, None)
64 stripped.pop(self.model.permissions_field, None)
65 stripped.pop(self.schema_field, None)
66 jsonschema.validate(stripped, schema)
67 except jsonschema_exceptions.ValidationError as e:
68 if e.validator_value:
69 field = e.validator_value[-1]
70 else:
71 field = e.schema_path[-1]
72 raise_invalid(self.request, name=field, description=e.message)
73
74 new[self.schema_field] = collection_timestamp
75 return new
76
77 def collection_get(self):
78 result = super().collection_get()
79 self._handle_cache_expires(self.request.response)
80 return result
81
82 def get(self):
83 result = super().get()
84 self._handle_cache_expires(self.request.response)
85 return result
86
87 def _handle_cache_expires(self, response):
88 """If the parent collection defines a ``cache_expires`` attribute,
89 then cache-control response headers are sent.
90
91 .. note::
92
93 Those headers are also sent if the
94 ``kinto.record_cache_expires_seconds`` setting is defined.
95 """
96 is_anonymous = Authenticated not in self.request.effective_principals
97 if not is_anonymous:
98 return
99
100 cache_expires = self._collection.get('cache_expires')
101 if cache_expires is None:
102 by_bucket = '{}_record_cache_expires_seconds'.format(self.bucket_id)
103 by_collection = '{}_{}_record_cache_expires_seconds'.format(
104 self.bucket_id, self.collection_id)
105 settings = self.request.registry.settings
106 cache_expires = settings.get(by_collection,
107 settings.get(by_bucket))
108
109 if cache_expires is not None:
110 response.cache_expires(seconds=int(cache_expires))
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/views/records.py b/kinto/views/records.py
--- a/kinto/views/records.py
+++ b/kinto/views/records.py
@@ -1,5 +1,3 @@
-import copy
-
import jsonschema
from kinto.core import resource, utils
from kinto.core.errors import raise_invalid
@@ -55,15 +53,21 @@
if not schema or not asbool(settings.get(schema_validation)):
return new
- collection_timestamp = self._collection[self.model.modified_field]
-
+ # Remove internal and auto-assigned fields from schema and record.
+ internal_fields = (self.model.id_field,
+ self.model.modified_field,
+ self.schema_field,
+ self.model.permissions_field)
+ required_fields = [f for f in schema.get('required', []) if f not in internal_fields]
+ if required_fields:
+ schema = {**schema, 'required': required_fields}
+ else:
+ schema = {f: v for f, v in new.items() if f != 'required'}
+ data = {f: v for f, v in new.items() if f not in internal_fields}
+
+ # Validate or fail with 400.
try:
- stripped = copy.deepcopy(new)
- stripped.pop(self.model.id_field, None)
- stripped.pop(self.model.modified_field, None)
- stripped.pop(self.model.permissions_field, None)
- stripped.pop(self.schema_field, None)
- jsonschema.validate(stripped, schema)
+ jsonschema.validate(data, schema)
except jsonschema_exceptions.ValidationError as e:
if e.validator_value:
field = e.validator_value[-1]
@@ -71,7 +75,10 @@
field = e.schema_path[-1]
raise_invalid(self.request, name=field, description=e.message)
+ # Assign the schema version (collection object timestamp) to the record.
+ collection_timestamp = self._collection[self.model.modified_field]
new[self.schema_field] = collection_timestamp
+
return new
def collection_get(self):
|
{"golden_diff": "diff --git a/kinto/views/records.py b/kinto/views/records.py\n--- a/kinto/views/records.py\n+++ b/kinto/views/records.py\n@@ -1,5 +1,3 @@\n-import copy\n-\n import jsonschema\n from kinto.core import resource, utils\n from kinto.core.errors import raise_invalid\n@@ -55,15 +53,21 @@\n if not schema or not asbool(settings.get(schema_validation)):\n return new\n \n- collection_timestamp = self._collection[self.model.modified_field]\n-\n+ # Remove internal and auto-assigned fields from schema and record.\n+ internal_fields = (self.model.id_field,\n+ self.model.modified_field,\n+ self.schema_field,\n+ self.model.permissions_field)\n+ required_fields = [f for f in schema.get('required', []) if f not in internal_fields]\n+ if required_fields:\n+ schema = {**schema, 'required': required_fields}\n+ else:\n+ schema = {f: v for f, v in new.items() if f != 'required'}\n+ data = {f: v for f, v in new.items() if f not in internal_fields}\n+\n+ # Validate or fail with 400.\n try:\n- stripped = copy.deepcopy(new)\n- stripped.pop(self.model.id_field, None)\n- stripped.pop(self.model.modified_field, None)\n- stripped.pop(self.model.permissions_field, None)\n- stripped.pop(self.schema_field, None)\n- jsonschema.validate(stripped, schema)\n+ jsonschema.validate(data, schema)\n except jsonschema_exceptions.ValidationError as e:\n if e.validator_value:\n field = e.validator_value[-1]\n@@ -71,7 +75,10 @@\n field = e.schema_path[-1]\n raise_invalid(self.request, name=field, description=e.message)\n \n+ # Assign the schema version (collection object timestamp) to the record.\n+ collection_timestamp = self._collection[self.model.modified_field]\n new[self.schema_field] = collection_timestamp\n+\n return new\n \n def collection_get(self):\n", "issue": "If id, schema or last_modified are marked as required, record can't be validated\nBecause we pop those fields before validating them with the collection schema.\r\n\r\nWe can either:\r\n* prevent those fields to be mentioned as `required` when defining the collection schema\r\n* or not pop them if they are present in the schema before validating\nIf id, schema or last_modified are marked as required, record can't be validated\nBecause we pop those fields before validating them with the collection schema.\r\n\r\nWe can either:\r\n* prevent those fields to be mentioned as `required` when defining the collection schema\r\n* or not pop them if they are present in the schema before validating\n", "before_files": [{"content": "import copy\n\nimport jsonschema\nfrom kinto.core import resource, utils\nfrom kinto.core.errors import raise_invalid\nfrom jsonschema import exceptions as jsonschema_exceptions\nfrom pyramid.security import Authenticated\nfrom pyramid.settings import asbool\n\nfrom kinto.views import object_exists_or_404\n\n\n_parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'\n\n\[email protected](name='record',\n collection_path=_parent_path + '/records',\n record_path=_parent_path + '/records/{{id}}')\nclass Record(resource.ShareableResource):\n\n schema_field = 'schema'\n\n def __init__(self, request, **kwargs):\n # Before all, first check that the parent collection exists.\n # Check if already fetched before (in batch).\n collections = request.bound_data.setdefault('collections', {})\n collection_uri = self.get_parent_id(request)\n if collection_uri not in collections:\n # Unknown yet, fetch from storage.\n collection_parent_id = utils.instance_uri(request, 'bucket',\n id=self.bucket_id)\n collection = 
object_exists_or_404(request,\n collection_id='collection',\n parent_id=collection_parent_id,\n object_id=self.collection_id)\n collections[collection_uri] = collection\n\n super().__init__(request, **kwargs)\n self._collection = collections[collection_uri]\n\n def get_parent_id(self, request):\n self.bucket_id = request.matchdict['bucket_id']\n self.collection_id = request.matchdict['collection_id']\n return utils.instance_uri(request, 'collection',\n bucket_id=self.bucket_id,\n id=self.collection_id)\n\n def process_record(self, new, old=None):\n \"\"\"Validate records against collection schema, if any.\"\"\"\n new = super().process_record(new, old)\n\n schema = self._collection.get('schema')\n settings = self.request.registry.settings\n schema_validation = 'experimental_collection_schema_validation'\n if not schema or not asbool(settings.get(schema_validation)):\n return new\n\n collection_timestamp = self._collection[self.model.modified_field]\n\n try:\n stripped = copy.deepcopy(new)\n stripped.pop(self.model.id_field, None)\n stripped.pop(self.model.modified_field, None)\n stripped.pop(self.model.permissions_field, None)\n stripped.pop(self.schema_field, None)\n jsonschema.validate(stripped, schema)\n except jsonschema_exceptions.ValidationError as e:\n if e.validator_value:\n field = e.validator_value[-1]\n else:\n field = e.schema_path[-1]\n raise_invalid(self.request, name=field, description=e.message)\n\n new[self.schema_field] = collection_timestamp\n return new\n\n def collection_get(self):\n result = super().collection_get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def get(self):\n result = super().get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def _handle_cache_expires(self, response):\n \"\"\"If the parent collection defines a ``cache_expires`` attribute,\n then cache-control response headers are sent.\n\n .. 
note::\n\n Those headers are also sent if the\n ``kinto.record_cache_expires_seconds`` setting is defined.\n \"\"\"\n is_anonymous = Authenticated not in self.request.effective_principals\n if not is_anonymous:\n return\n\n cache_expires = self._collection.get('cache_expires')\n if cache_expires is None:\n by_bucket = '{}_record_cache_expires_seconds'.format(self.bucket_id)\n by_collection = '{}_{}_record_cache_expires_seconds'.format(\n self.bucket_id, self.collection_id)\n settings = self.request.registry.settings\n cache_expires = settings.get(by_collection,\n settings.get(by_bucket))\n\n if cache_expires is not None:\n response.cache_expires(seconds=int(cache_expires))\n", "path": "kinto/views/records.py"}], "after_files": [{"content": "import jsonschema\nfrom kinto.core import resource, utils\nfrom kinto.core.errors import raise_invalid\nfrom jsonschema import exceptions as jsonschema_exceptions\nfrom pyramid.security import Authenticated\nfrom pyramid.settings import asbool\n\nfrom kinto.views import object_exists_or_404\n\n\n_parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'\n\n\[email protected](name='record',\n collection_path=_parent_path + '/records',\n record_path=_parent_path + '/records/{{id}}')\nclass Record(resource.ShareableResource):\n\n schema_field = 'schema'\n\n def __init__(self, request, **kwargs):\n # Before all, first check that the parent collection exists.\n # Check if already fetched before (in batch).\n collections = request.bound_data.setdefault('collections', {})\n collection_uri = self.get_parent_id(request)\n if collection_uri not in collections:\n # Unknown yet, fetch from storage.\n collection_parent_id = utils.instance_uri(request, 'bucket',\n id=self.bucket_id)\n collection = object_exists_or_404(request,\n collection_id='collection',\n parent_id=collection_parent_id,\n object_id=self.collection_id)\n collections[collection_uri] = collection\n\n super().__init__(request, **kwargs)\n self._collection = collections[collection_uri]\n\n def get_parent_id(self, request):\n self.bucket_id = request.matchdict['bucket_id']\n self.collection_id = request.matchdict['collection_id']\n return utils.instance_uri(request, 'collection',\n bucket_id=self.bucket_id,\n id=self.collection_id)\n\n def process_record(self, new, old=None):\n \"\"\"Validate records against collection schema, if any.\"\"\"\n new = super().process_record(new, old)\n\n schema = self._collection.get('schema')\n settings = self.request.registry.settings\n schema_validation = 'experimental_collection_schema_validation'\n if not schema or not asbool(settings.get(schema_validation)):\n return new\n\n # Remove internal and auto-assigned fields from schema and record.\n internal_fields = (self.model.id_field,\n self.model.modified_field,\n self.schema_field,\n self.model.permissions_field)\n required_fields = [f for f in schema.get('required', []) if f not in internal_fields]\n if required_fields:\n schema = {**schema, 'required': required_fields}\n else:\n schema = {f: v for f, v in new.items() if f != 'required'}\n data = {f: v for f, v in new.items() if f not in internal_fields}\n\n # Validate or fail with 400.\n try:\n jsonschema.validate(data, schema)\n except jsonschema_exceptions.ValidationError as e:\n if e.validator_value:\n field = e.validator_value[-1]\n else:\n field = e.schema_path[-1]\n raise_invalid(self.request, name=field, description=e.message)\n\n # Assign the schema version (collection object timestamp) to the record.\n collection_timestamp = 
self._collection[self.model.modified_field]\n new[self.schema_field] = collection_timestamp\n\n return new\n\n def collection_get(self):\n result = super().collection_get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def get(self):\n result = super().get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def _handle_cache_expires(self, response):\n \"\"\"If the parent collection defines a ``cache_expires`` attribute,\n then cache-control response headers are sent.\n\n .. note::\n\n Those headers are also sent if the\n ``kinto.record_cache_expires_seconds`` setting is defined.\n \"\"\"\n is_anonymous = Authenticated not in self.request.effective_principals\n if not is_anonymous:\n return\n\n cache_expires = self._collection.get('cache_expires')\n if cache_expires is None:\n by_bucket = '{}_record_cache_expires_seconds'.format(self.bucket_id)\n by_collection = '{}_{}_record_cache_expires_seconds'.format(\n self.bucket_id, self.collection_id)\n settings = self.request.registry.settings\n cache_expires = settings.get(by_collection,\n settings.get(by_bucket))\n\n if cache_expires is not None:\n response.cache_expires(seconds=int(cache_expires))\n", "path": "kinto/views/records.py"}]}
| 1,444 | 453 |
gh_patches_debug_7892
|
rasdani/github-patches
|
git_diff
|
pwr-Solaar__Solaar-23
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
no battery status on M510 mouse
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/logitech/unifying_receiver/hidpp10.py`
Content:
```
1 #
2 #
3 #
4
5 from __future__ import absolute_import, division, print_function, unicode_literals
6
7 from logging import getLogger # , DEBUG as _DEBUG
8 _log = getLogger('LUR').getChild('hidpp10')
9 del getLogger
10
11 from .common import (strhex as _strhex,
12 NamedInts as _NamedInts,
13 FirmwareInfo as _FirmwareInfo)
14 from .hidpp20 import FIRMWARE_KIND
15
16 #
17 # constants
18 #
19
20 DEVICE_KIND = _NamedInts(
21 keyboard=0x01,
22 mouse=0x02,
23 numpad=0x03,
24 presenter=0x04,
25 trackball=0x08,
26 touchpad=0x09)
27
28 POWER_SWITCH_LOCATION = _NamedInts(
29 base=0x01,
30 top_case=0x02,
31 edge_of_top_right_corner=0x03,
32 top_left_corner=0x05,
33 bottom_left_corner=0x06,
34 top_right_corner=0x07,
35 bottom_right_corner=0x08,
36 top_edge=0x09,
37 right_edge=0x0A,
38 left_edge=0x0B,
39 bottom_edge=0x0C)
40
41 NOTIFICATION_FLAG = _NamedInts(
42 battery_status=0x100000,
43 wireless=0x000100,
44 software_present=0x0000800)
45
46 ERROR = _NamedInts(
47 invalid_SubID__command=0x01,
48 invalid_address=0x02,
49 invalid_value=0x03,
50 connection_request_failed=0x04,
51 too_many_devices=0x05,
52 already_exists=0x06,
53 busy=0x07,
54 unknown_device=0x08,
55 resource_error=0x09,
56 request_unavailable=0x0A,
57 unsupported_parameter_value=0x0B,
58 wrong_pin_code=0x0C)
59
60 PAIRING_ERRORS = _NamedInts(
61 device_timeout=0x01,
62 device_not_supported=0x02,
63 too_many_devices=0x03,
64 sequence_timeout=0x06)
65
66 #
67 # functions
68 #
69
70 def get_register(device, name, default_number=-1):
71 known_register = device.registers[name]
72 register = known_register or default_number
73 if register > 0:
74 reply = device.request(0x8100 + (register & 0xFF))
75 if reply:
76 return reply
77
78 if not known_register and device.ping():
79 _log.warn("%s: failed to read '%s' from default register 0x%02X, blacklisting", device, name, default_number)
80 device.registers[-default_number] = name
81
82
83 def get_battery(device):
84 """Reads a device's battery level, if provided by the HID++ 1.0 protocol."""
85 reply = get_register(device, 'battery', 0x0D)
86 if reply:
87 charge = ord(reply[:1])
88 status = ord(reply[2:3]) & 0xF0
89 status = ('discharging' if status == 0x30
90 else 'charging' if status == 0x50
91 else 'fully charged' if status == 0x90
92 else None)
93 return charge, status
94
95 reply = get_register(device, 'battery_status', 0x07)
96 if reply:
97 battery_status = ord(reply[:1])
98 _log.info("%s: battery status %02X", device, battery_status)
99
100
101 def get_serial(device):
102 if device.kind is None:
103 dev_id = 0x03
104 receiver = device
105 else:
106 dev_id = 0x30 + device.number - 1
107 receiver = device.receiver
108
109 serial = receiver.request(0x83B5, dev_id)
110 if serial:
111 return _strhex(serial[1:5])
112
113
114 def get_firmware(device):
115 firmware = []
116
117 reply = device.request(0x81F1, 0x01)
118 if reply:
119 fw_version = _strhex(reply[1:3])
120 fw_version = '%s.%s' % (fw_version[0:2], fw_version[2:4])
121 reply = device.request(0x81F1, 0x02)
122 if reply:
123 fw_version += '.B' + _strhex(reply[1:3])
124 fw = _FirmwareInfo(FIRMWARE_KIND.Firmware, '', fw_version, None)
125 firmware.append(fw)
126
127 reply = device.request(0x81F1, 0x04)
128 if reply:
129 bl_version = _strhex(reply[1:3])
130 bl_version = '%s.%s' % (bl_version[0:2], bl_version[2:4])
131 bl = _FirmwareInfo(FIRMWARE_KIND.Bootloader, '', bl_version, None)
132 firmware.append(bl)
133
134 return tuple(firmware)
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/logitech/unifying_receiver/hidpp10.py b/lib/logitech/unifying_receiver/hidpp10.py
--- a/lib/logitech/unifying_receiver/hidpp10.py
+++ b/lib/logitech/unifying_receiver/hidpp10.py
@@ -94,8 +94,16 @@
reply = get_register(device, 'battery_status', 0x07)
if reply:
- battery_status = ord(reply[:1])
- _log.info("%s: battery status %02X", device, battery_status)
+ level = ord(reply[:1])
+ battery_status = ord(reply[2:3])
+ charge = (90 if level == 7 # full
+ else 50 if level == 5 # good
+ else 20 if level == 3 # low
+ else 5 if level == 1 # critical
+ else 0 ) # wtf?
+ status = ('charging' if battery_status == 0x25
+ else 'discharging')
+ return charge, status
def get_serial(device):
|
{"golden_diff": "diff --git a/lib/logitech/unifying_receiver/hidpp10.py b/lib/logitech/unifying_receiver/hidpp10.py\n--- a/lib/logitech/unifying_receiver/hidpp10.py\n+++ b/lib/logitech/unifying_receiver/hidpp10.py\n@@ -94,8 +94,16 @@\n \n \treply = get_register(device, 'battery_status', 0x07)\n \tif reply:\n-\t\tbattery_status = ord(reply[:1])\n-\t\t_log.info(\"%s: battery status %02X\", device, battery_status)\n+\t\tlevel = ord(reply[:1])\n+\t\tbattery_status = ord(reply[2:3])\n+\t\tcharge = (90 if level == 7 # full\n+\t\t\telse 50 if level == 5 # good\n+\t\t\telse 20 if level == 3 # low\n+\t\t\telse 5 if level == 1 # critical\n+\t\t\telse 0 ) # wtf?\n+\t\tstatus = ('charging' if battery_status == 0x25\n+\t\t\telse 'discharging')\n+\t\treturn charge, status\n \n \n def get_serial(device):\n", "issue": "no battery status on M510 mouse\n\n", "before_files": [{"content": "#\n#\n#\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom logging import getLogger # , DEBUG as _DEBUG\n_log = getLogger('LUR').getChild('hidpp10')\ndel getLogger\n\nfrom .common import (strhex as _strhex,\n\t\t\t\t\tNamedInts as _NamedInts,\n\t\t\t\t\tFirmwareInfo as _FirmwareInfo)\nfrom .hidpp20 import FIRMWARE_KIND\n\n#\n# constants\n#\n\nDEVICE_KIND = _NamedInts(\n\t\t\t\tkeyboard=0x01,\n\t\t\t\tmouse=0x02,\n\t\t\t\tnumpad=0x03,\n\t\t\t\tpresenter=0x04,\n\t\t\t\ttrackball=0x08,\n\t\t\t\ttouchpad=0x09)\n\nPOWER_SWITCH_LOCATION = _NamedInts(\n\t\t\t\tbase=0x01,\n\t\t\t\ttop_case=0x02,\n\t\t\t\tedge_of_top_right_corner=0x03,\n\t\t\t\ttop_left_corner=0x05,\n\t\t\t\tbottom_left_corner=0x06,\n\t\t\t\ttop_right_corner=0x07,\n\t\t\t\tbottom_right_corner=0x08,\n\t\t\t\ttop_edge=0x09,\n\t\t\t\tright_edge=0x0A,\n\t\t\t\tleft_edge=0x0B,\n\t\t\t\tbottom_edge=0x0C)\n\nNOTIFICATION_FLAG = _NamedInts(\n\t\t\t\tbattery_status=0x100000,\n\t\t\t\twireless=0x000100,\n\t\t\t\tsoftware_present=0x0000800)\n\nERROR = _NamedInts(\n\t\t\t\tinvalid_SubID__command=0x01,\n\t\t\t\tinvalid_address=0x02,\n\t\t\t\tinvalid_value=0x03,\n\t\t\t\tconnection_request_failed=0x04,\n\t\t\t\ttoo_many_devices=0x05,\n\t\t\t\talready_exists=0x06,\n\t\t\t\tbusy=0x07,\n\t\t\t\tunknown_device=0x08,\n\t\t\t\tresource_error=0x09,\n\t\t\t\trequest_unavailable=0x0A,\n\t\t\t\tunsupported_parameter_value=0x0B,\n\t\t\t\twrong_pin_code=0x0C)\n\nPAIRING_ERRORS = _NamedInts(\n\t\t\t\tdevice_timeout=0x01,\n\t\t\t\tdevice_not_supported=0x02,\n\t\t\t\ttoo_many_devices=0x03,\n\t\t\t\tsequence_timeout=0x06)\n\n#\n# functions\n#\n\ndef get_register(device, name, default_number=-1):\n\tknown_register = device.registers[name]\n\tregister = known_register or default_number\n\tif register > 0:\n\t\treply = device.request(0x8100 + (register & 0xFF))\n\t\tif reply:\n\t\t\treturn reply\n\n\t\tif not known_register and device.ping():\n\t\t\t_log.warn(\"%s: failed to read '%s' from default register 0x%02X, blacklisting\", device, name, default_number)\n\t\t\tdevice.registers[-default_number] = name\n\n\ndef get_battery(device):\n\t\"\"\"Reads a device's battery level, if provided by the HID++ 1.0 protocol.\"\"\"\n\treply = get_register(device, 'battery', 0x0D)\n\tif reply:\n\t\tcharge = ord(reply[:1])\n\t\tstatus = ord(reply[2:3]) & 0xF0\n\t\tstatus = ('discharging' if status == 0x30\n\t\t\t\telse 'charging' if status == 0x50\n\t\t\t\telse 'fully charged' if status == 0x90\n\t\t\t\telse None)\n\t\treturn charge, status\n\n\treply = get_register(device, 'battery_status', 0x07)\n\tif reply:\n\t\tbattery_status = ord(reply[:1])\n\t\t_log.info(\"%s: 
battery status %02X\", device, battery_status)\n\n\ndef get_serial(device):\n\tif device.kind is None:\n\t\tdev_id = 0x03\n\t\treceiver = device\n\telse:\n\t\tdev_id = 0x30 + device.number - 1\n\t\treceiver = device.receiver\n\n\tserial = receiver.request(0x83B5, dev_id)\n\tif serial:\n\t\treturn _strhex(serial[1:5])\n\n\ndef get_firmware(device):\n\tfirmware = []\n\n\treply = device.request(0x81F1, 0x01)\n\tif reply:\n\t\tfw_version = _strhex(reply[1:3])\n\t\tfw_version = '%s.%s' % (fw_version[0:2], fw_version[2:4])\n\t\treply = device.request(0x81F1, 0x02)\n\t\tif reply:\n\t\t\tfw_version += '.B' + _strhex(reply[1:3])\n\t\tfw = _FirmwareInfo(FIRMWARE_KIND.Firmware, '', fw_version, None)\n\t\tfirmware.append(fw)\n\n\treply = device.request(0x81F1, 0x04)\n\tif reply:\n\t\tbl_version = _strhex(reply[1:3])\n\t\tbl_version = '%s.%s' % (bl_version[0:2], bl_version[2:4])\n\t\tbl = _FirmwareInfo(FIRMWARE_KIND.Bootloader, '', bl_version, None)\n\t\tfirmware.append(bl)\n\n\treturn tuple(firmware)\n", "path": "lib/logitech/unifying_receiver/hidpp10.py"}], "after_files": [{"content": "#\n#\n#\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom logging import getLogger # , DEBUG as _DEBUG\n_log = getLogger('LUR').getChild('hidpp10')\ndel getLogger\n\nfrom .common import (strhex as _strhex,\n\t\t\t\t\tNamedInts as _NamedInts,\n\t\t\t\t\tFirmwareInfo as _FirmwareInfo)\nfrom .hidpp20 import FIRMWARE_KIND\n\n#\n# constants\n#\n\nDEVICE_KIND = _NamedInts(\n\t\t\t\tkeyboard=0x01,\n\t\t\t\tmouse=0x02,\n\t\t\t\tnumpad=0x03,\n\t\t\t\tpresenter=0x04,\n\t\t\t\ttrackball=0x08,\n\t\t\t\ttouchpad=0x09)\n\nPOWER_SWITCH_LOCATION = _NamedInts(\n\t\t\t\tbase=0x01,\n\t\t\t\ttop_case=0x02,\n\t\t\t\tedge_of_top_right_corner=0x03,\n\t\t\t\ttop_left_corner=0x05,\n\t\t\t\tbottom_left_corner=0x06,\n\t\t\t\ttop_right_corner=0x07,\n\t\t\t\tbottom_right_corner=0x08,\n\t\t\t\ttop_edge=0x09,\n\t\t\t\tright_edge=0x0A,\n\t\t\t\tleft_edge=0x0B,\n\t\t\t\tbottom_edge=0x0C)\n\nNOTIFICATION_FLAG = _NamedInts(\n\t\t\t\tbattery_status=0x100000,\n\t\t\t\twireless=0x000100,\n\t\t\t\tsoftware_present=0x0000800)\n\nERROR = _NamedInts(\n\t\t\t\tinvalid_SubID__command=0x01,\n\t\t\t\tinvalid_address=0x02,\n\t\t\t\tinvalid_value=0x03,\n\t\t\t\tconnection_request_failed=0x04,\n\t\t\t\ttoo_many_devices=0x05,\n\t\t\t\talready_exists=0x06,\n\t\t\t\tbusy=0x07,\n\t\t\t\tunknown_device=0x08,\n\t\t\t\tresource_error=0x09,\n\t\t\t\trequest_unavailable=0x0A,\n\t\t\t\tunsupported_parameter_value=0x0B,\n\t\t\t\twrong_pin_code=0x0C)\n\nPAIRING_ERRORS = _NamedInts(\n\t\t\t\tdevice_timeout=0x01,\n\t\t\t\tdevice_not_supported=0x02,\n\t\t\t\ttoo_many_devices=0x03,\n\t\t\t\tsequence_timeout=0x06)\n\n#\n# functions\n#\n\ndef get_register(device, name, default_number=-1):\n\tknown_register = device.registers[name]\n\tregister = known_register or default_number\n\tif register > 0:\n\t\treply = device.request(0x8100 + (register & 0xFF))\n\t\tif reply:\n\t\t\treturn reply\n\n\t\tif not known_register and device.ping():\n\t\t\t_log.warn(\"%s: failed to read '%s' from default register 0x%02X, blacklisting\", device, name, default_number)\n\t\t\tdevice.registers[-default_number] = name\n\n\ndef get_battery(device):\n\t\"\"\"Reads a device's battery level, if provided by the HID++ 1.0 protocol.\"\"\"\n\treply = get_register(device, 'battery', 0x0D)\n\tif reply:\n\t\tcharge = ord(reply[:1])\n\t\tstatus = ord(reply[2:3]) & 0xF0\n\t\tstatus = ('discharging' if status == 0x30\n\t\t\t\telse 'charging' if status == 0x50\n\t\t\t\telse 
'fully charged' if status == 0x90\n\t\t\t\telse None)\n\t\treturn charge, status\n\n\treply = get_register(device, 'battery_status', 0x07)\n\tif reply:\n\t\tlevel = ord(reply[:1])\n\t\tbattery_status = ord(reply[2:3])\n\t\tcharge = (90 if level == 7 # full\n\t\t\telse 50 if level == 5 # good\n\t\t\telse 20 if level == 3 # low\n\t\t\telse 5 if level == 1 # critical\n\t\t\telse 0 ) # wtf?\n\t\tstatus = ('charging' if battery_status == 0x25\n\t\t\telse 'discharging')\n\t\treturn charge, status\n\n\ndef get_serial(device):\n\tif device.kind is None:\n\t\tdev_id = 0x03\n\t\treceiver = device\n\telse:\n\t\tdev_id = 0x30 + device.number - 1\n\t\treceiver = device.receiver\n\n\tserial = receiver.request(0x83B5, dev_id)\n\tif serial:\n\t\treturn _strhex(serial[1:5])\n\n\ndef get_firmware(device):\n\tfirmware = []\n\n\treply = device.request(0x81F1, 0x01)\n\tif reply:\n\t\tfw_version = _strhex(reply[1:3])\n\t\tfw_version = '%s.%s' % (fw_version[0:2], fw_version[2:4])\n\t\treply = device.request(0x81F1, 0x02)\n\t\tif reply:\n\t\t\tfw_version += '.B' + _strhex(reply[1:3])\n\t\tfw = _FirmwareInfo(FIRMWARE_KIND.Firmware, '', fw_version, None)\n\t\tfirmware.append(fw)\n\n\treply = device.request(0x81F1, 0x04)\n\tif reply:\n\t\tbl_version = _strhex(reply[1:3])\n\t\tbl_version = '%s.%s' % (bl_version[0:2], bl_version[2:4])\n\t\tbl = _FirmwareInfo(FIRMWARE_KIND.Bootloader, '', bl_version, None)\n\t\tfirmware.append(bl)\n\n\treturn tuple(firmware)\n", "path": "lib/logitech/unifying_receiver/hidpp10.py"}]}
| 1,733 | 252 |
gh_patches_debug_12366
|
rasdani/github-patches
|
git_diff
|
airctic__icevision-883
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add kwargs to EfficientDet model() method
Add kwargs to EfficientDet model() method. This will allow to pass `pretrained_backbone` argument to EfficientDet `create_model_from_config()` method. That will prevent loading pretrained weights if the user wish to do so
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/models/ross/efficientdet/model.py`
Content:
```
1 __all__ = ["model"]
2
3 from icevision.imports import *
4 from icevision.utils import *
5 from icevision.models.ross.efficientdet.utils import *
6 from icevision.models.ross.efficientdet.backbones import *
7 from effdet import get_efficientdet_config, EfficientDet, DetBenchTrain, unwrap_bench
8 from effdet import create_model_from_config
9 from effdet.efficientdet import HeadNet
10
11
12 def model(
13 backbone: EfficientDetBackboneConfig,
14 num_classes: int,
15 img_size: int,
16 ) -> nn.Module:
17 """Creates the efficientdet model specified by `model_name`.
18
19 The model implementation is by Ross Wightman, original repo
20 [here](https://github.com/rwightman/efficientdet-pytorch).
21
22 # Arguments
23 backbone: Specifies the backbone to use create the model. For pretrained models, check
24 [this](https://github.com/rwightman/efficientdet-pytorch#models) table.
25 num_classes: Number of classes of your dataset (including background).
26 img_size: Image size that will be fed to the model. Must be squared and
27 divisible by 128.
28
29 # Returns
30 A PyTorch model.
31 """
32 model_name = backbone.model_name
33 config = get_efficientdet_config(model_name=model_name)
34 config.image_size = (img_size, img_size) if isinstance(img_size, int) else img_size
35
36 model_bench = create_model_from_config(
37 config,
38 bench_task="train",
39 bench_labeler=True,
40 num_classes=num_classes - 1,
41 pretrained=backbone.pretrained,
42 )
43
44 # TODO: Break down param groups for backbone
45 def param_groups_fn(model: nn.Module) -> List[List[nn.Parameter]]:
46 unwrapped = unwrap_bench(model)
47
48 layers = [
49 unwrapped.backbone,
50 unwrapped.fpn,
51 nn.Sequential(unwrapped.class_net, unwrapped.box_net),
52 ]
53 param_groups = [list(layer.parameters()) for layer in layers]
54 check_all_model_params_in_groups2(model, param_groups)
55
56 return param_groups
57
58 model_bench.param_groups = MethodType(param_groups_fn, model_bench)
59
60 return model_bench
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/icevision/models/ross/efficientdet/model.py b/icevision/models/ross/efficientdet/model.py
--- a/icevision/models/ross/efficientdet/model.py
+++ b/icevision/models/ross/efficientdet/model.py
@@ -13,6 +13,7 @@
backbone: EfficientDetBackboneConfig,
num_classes: int,
img_size: int,
+ **kwargs,
) -> nn.Module:
"""Creates the efficientdet model specified by `model_name`.
@@ -39,6 +40,7 @@
bench_labeler=True,
num_classes=num_classes - 1,
pretrained=backbone.pretrained,
+ **kwargs,
)
# TODO: Break down param groups for backbone
|
{"golden_diff": "diff --git a/icevision/models/ross/efficientdet/model.py b/icevision/models/ross/efficientdet/model.py\n--- a/icevision/models/ross/efficientdet/model.py\n+++ b/icevision/models/ross/efficientdet/model.py\n@@ -13,6 +13,7 @@\n backbone: EfficientDetBackboneConfig,\n num_classes: int,\n img_size: int,\n+ **kwargs,\n ) -> nn.Module:\n \"\"\"Creates the efficientdet model specified by `model_name`.\n \n@@ -39,6 +40,7 @@\n bench_labeler=True,\n num_classes=num_classes - 1,\n pretrained=backbone.pretrained,\n+ **kwargs,\n )\n \n # TODO: Break down param groups for backbone\n", "issue": "Add kwargs to EfficientDet model() method\nAdd kwargs to EfficientDet model() method. This will allow to pass `pretrained_backbone` argument to EfficientDet `create_model_from_config()` method. That will prevent loading pretrained weights if the user wish to do so\n", "before_files": [{"content": "__all__ = [\"model\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.models.ross.efficientdet.utils import *\nfrom icevision.models.ross.efficientdet.backbones import *\nfrom effdet import get_efficientdet_config, EfficientDet, DetBenchTrain, unwrap_bench\nfrom effdet import create_model_from_config\nfrom effdet.efficientdet import HeadNet\n\n\ndef model(\n backbone: EfficientDetBackboneConfig,\n num_classes: int,\n img_size: int,\n) -> nn.Module:\n \"\"\"Creates the efficientdet model specified by `model_name`.\n\n The model implementation is by Ross Wightman, original repo\n [here](https://github.com/rwightman/efficientdet-pytorch).\n\n # Arguments\n backbone: Specifies the backbone to use create the model. For pretrained models, check\n [this](https://github.com/rwightman/efficientdet-pytorch#models) table.\n num_classes: Number of classes of your dataset (including background).\n img_size: Image size that will be fed to the model. 
Must be squared and\n divisible by 128.\n\n # Returns\n A PyTorch model.\n \"\"\"\n model_name = backbone.model_name\n config = get_efficientdet_config(model_name=model_name)\n config.image_size = (img_size, img_size) if isinstance(img_size, int) else img_size\n\n model_bench = create_model_from_config(\n config,\n bench_task=\"train\",\n bench_labeler=True,\n num_classes=num_classes - 1,\n pretrained=backbone.pretrained,\n )\n\n # TODO: Break down param groups for backbone\n def param_groups_fn(model: nn.Module) -> List[List[nn.Parameter]]:\n unwrapped = unwrap_bench(model)\n\n layers = [\n unwrapped.backbone,\n unwrapped.fpn,\n nn.Sequential(unwrapped.class_net, unwrapped.box_net),\n ]\n param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, param_groups)\n\n return param_groups\n\n model_bench.param_groups = MethodType(param_groups_fn, model_bench)\n\n return model_bench\n", "path": "icevision/models/ross/efficientdet/model.py"}], "after_files": [{"content": "__all__ = [\"model\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.models.ross.efficientdet.utils import *\nfrom icevision.models.ross.efficientdet.backbones import *\nfrom effdet import get_efficientdet_config, EfficientDet, DetBenchTrain, unwrap_bench\nfrom effdet import create_model_from_config\nfrom effdet.efficientdet import HeadNet\n\n\ndef model(\n backbone: EfficientDetBackboneConfig,\n num_classes: int,\n img_size: int,\n **kwargs,\n) -> nn.Module:\n \"\"\"Creates the efficientdet model specified by `model_name`.\n\n The model implementation is by Ross Wightman, original repo\n [here](https://github.com/rwightman/efficientdet-pytorch).\n\n # Arguments\n backbone: Specifies the backbone to use create the model. For pretrained models, check\n [this](https://github.com/rwightman/efficientdet-pytorch#models) table.\n num_classes: Number of classes of your dataset (including background).\n img_size: Image size that will be fed to the model. Must be squared and\n divisible by 128.\n\n # Returns\n A PyTorch model.\n \"\"\"\n model_name = backbone.model_name\n config = get_efficientdet_config(model_name=model_name)\n config.image_size = (img_size, img_size) if isinstance(img_size, int) else img_size\n\n model_bench = create_model_from_config(\n config,\n bench_task=\"train\",\n bench_labeler=True,\n num_classes=num_classes - 1,\n pretrained=backbone.pretrained,\n **kwargs,\n )\n\n # TODO: Break down param groups for backbone\n def param_groups_fn(model: nn.Module) -> List[List[nn.Parameter]]:\n unwrapped = unwrap_bench(model)\n\n layers = [\n unwrapped.backbone,\n unwrapped.fpn,\n nn.Sequential(unwrapped.class_net, unwrapped.box_net),\n ]\n param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, param_groups)\n\n return param_groups\n\n model_bench.param_groups = MethodType(param_groups_fn, model_bench)\n\n return model_bench\n", "path": "icevision/models/ross/efficientdet/model.py"}]}
| 917 | 166 |
gh_patches_debug_1
|
rasdani/github-patches
|
git_diff
|
kartoza__prj.app-866
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Only one organisation can be created per account
It seems that only one organisation can be created from a login account. The folks at Camptocamp have two separate organisations (companies) and are unable to create the second organisation from their login.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/core/settings/__init__.py`
Content:
```
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django_project/core/settings/__init__.py b/django_project/core/settings/__init__.py
--- a/django_project/core/settings/__init__.py
+++ b/django_project/core/settings/__init__.py
@@ -0,0 +1 @@
+# coding=utf-8
|
{"golden_diff": "diff --git a/django_project/core/settings/__init__.py b/django_project/core/settings/__init__.py\n--- a/django_project/core/settings/__init__.py\n+++ b/django_project/core/settings/__init__.py\n@@ -0,0 +1 @@\n+# coding=utf-8\n", "issue": "Only one organisation can be created per account \nIt seems that only one organisation can be created from a login account. The folks at Camptocamp have two separate organisations (companies) and are unable to create the second organisation from their login.\r\n\n", "before_files": [{"content": "", "path": "django_project/core/settings/__init__.py"}], "after_files": [{"content": "# coding=utf-8\n", "path": "django_project/core/settings/__init__.py"}]}
| 313 | 62 |